+
+# 📝 Conteúdo
+
+- [📝 Conteúdo](#-conteúdo)
+- [📦 Introdução](#-introdução)
+- [🎨 Criar Fluxos](#-criar-fluxos)
+- [Deploy](#deploy)
+ - [Deploy usando Google Cloud Platform](#deploy-usando-google-cloud-platform)
+ - [Deploy on Railway](#deploy-on-railway)
+ - [Deploy on Render](#deploy-on-render)
+- [🖥️ Interface de Linha de Comando (CLI)](#️-interface-de-linha-de-comando-cli)
+ - [Uso](#uso)
+ - [Variáveis de Ambiente](#variáveis-de-ambiente)
+- [👋 Contribuir](#-contribuir)
+- [🌟 Contribuidores](#-contribuidores)
+- [📄 Licença](#-licença)
+
+# 📦 Introdução
+
+Você pode instalar o Langflow com pip:
+
+```shell
+# Certifique-se de ter Python >=3.10 instalado no seu sistema.
+# Instale a versão pré-lançamento (recomendada para as atualizações mais recentes)
+python -m pip install langflow --pre --force-reinstall
+
+# ou versão estável
+python -m pip install langflow -U
+```
+
+Então, execute o Langflow com:
+
+```shell
+python -m langflow run
+```
+
+Você também pode visualizar o Langflow no [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview). [Clone o Space usando este link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) para criar seu próprio workspace do Langflow em minutos.
+
+# 🎨 Criar Fluxos
+
+Criar fluxos com Langflow é fácil. Basta arrastar componentes da barra lateral para o canvas e conectá-los para começar a construir sua aplicação.
+
+Explore editando os parâmetros do prompt, agrupando componentes e construindo seus próprios componentes personalizados (Custom Components).
+
+Quando terminar, você pode exportar seu fluxo como um arquivo JSON.
+
+Carregue o fluxo com:
+
+```python
+from langflow.load import run_flow_from_json
+
+results = run_flow_from_json("path/to/flow.json", input_value="Hello, World!")
+```
+
+# Deploy
+
+## Deploy usando Google Cloud Platform
+
+Siga nosso passo a passo para fazer deploy do Langflow no Google Cloud Platform (GCP) usando o Google Cloud Shell. O guia está disponível no documento [**Langflow on Google Cloud Platform**](https://github.com/langflow-ai/langflow/blob/dev/docs/docs/deployment/gcp-deployment.md).
+
+Alternativamente, clique no botão **"Open in Cloud Shell"** abaixo para iniciar o Google Cloud Shell, clonar o repositório do Langflow e começar um **tutorial interativo** que o guiará pelo processo de configuração dos recursos necessários e deploy do Langflow no seu projeto GCP.
+
+[Open in Cloud Shell](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/langflow-ai/langflow&working_dir=scripts/gcp&shellonly=true&tutorial=walkthroughtutorial_spot.md)
+
+## Deploy on Railway
+
+Use este template para implantar o Langflow 1.0 Preview no Railway:
+
+[Deploy on Railway](https://railway.app/template/UsJ1uB?referralCode=MnPSdg)
+
+Ou este para implantar o Langflow 0.6.x:
+
+[Deploy on Railway](https://railway.app/template/JMXEWp?referralCode=MnPSdg)
+
+## Deploy on Render
+
+
+
+
+
+# 🖥️ Interface de Linha de Comando (CLI)
+
+O Langflow fornece uma interface de linha de comando (CLI) para fácil gerenciamento e configuração.
+
+## Uso
+
+Você pode executar o Langflow usando o seguinte comando:
+
+```shell
+langflow run [OPTIONS]
+```
+
+Cada opção é detalhada abaixo:
+
+- `--help`: Exibe todas as opções disponíveis.
+- `--host`: Define o host para vincular o servidor. Pode ser configurado usando a variável de ambiente `LANGFLOW_HOST`. O padrão é `127.0.0.1`.
+- `--workers`: Define o número de processos. Pode ser configurado usando a variável de ambiente `LANGFLOW_WORKERS`. O padrão é `1`.
+- `--timeout`: Define o tempo limite do worker em segundos. O padrão é `60`.
+- `--port`: Define a porta para escutar. Pode ser configurado usando a variável de ambiente `LANGFLOW_PORT`. O padrão é `7860`.
+- `--env-file`: Especifica o caminho para o arquivo .env contendo variáveis de ambiente. O padrão é `.env`.
+- `--log-level`: Define o nível de log. Pode ser configurado usando a variável de ambiente `LANGFLOW_LOG_LEVEL`. O padrão é `critical`.
+- `--components-path`: Especifica o caminho para o diretório contendo componentes personalizados. Pode ser configurado usando a variável de ambiente `LANGFLOW_COMPONENTS_PATH`. O padrão é `langflow/components`.
+- `--log-file`: Especifica o caminho para o arquivo de log. Pode ser configurado usando a variável de ambiente `LANGFLOW_LOG_FILE`. O padrão é `logs/langflow.log`.
+- `--cache`: Seleciona o tipo de cache a ser usado. As opções são `InMemoryCache` e `SQLiteCache`. Pode ser configurado usando a variável de ambiente `LANGFLOW_LANGCHAIN_CACHE`. O padrão é `SQLiteCache`.
+- `--dev/--no-dev`: Alterna o modo de desenvolvimento. O padrão é `no-dev`.
+- `--path`: Especifica o caminho para o diretório frontend contendo os arquivos de build. Esta opção é apenas para fins de desenvolvimento. Pode ser configurado usando a variável de ambiente `LANGFLOW_FRONTEND_PATH`.
+- `--open-browser/--no-open-browser`: Alterna a opção de abrir o navegador após iniciar o servidor. Pode ser configurado usando a variável de ambiente `LANGFLOW_OPEN_BROWSER`. O padrão é `open-browser`.
+- `--remove-api-keys/--no-remove-api-keys`: Alterna a opção de remover as chaves de API dos projetos salvos no banco de dados. Pode ser configurado usando a variável de ambiente `LANGFLOW_REMOVE_API_KEYS`. O padrão é `no-remove-api-keys`.
+- `--install-completion [bash|zsh|fish|powershell|pwsh]`: Instala a conclusão para o shell especificado.
+- `--show-completion [bash|zsh|fish|powershell|pwsh]`: Exibe a conclusão para o shell especificado, permitindo que você copie ou personalize a instalação.
+- `--backend-only`: Este parâmetro, com valor padrão `False`, permite executar apenas o servidor backend sem o frontend. Também pode ser configurado usando a variável de ambiente `LANGFLOW_BACKEND_ONLY`.
+- `--store`: Este parâmetro, com valor padrão `True`, ativa os recursos da loja, use `--no-store` para desativá-los. Pode ser configurado usando a variável de ambiente `LANGFLOW_STORE`.
+
+Esses parâmetros são importantes para usuários que precisam personalizar o comportamento do Langflow, especialmente em cenários de desenvolvimento ou deploy especializado.
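Por exemplo, para expor o servidor em todas as interfaces, com mais workers e logs detalhados, as opções acima podem ser combinadas assim (exemplo ilustrativo; ajuste os valores ao seu ambiente):

```shell
langflow run --host 0.0.0.0 --port 7860 --workers 2 --log-level debug
```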
+
+### Variáveis de Ambiente
+
+Você pode configurar muitas das opções de CLI usando variáveis de ambiente. Estas podem ser exportadas no seu sistema operacional ou adicionadas a um arquivo `.env` e carregadas usando a opção `--env-file`.
+
+Um arquivo de exemplo `.env` chamado `.env.example` está incluído no projeto. Copie este arquivo para um novo arquivo chamado `.env` e substitua os valores de exemplo pelas suas configurações reais. Se você estiver definindo valores tanto no seu sistema operacional quanto no arquivo `.env`, as configurações do `.env` terão precedência.
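A regra de precedência descrita acima pode ser ilustrada com um esboço mínimo em Python (hipotético; o Langflow usa sua própria lógica de carregamento, e `merge_env` abaixo existe apenas para este exemplo):

```python
def merge_env(os_environ: dict, env_file_lines: list[str]) -> dict:
    """Esboço: valores do arquivo .env sobrescrevem os do sistema operacional."""
    merged = dict(os_environ)
    for line in env_file_lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # ignora comentários e linhas em branco
        key, _, value = line.partition("=")
        merged[key.strip()] = value.strip()
    return merged

env = merge_env({"LANGFLOW_PORT": "8000"}, ["# porta do servidor", "LANGFLOW_PORT=7860"])
print(env["LANGFLOW_PORT"])  # 7860 — o valor do .env tem precedência
```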
+
+# 👋 Contribuir
+
+Aceitamos contribuições de desenvolvedores de todos os níveis para nosso projeto open-source no GitHub. Se você deseja contribuir, por favor, confira nossas [diretrizes de contribuição](./CONTRIBUTING.md) e ajude a tornar o Langflow mais acessível.
+
+---
+
+[Star History Chart](https://star-history.com/#langflow-ai/langflow&Date)
+
+# 🌟 Contribuidores
+
+[Contributors](https://github.com/langflow-ai/langflow/graphs/contributors)
+
+# 📄 Licença
+
+O Langflow é lançado sob a licença MIT. Veja o arquivo [LICENSE](LICENSE) para detalhes.
diff --git a/README.md b/README.md
index 626a472dd..68c8fde29 100644
--- a/README.md
+++ b/README.md
@@ -1,21 +1,63 @@
-# [](https://www.langflow.org)
+# [Langflow](https://www.langflow.org)
-### [Langflow](https://www.langflow.org) is a new, visual way to build, iterate and deploy AI apps.
+
+ A visual framework for building multi-agent and RAG applications
+
+
+ Open-source, Python-powered, fully customizable, LLM and vector store agnostic
+
+
+# 📝 Content
+
+- [📝 Content](#-content)
+- [📦 Get Started](#-get-started)
+- [🎨 Create Flows](#-create-flows)
+- [Deploy](#deploy)
+ - [Deploy Langflow on Google Cloud Platform](#deploy-langflow-on-google-cloud-platform)
+ - [Deploy on Railway](#deploy-on-railway)
+ - [Deploy on Render](#deploy-on-render)
+- [🖥️ Command Line Interface (CLI)](#️-command-line-interface-cli)
+ - [Usage](#usage)
+ - [Environment Variables](#environment-variables)
+- [👋 Contribute](#-contribute)
+- [🌟 Contributors](#-contributors)
+- [📄 License](#-license)
+
+# 📦 Get Started
You can install Langflow with pip:
```shell
-# Make sure you have Python 3.10 installed on your system.
-# Install the pre-release version
+# Make sure you have Python >=3.10 installed on your system.
+# Install the pre-release version (recommended for the latest updates)
python -m pip install langflow --pre --force-reinstall
# or stable version
@@ -28,9 +70,9 @@ Then, run Langflow with:
python -m langflow run
```
-You can also preview Langflow in [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview). [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true), to create your own Langflow workspace in minutes.
+You can also preview Langflow in [HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview). [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
-# 🎨 Creating Flows
+# 🎨 Create Flows
Creating flows with Langflow is easy. Simply drag components from the sidebar onto the canvas and connect them to start building your application.
@@ -46,6 +88,32 @@ from langflow.load import run_flow_from_json
results = run_flow_from_json("path/to/flow.json", input_value="Hello, World!")
```
+# Deploy
+
+## Deploy Langflow on Google Cloud Platform
+
+Follow our step-by-step guide to deploy Langflow on Google Cloud Platform (GCP) using Google Cloud Shell. The guide is available in the [**Langflow in Google Cloud Platform**](https://github.com/langflow-ai/langflow/blob/dev/docs/docs/deployment/gcp-deployment.md) document.
+
+Alternatively, click the **"Open in Cloud Shell"** button below to launch Google Cloud Shell, clone the Langflow repository, and start an **interactive tutorial** that will guide you through the process of setting up the necessary resources and deploying Langflow on your GCP project.
+
+[Open in Cloud Shell](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/langflow-ai/langflow&working_dir=scripts/gcp&shellonly=true&tutorial=walkthroughtutorial_spot.md)
+
+## Deploy on Railway
+
+Use this template to deploy Langflow 1.0 Preview on Railway:
+
+[Deploy on Railway](https://railway.app/template/UsJ1uB?referralCode=MnPSdg)
+
+Or this one to deploy Langflow 0.6.x:
+
+[Deploy on Railway](https://railway.app/template/JMXEWp?referralCode=MnPSdg)
+
+## Deploy on Render
+
+
+
+
+
# 🖥️ Command Line Interface (CLI)
Langflow provides a command-line interface (CLI) for easy management and configuration.
@@ -87,33 +155,7 @@ You can configure many of the CLI options using environment variables. These can
A sample `.env` file named `.env.example` is included with the project. Copy this file to a new file named `.env` and replace the example values with your actual settings. If you're setting values in both your OS and the `.env` file, the `.env` settings will take precedence.
-# Deployment
-
-## Deploy Langflow on Google Cloud Platform
-
-Follow our step-by-step guide to deploy Langflow on Google Cloud Platform (GCP) using Google Cloud Shell. The guide is available in the [**Langflow in Google Cloud Platform**](GCP_DEPLOYMENT.md) document.
-
-Alternatively, click the **"Open in Cloud Shell"** button below to launch Google Cloud Shell, clone the Langflow repository, and start an **interactive tutorial** that will guide you through the process of setting up the necessary resources and deploying Langflow on your GCP project.
-
-[](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/langflow-ai/langflow&working_dir=scripts/gcp&shellonly=true&tutorial=walkthroughtutorial_spot.md)
-
-## Deploy on Railway
-
-Use this template to deploy Langflow 1.0 Preview on Railway:
-
-[](https://railway.app/template/UsJ1uB?referralCode=MnPSdg)
-
-Or this one to deploy Langflow 0.6.x:
-
-[](https://railway.app/template/JMXEWp?referralCode=MnPSdg)
-
-## Deploy on Render
-
-
-
-
-
-# 👋 Contributing
+# 👋 Contribute
We welcome contributions from developers of all levels to our open-source project on GitHub. If you'd like to contribute, please check our [contributing guidelines](./CONTRIBUTING.md) and help make Langflow more accessible.
diff --git a/README.zh_CN.md b/README.zh_CN.md
new file mode 100644
index 000000000..fee764902
--- /dev/null
+++ b/README.zh_CN.md
@@ -0,0 +1,172 @@
+
+
+# [Langflow](https://www.langflow.org)
+
+
+
+# 📝 目录
+
+- [📝 目录](#-目录)
+- [📦 快速开始](#-快速开始)
+- [🎨 创建工作流](#-创建工作流)
+- [部署](#部署)
+ - [在Google Cloud Platform上部署Langflow](#在google-cloud-platform上部署langflow)
+ - [在Railway上部署](#在railway上部署)
+ - [在Render上部署](#在render上部署)
+- [🖥️ 命令行界面 (CLI)](#️-命令行界面-cli)
+ - [用法](#用法)
+ - [环境变量](#环境变量)
+- [👋 贡献](#-贡献)
+- [🌟 贡献者](#-贡献者)
+- [📄 许可证](#-许可证)
+
+# 📦 快速开始
+
+使用 pip 安装 Langflow:
+
+```shell
+# 确保您的系统已安装 Python >=3.10
+# 安装Langflow预发布版本
+python -m pip install langflow --pre --force-reinstall
+
+# 安装Langflow稳定版本
+python -m pip install langflow -U
+```
+
+然后运行Langflow:
+
+```shell
+python -m langflow run
+```
+
+您可以在[HuggingFace Spaces](https://huggingface.co/spaces/Langflow/Langflow-Preview)中在线体验 Langflow,也可以使用该链接[克隆空间](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true),在几分钟内创建您自己的 Langflow 运行工作空间。
+
+# 🎨 创建工作流
+
+使用 Langflow 来创建工作流非常简单。只需从侧边栏拖动组件到画布上,然后连接组件即可开始构建应用程序。
+
+您可以通过编辑提示参数、将组件分组到单个高级组件中以及构建您自己的自定义组件来展开探索。
+
+完成后,可以将工作流导出为 JSON 文件。
+
+然后使用以下脚本加载工作流:
+
+```python
+from langflow.load import run_flow_from_json
+
+results = run_flow_from_json("path/to/flow.json", input_value="Hello, World!")
+```
+
+# 部署
+
+## 在Google Cloud Platform上部署Langflow
+
+请按照我们的分步指南使用 Google Cloud Shell 在 Google Cloud Platform (GCP) 上部署 Langflow。该指南在 [**Langflow in Google Cloud Platform**](GCP_DEPLOYMENT.md) 文档中提供。
+
+或者,点击下面的 "Open in Cloud Shell" 按钮,启动 Google Cloud Shell,克隆 Langflow 仓库,并开始一个互动教程,该教程将指导您设置必要的资源并在 GCP 项目中部署 Langflow。
+
+[Open in Cloud Shell](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/langflow-ai/langflow&working_dir=scripts/gcp&shellonly=true&tutorial=walkthroughtutorial_spot.md)
+
+## 在Railway上部署
+
+使用此模板在 Railway 上部署 Langflow 1.0 预览版:
+
+[在 Railway 上部署](https://railway.app/template/UsJ1uB?referralCode=MnPSdg)
+
+或者使用此模板部署 Langflow 0.6.x:
+
+[在 Railway 上部署](https://railway.app/template/JMXEWp?referralCode=MnPSdg)
+
+## 在Render上部署
+
+
+
+
+
+# 🖥️ 命令行界面 (CLI)
+
+Langflow提供了一个命令行界面以便于平台的管理和配置。
+
+## 用法
+
+您可以使用以下命令运行Langflow:
+
+```shell
+langflow run [OPTIONS]
+```
+
+命令行参数的详细说明:
+
+- `--help`: 显示所有可用参数。
+- `--host`: 定义绑定服务器的主机,可以使用 `LANGFLOW_HOST` 环境变量设置,默认值为 `127.0.0.1`。
+- `--workers`: 设置工作进程的数量,可以使用 `LANGFLOW_WORKERS` 环境变量设置,默认值为 `1`。
+- `--timeout`: 设置工作进程的超时时间(秒),默认值为 `60`。
+- `--port`: 设置服务监听的端口,可以使用 `LANGFLOW_PORT` 环境变量设置,默认值为 `7860`。
+- `--config`: 定义配置文件的路径,默认值为 `config.yaml`。
+- `--env-file`: 指定包含环境变量的 `.env` 文件路径,默认值为 `.env`。
+- `--log-level`: 定义日志记录级别,可以使用 `LANGFLOW_LOG_LEVEL` 环境变量设置,默认值为 `critical`。
+- `--components-path`: 指定包含自定义组件的目录路径,可以使用 `LANGFLOW_COMPONENTS_PATH` 环境变量设置,默认值为 `langflow/components`。
+- `--log-file`: 指定日志文件的路径,可以使用 `LANGFLOW_LOG_FILE` 环境变量设置,默认值为 `logs/langflow.log`。
+- `--cache`: 选择要使用的缓存类型,可选项为 `InMemoryCache` 和 `SQLiteCache`,可以使用 `LANGFLOW_LANGCHAIN_CACHE` 环境变量设置,默认值为 `SQLiteCache`。
+- `--dev/--no-dev`: 切换开发模式,默认值为 `no-dev`(非开发模式)。
+- `--path`: 指定包含前端构建文件的目录路径,此参数仅用于开发目的,可以使用 `LANGFLOW_FRONTEND_PATH` 环境变量设置。
+- `--open-browser/--no-open-browser`: 切换启动服务器后是否打开浏览器,可以使用 `LANGFLOW_OPEN_BROWSER` 环境变量设置,默认值为 `open-browser`(启动后打开浏览器)。
+- `--remove-api-keys/--no-remove-api-keys`: 切换是否从数据库中保存的项目中移除 API 密钥,可以使用 `LANGFLOW_REMOVE_API_KEYS` 环境变量设置,默认值为 `no-remove-api-keys`。
+- `--install-completion [bash|zsh|fish|powershell|pwsh]`: 为指定的 shell 安装自动补全。
+- `--show-completion [bash|zsh|fish|powershell|pwsh]`: 显示指定 shell 的自动补全,使您可以复制或自定义安装。
+- `--backend-only`: 此参数默认为 `False`,允许仅运行后端服务器而不运行前端,也可以使用 `LANGFLOW_BACKEND_ONLY` 环境变量设置。
+- `--store`: 此参数默认为 `True`,启用存储功能,使用 `--no-store` 可禁用它,可以使用 `LANGFLOW_STORE` 环境变量配置。
+
+这些参数对于需要定制 Langflow 行为的用户尤其重要,特别是在开发或者特殊部署场景中。
+
+### 环境变量
+
+您可以使用环境变量配置许多 CLI 参数选项。这些变量可以在操作系统中导出,或添加到 `.env` 文件中,并使用 `--env-file` 参数加载。
+
+项目中包含一个名为 `.env.example` 的示例 `.env` 文件。将此文件复制为新文件 `.env`,并用实际设置值替换示例值。如果同时在操作系统和 `.env` 文件中设置值,则 `.env` 设置优先。
+
+# 👋 贡献
+
+我们欢迎各级开发者为我们的 GitHub 开源项目做出贡献。如果您想参与贡献,请查看我们的[贡献指南](./CONTRIBUTING.md),帮助 Langflow 变得更加易用。
+
+---
+
+[Star History Chart](https://star-history.com/#langflow-ai/langflow&Date)
+
+# 🌟 贡献者
+
+[贡献者](https://github.com/langflow-ai/langflow/graphs/contributors)
+
+# 📄 许可证
+
+Langflow 以 MIT 许可证发布。有关详细信息,请参阅 [LICENSE](LICENSE) 文件。
diff --git a/docker/build_and_push.Dockerfile b/docker/build_and_push.Dockerfile
index 3a34db188..29f9294a6 100644
--- a/docker/build_and_push.Dockerfile
+++ b/docker/build_and_push.Dockerfile
@@ -1,21 +1,21 @@
-
-
# syntax=docker/dockerfile:1
# Keep this syntax directive! It's used to enable Docker BuildKit
-# Based on https://github.com/python-poetry/poetry/discussions/1879?sort=top#discussioncomment-216865
-# but I try to keep it updated (see history)
+FROM node:20-bookworm-slim as builder-node
+WORKDIR /app
+COPY src/frontend/package.json src/frontend/package-lock.json ./
+RUN npm install
+COPY src/frontend/ ./
+RUN npm run build
+
################################
-# PYTHON-BASE
-# Sets up all our shared environment variables
+# BUILDER-BASE
+# Used to build deps + create our virtual environment
################################
-FROM python:3.12-slim as python-base
+FROM python:3.12-slim as builder-base
-# python
-ENV PYTHONUNBUFFERED=1 \
- # prevents python creating .pyc files
- PYTHONDONTWRITEBYTECODE=1 \
+ENV PYTHONDONTWRITEBYTECODE=1 \
\
# pip
PIP_DISABLE_PIP_VERSION_CHECK=on \
@@ -37,56 +37,48 @@ ENV PYTHONUNBUFFERED=1 \
PYSETUP_PATH="/opt/pysetup" \
VENV_PATH="/opt/pysetup/.venv"
-
-# prepend poetry and venv to path
-ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"
-
-
-################################
-# BUILDER-BASE
-# Used to build deps + create our virtual environment
-################################
-FROM python-base as builder-base
-
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
# deps for installing poetry
curl \
# deps for building python deps
- build-essential \
- # npm
- npm \
+ build-essential npm \
# gcc
gcc \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
-
-
-# Now we need to copy the entire project into the image
-WORKDIR /app
-COPY pyproject.toml poetry.lock ./
-COPY src ./src
-COPY scripts ./scripts
-COPY Makefile ./
-COPY README.md ./
RUN --mount=type=cache,target=/root/.cache \
curl -sSL https://install.python-poetry.org | python3 -
-RUN useradd -m -u 1000 user && \
- mkdir -p /app/langflow && \
- chown -R user:user /app && \
- chmod -R u+w /app/langflow
-# Update PATH with home/user/.local/bin
-ENV PATH="/home/user/.local/bin:${PATH}"
-RUN python -m pip install requests && cd ./scripts && python update_dependencies.py
-RUN $POETRY_HOME/bin/poetry lock
-RUN $POETRY_HOME/bin/poetry build
+WORKDIR /app
+COPY pyproject.toml poetry.lock README.md ./
+COPY src/ ./src
+COPY scripts/ ./scripts
+RUN python -m pip install requests --user && cd ./scripts && python update_dependencies.py
+COPY --from=builder-node /app/build ./src/backend/base/langflow/frontend
+RUN $POETRY_HOME/bin/poetry lock --no-update \
+ && $POETRY_HOME/bin/poetry build -f wheel \
+ && $POETRY_HOME/bin/poetry run pip install dist/*.whl --force-reinstall
+
+################################
+# RUNTIME
+# Setup user, utilities and copy the virtual environment only
+################################
+FROM python:3.12-slim as runtime
+
+LABEL org.opencontainers.image.title=langflow
+LABEL org.opencontainers.image.authors=['Langflow']
+LABEL org.opencontainers.image.licenses=MIT
+LABEL org.opencontainers.image.url=https://github.com/langflow-ai/langflow
+LABEL org.opencontainers.image.source=https://github.com/langflow-ai/langflow
+
+RUN useradd user -u 1000 -g 0 --no-create-home --home-dir /app/data
+COPY --from=builder-base --chown=1000 /app/.venv /app/.venv
+ENV PATH="/app/.venv/bin:${PATH}"
-# Copy virtual environment and built .tar.gz from builder base
USER user
-# Install the package from the .tar.gz
-RUN python -m pip install /app/dist/*.tar.gz --user
+WORKDIR /app
ENTRYPOINT ["python", "-m", "langflow", "run"]
-CMD ["--host", "0.0.0.0", "--port", "7860"]
+CMD ["--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
diff --git a/docker/build_and_push_backend.Dockerfile b/docker/build_and_push_backend.Dockerfile
new file mode 100644
index 000000000..8b82da524
--- /dev/null
+++ b/docker/build_and_push_backend.Dockerfile
@@ -0,0 +1,8 @@
+# syntax=docker/dockerfile:1
+# Keep this syntax directive! It's used to enable Docker BuildKit
+
+ARG LANGFLOW_IMAGE
+FROM $LANGFLOW_IMAGE
+
+RUN rm -rf /app/.venv/langflow/frontend
+CMD ["--host", "0.0.0.0", "--port", "7860", "--backend-only"]
diff --git a/docker/frontend/build_and_push_frontend.Dockerfile b/docker/frontend/build_and_push_frontend.Dockerfile
new file mode 100644
index 000000000..e954a801e
--- /dev/null
+++ b/docker/frontend/build_and_push_frontend.Dockerfile
@@ -0,0 +1,27 @@
+# syntax=docker/dockerfile:1
+# Keep this syntax directive! It's used to enable Docker BuildKit
+
+################################
+# BUILDER-BASE
+################################
+FROM node:lts-bookworm-slim as builder-base
+COPY src/frontend /frontend
+
+RUN cd /frontend && npm install && npm run build
+
+################################
+# RUNTIME
+################################
+FROM nginxinc/nginx-unprivileged:stable-bookworm-perl as runtime
+
+LABEL org.opencontainers.image.title=langflow-frontend
+LABEL org.opencontainers.image.authors=['Langflow']
+LABEL org.opencontainers.image.licenses=MIT
+LABEL org.opencontainers.image.url=https://github.com/langflow-ai/langflow
+LABEL org.opencontainers.image.source=https://github.com/langflow-ai/langflow
+
+COPY --from=builder-base --chown=nginx /frontend/build /usr/share/nginx/html
+COPY --chown=nginx ./docker/frontend/nginx.conf /etc/nginx/conf.d/default.conf
+COPY --chown=nginx ./docker/frontend/start-nginx.sh /start-nginx.sh
+RUN chmod +x /start-nginx.sh
+ENTRYPOINT ["/start-nginx.sh"]
\ No newline at end of file
diff --git a/docker/frontend/nginx.conf b/docker/frontend/nginx.conf
new file mode 100644
index 000000000..d5ecfce43
--- /dev/null
+++ b/docker/frontend/nginx.conf
@@ -0,0 +1,22 @@
+server {
+ gzip on;
+ gzip_comp_level 2;
+ gzip_min_length 1000;
+ gzip_types text/xml text/css;
+ gzip_http_version 1.1;
+ gzip_vary on;
+ gzip_disable "MSIE [4-6] \.";
+
+ listen 80;
+
+ location / {
+ root /usr/share/nginx/html;
+ index index.html index.htm;
+ try_files $uri $uri/ /index.html =404;
+ }
+ location /api {
+ proxy_pass __BACKEND_URL__;
+ }
+
+ include /etc/nginx/extra-conf.d/*.conf;
+}
diff --git a/docker/frontend/start-nginx.sh b/docker/frontend/start-nginx.sh
new file mode 100644
index 000000000..3607adf7d
--- /dev/null
+++ b/docker/frontend/start-nginx.sh
@@ -0,0 +1,16 @@
+#!/bin/sh
+set -e
+trap 'kill -TERM $PID' TERM INT
+if [ -z "$BACKEND_URL" ]; then
+ BACKEND_URL="$1"
+fi
+if [ -z "$BACKEND_URL" ]; then
+ echo "BACKEND_URL must be set as an environment variable or as first parameter. (e.g. http://localhost:7860)"
+ exit 1
+fi
+sed -i "s|__BACKEND_URL__|$BACKEND_URL|g" /etc/nginx/conf.d/default.conf
+cat /etc/nginx/conf.d/default.conf
+
+
+# Start nginx
+exec nginx -g 'daemon off;'
diff --git a/docker/render.pre-release.Dockerfile b/docker/render.pre-release.Dockerfile
new file mode 100644
index 000000000..d3aa9cbde
--- /dev/null
+++ b/docker/render.pre-release.Dockerfile
@@ -0,0 +1 @@
+FROM langflowai/langflow:1.0-alpha
diff --git a/docs/docs/administration/api.mdx b/docs/docs/administration/api.mdx
index 103c43f81..115cdc666 100644
--- a/docs/docs/administration/api.mdx
+++ b/docs/docs/administration/api.mdx
@@ -10,8 +10,7 @@ Langflow provides an API key functionality that allows users to access their ind
The default user and password are set using the LANGFLOW_SUPERUSER and
LANGFLOW_SUPERUSER_PASSWORD environment variables.
-The default values are
-langflow and langflow, respectively.
+The default values are `langflow` and `langflow`, respectively.
diff --git a/docs/docs/administration/cli.mdx b/docs/docs/administration/cli.mdx
index a2a41adcd..41bc76de3 100644
--- a/docs/docs/administration/cli.mdx
+++ b/docs/docs/administration/cli.mdx
@@ -1,62 +1,51 @@
# Command Line Interface (CLI)
-## Overview
-
Langflow's Command Line Interface (CLI) is a powerful tool that allows you to interact with the Langflow server from the command line. The CLI provides a wide range of commands to help you shape Langflow to your needs.
-Running the CLI without any arguments will display a list of available commands and options.
+The available commands are listed below. See each command's section on this page for its parameters.
+
+- [langflow](#overview)
+- [langflow api-key](#langflow-api-key)
+- [langflow copy-db](#langflow-copy-db)
+- [langflow migration](#langflow-migration)
+- [langflow run](#langflow-run)
+- [langflow superuser](#langflow-superuser)
+
+## Overview
+
+Running the CLI without any arguments displays a list of available options and commands.
```bash
-python -m langflow run --help
+langflow
# or
-python -m langflow run
+langflow --help
+# or
+python -m langflow
```
-Each option for `run` command are detailed below:
+| Command | Description |
+| ----------- | ---------------------------------------------------------------------- |
+| `api-key` | Creates an API key for the default superuser if AUTO_LOGIN is enabled. |
+| `copy-db` | Copy the database files to the current directory (`which langflow`). |
+| `migration` | Run or test migrations. |
+| `run`       | Run Langflow.                                                           |
+| `superuser` | Create a superuser. |
-- `--help`: Displays all available options.
-- `--host`: Defines the host to bind the server to. Can be set using the `LANGFLOW_HOST` environment variable. The default is `127.0.0.1`.
-- `--workers`: Sets the number of worker processes. Can be set using the `LANGFLOW_WORKERS` environment variable. The default is `1`.
-- `--timeout`: Sets the worker timeout in seconds. The default is `60`.
-- `--port`: Sets the port to listen on. Can be set using the `LANGFLOW_PORT` environment variable. The default is `7860`.
-- `--env-file`: Specifies the path to the .env file containing environment variables. The default is `.env`.
-- `--log-level`: Defines the logging level. Can be set using the `LANGFLOW_LOG_LEVEL` environment variable. The default is `critical`.
-- `--components-path`: Specifies the path to the directory containing custom components. Can be set using the `LANGFLOW_COMPONENTS_PATH` environment variable. The default is `langflow/components`.
-- `--log-file`: Specifies the path to the log file. Can be set using the `LANGFLOW_LOG_FILE` environment variable. The default is `logs/langflow.log`.
-- `--cache`: Select the type of cache to use. Options are `InMemoryCache` and `SQLiteCache`. Can be set using the `LANGFLOW_LANGCHAIN_CACHE` environment variable. The default is `SQLiteCache`.
-- `--dev/--no-dev`: Toggles the development mode. The default is `no-dev`.
-- `--path`: Specifies the path to the frontend directory containing build files. This option is for development purposes only. Can be set using the `LANGFLOW_FRONTEND_PATH` environment variable.
-- `--open-browser/--no-open-browser`: Toggles the option to open the browser after starting the server. Can be set using the `LANGFLOW_OPEN_BROWSER` environment variable. The default is `open-browser`.
-- `--remove-api-keys/--no-remove-api-keys`: Toggles the option to remove API keys from the projects saved in the database. Can be set using the `LANGFLOW_REMOVE_API_KEYS` environment variable. The default is `no-remove-api-keys`.
-- `--install-completion [bash|zsh|fish|powershell|pwsh]`: Installs completion for the specified shell.
-- `--show-completion [bash|zsh|fish|powershell|pwsh]`: Shows completion for the specified shell, allowing you to copy it or customize the installation.
-- `--backend-only`: This parameter, with a default value of `False`, allows running only the backend server without the frontend. It can also be set using the `LANGFLOW_BACKEND_ONLY` environment variable.
-- `--store`: This parameter, with a default value of `True`, enables the store features, use `--no-store` to deactivate it. It can be configured using the `LANGFLOW_STORE` environment variable.
+### Options
-These parameters are important for users who need to customize the behavior of Langflow, especially in development or specialized deployment scenarios.
+| Option | Description |
+| ---------------------- | -------------------------------------------------------------------------------- |
+| `--install-completion` | Install completion for the current shell. |
+| `--show-completion` | Show completion for the current shell, to copy it or customize the installation. |
+| `--help` | Show this message and exit. |
-### API Key Command
+## langflow api-key
-The `api-key` command allows you to create an API key for accessing Langflow's API when `LANGFLOW_AUTO_LOGIN` is set to `True`.
-
-```bash
-python -m langflow api-key --help
-
- Usage: langflow api-key [OPTIONS]
-
- Creates an API key for the default superuser if AUTO_LOGIN is enabled.
- Args: log_level (str, optional): Logging level. Defaults to "error".
- Returns: None
-
-╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
-│ --log-level TEXT Logging level. [env var: LANGFLOW_LOG_LEVEL] [default: error] │
-│ --help Show this message and exit. │
-╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
-```
-
-Once you run the `api-key` command, it will create an API key for the default superuser if `LANGFLOW_AUTO_LOGIN` is set to `True`.
+Run the `api-key` command to create an API key for the default superuser if `LANGFLOW_AUTO_LOGIN` is set to `True`.
```bash
+langflow api-key
+# or
python -m langflow api-key
╭─────────────────────────────────────────────────────────────────────╮
│ API Key Created Successfully: │
@@ -67,11 +56,98 @@ python -m langflow api-key
│ Make sure to store it in a secure location. │
│ │
│ The API key has been copied to your clipboard. Cmd + V to paste it. │
-╰─────────────────────────────────────────────────────────────────────╯
+╰─────────────────────────────────────────────────────────────────────╯
```
-### Environment Variables
+### Options
+
+| Option | Type | Description |
+| ----------- | ---- | ------------------------------------------------------------- |
+| --log-level | TEXT | Logging level. [env var: LANGFLOW_LOG_LEVEL] [default: error] |
+| --help | | Show this message and exit. |
+
+## langflow copy-db
+
+Run the `copy-db` command to copy the cached `langflow.db` and `langflow-pre.db` database files to the current directory.
+
+If the files exist in the cache directory, they will be copied to the same directory as `__main__.py`, which can be found with `which langflow`.
+
+### Options
+
+None.
+
+## langflow migration
+
+Run or test migrations with the [Alembic](https://pypi.org/project/alembic/) database tool.
+
+```bash
+langflow migration
+# or
+python -m langflow migration
+```
+
+### Options
+
+| Option | Description |
+| ------------------- | -------------------------------------------------------------------------------------------------------------------------- |
+| `--test, --no-test` | Run migrations in test mode. [default: test] |
+| `--fix, --no-fix` | Fix migrations. This is a destructive operation, and should only be used if you know what you are doing. [default: no-fix] |
+| `--help` | Show this message and exit. |
+
+## langflow run
+
+Run Langflow.
+
+```bash
+langflow run
+# or
+python -m langflow run
+```
+
+### Options
+
+| Option | Description |
+| ---------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `--help` | Displays all available options. |
+| `--host` | Defines the host to bind the server to. Can be set using the `LANGFLOW_HOST` environment variable. The default is `127.0.0.1`. |
+| `--workers` | Sets the number of worker processes. Can be set using the `LANGFLOW_WORKERS` environment variable. The default is `1`. |
+| `--timeout` | Sets the worker timeout in seconds. The default is `60`. |
+| `--port` | Sets the port to listen on. Can be set using the `LANGFLOW_PORT` environment variable. The default is `7860`. |
+| `--env-file` | Specifies the path to the .env file containing environment variables. The default is `.env`. |
+| `--log-level` | Defines the logging level. Can be set using the `LANGFLOW_LOG_LEVEL` environment variable. The default is `critical`. |
+| `--components-path` | Specifies the path to the directory containing custom components. Can be set using the `LANGFLOW_COMPONENTS_PATH` environment variable. The default is `langflow/components`. |
+| `--log-file` | Specifies the path to the log file. Can be set using the `LANGFLOW_LOG_FILE` environment variable. The default is `logs/langflow.log`. |
+| `--cache` | Select the type of cache to use. Options are `InMemoryCache` and `SQLiteCache`. Can be set using the `LANGFLOW_LANGCHAIN_CACHE` environment variable. The default is `SQLiteCache`. |
+| `--dev`/`--no-dev` | Toggles the development mode. The default is `no-dev`. |
+| `--path` | Specifies the path to the frontend directory containing build files. This option is for development purposes only. Can be set using the `LANGFLOW_FRONTEND_PATH` environment variable. |
+| `--open-browser`/`--no-open-browser` | Toggles the option to open the browser after starting the server. Can be set using the `LANGFLOW_OPEN_BROWSER` environment variable. The default is `open-browser`. |
+| `--remove-api-keys`/`--no-remove-api-keys` | Toggles the option to remove API keys from the projects saved in the database. Can be set using the `LANGFLOW_REMOVE_API_KEYS` environment variable. The default is `no-remove-api-keys`. |
+| `--install-completion [bash\|zsh\|fish\|powershell\|pwsh]` | Installs completion for the specified shell. |
+| `--show-completion [bash\|zsh\|fish\|powershell\|pwsh]` | Shows completion for the specified shell, allowing you to copy it or customize the installation. |
+| `--backend-only` | This parameter, with a default value of `False`, allows running only the backend server without the frontend. It can also be set using the `LANGFLOW_BACKEND_ONLY` environment variable. For more, see [Backend-only](../deployment/backend-only.md). |
+| `--store` | This parameter, with a default value of `True`, enables the store features; use `--no-store` to deactivate it. It can be configured using the `LANGFLOW_STORE` environment variable. |
+
+#### Environment Variables
You can configure many of the CLI options using environment variables. These can be exported in your operating system or added to a `.env` file and loaded using the `--env-file` option.
A sample `.env` file named `.env.example` is included with the project. Copy this file to a new file named `.env` and replace the example values with your actual settings. If you're setting values in both your OS and the `.env` file, the `.env` settings will take precedence.
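
The precedence rule can be illustrated with a minimal loader sketch (an assumption for illustration only — Langflow's actual loading code may differ): values read from the `.env` file overwrite any matching values already present in the OS environment.

```python
import os

def load_env_file(path: str) -> None:
    """Minimal .env loader sketch: file values override existing OS settings,
    mirroring the precedence described above."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and lines without an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()
```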
+
+## langflow superuser
+
+Create a superuser for Langflow.
+
+```bash
+langflow superuser
+# or
+python -m langflow superuser
+```
+
+### Options
+
+| Option | Type | Description |
+| ------------- | ---- | ------------------------------------------------------------- |
+| `--username` | TEXT | Username for the superuser. [default: None] [required] |
+| `--password` | TEXT | Password for the superuser. [default: None] [required] |
+| `--log-level` | TEXT | Logging level. [env var: LANGFLOW_LOG_LEVEL] [default: error] |
+| `--help` | | Show this message and exit. |
diff --git a/docs/docs/administration/custom-component.mdx b/docs/docs/administration/custom-component.mdx
index e82c56851..02a137d07 100644
--- a/docs/docs/administration/custom-component.mdx
+++ b/docs/docs/administration/custom-component.mdx
@@ -74,11 +74,6 @@ class DocumentProcessor(CustomComponent):
-
- Check out [FlowRunner Component](../examples/flow-runner) for a more complex
- example.
-
-
---
## Rules
diff --git a/docs/docs/administration/global-env.mdx b/docs/docs/administration/global-env.mdx
index c23ca8dd1..51e5d633e 100644
--- a/docs/docs/administration/global-env.mdx
+++ b/docs/docs/administration/global-env.mdx
@@ -1,31 +1,39 @@
+import ThemedImage from "@theme/ThemedImage";
+import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
-import Admonition from "@theme/Admonition";
import ReactPlayer from "react-player";
+import Admonition from "@theme/Admonition";
-# Global Environment Variables
+# Global Variables
-Langflow 1.0 alpha includes the option to add **Global Environment Variables** for your application.
+Global Variables are a useful feature of Langflow, allowing you to define reusable variables accessed from any Text field in your project.
-## Add a global variable to a project
+## TL;DR
-In this example, you'll add the `openai_api_key` credential as a global environment variable to the **Basic Prompting** starter project.
+- Global Variables are reusable variables accessible from any Text field in your project.
+- To create one, click the 🌐 button in a Text field and then **+ Add New Variable**.
+- Define the **Name**, **Type**, and **Value** of the variable.
+- Click **Save Variable** to create it.
+- All Credential Global Variables are encrypted and accessible only by you.
+- Set _`LANGFLOW_STORE_ENVIRONMENT_VARIABLES`_ to _`true`_ in your `.env` file to add all variables in _`LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT`_ to your user's Global Variables.
-For more information on the starter flow, see [Basic prompting](../starter-projects/basic-prompting.mdx).
+## Creating and Adding a Global Variable
-1. From the Langflow dashboard, click **New Project**.
-2. Select **Basic Prompting**.
+To create and add a global variable, click the 🌐 button in a Text field, and then click **+ Add New Variable**.
-The **Basic Prompting** flow is created.
+Text fields are fields where you type text directly, without opening a Text area; they are identified by the 🌐 icon.
-3. To create an environment variable for the **OpenAI** component:
- 1. In the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
- 2. In the **Variable Name** field, enter `openai_api_key`.
- 3. In the **Value** field, paste your OpenAI API Key (`sk-...`).
- 4. For the variable **Type**, select **Credential**.
- 5. In the **Apply to Fields** field, select **OpenAI API Key** to apply this variable to all fields named **OpenAI API Key**.
- 6. Click **Save Variable**.
+For example, to create an environment variable for the **OpenAI** component:
+
+1. In the **OpenAI API Key** text field, click the 🌐 button, then **Add New Variable**.
+2. Enter `openai_api_key` in the **Variable Name** field.
+3. Paste your OpenAI API Key (`sk-...`) in the **Value** field.
+4. Select **Credential** for the **Type**.
+5. Choose **OpenAI API Key** in the **Apply to Fields** field to apply this variable to all fields named **OpenAI API Key**.
+6. Click **Save Variable**.
You now have an `openai_api_key` global environment variable for your Langflow project.
+Subsequently, clicking the 🌐 button in a Text field will display the new variable in the dropdown.
You can also create global variables in **Settings** > **Variables and
@@ -41,10 +49,55 @@ You now have a `openai_api_key` global environment variable for your Langflow pr
style={{ width: "40%", margin: "20px auto" }}
/>
-4. To view and manage your project's global environment variables, visit **Settings** > **Variables and Secrets**.
+To view and manage your project's global environment variables, visit **Settings** > **Variables and Secrets**.
For more on variables in HuggingFace Spaces, see [Managing Secrets](https://huggingface.co/docs/hub/spaces-overview#managing-secrets).
+{/* All variables are encrypted */}
+
+
+ All Credential Global Variables are encrypted and accessible only by you.
+
+
+## Configuring Environment Variables in your .env file
+
+Setting `LANGFLOW_STORE_ENVIRONMENT_VARIABLES` to `true` in your `.env` file (default) adds all variables in `LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT` to your user's Global Variables.
+
+These variables are accessible like any other Global Variable.
+
+
+ To prevent this behavior, set `LANGFLOW_STORE_ENVIRONMENT_VARIABLES` to
+ `false` in your `.env` file.
+
+
+You can specify variables to get from the environment by listing them in `LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT`.
+
+Specify variables as a comma-separated list (e.g., _`"VARIABLE1, VARIABLE2"`_) or a JSON-encoded string (e.g., _`'["VARIABLE1", "VARIABLE2"]'`_).
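
Both accepted formats can be handled with a few lines of Python. This is a sketch of the parsing rule as described above, not Langflow's actual code:

```python
import json

def parse_variable_list(raw: str) -> list[str]:
    """Parse a comma-separated list or a JSON-encoded array of variable names."""
    raw = raw.strip()
    if raw.startswith("["):
        # JSON-encoded form, e.g. '["VARIABLE1", "VARIABLE2"]'
        return [v.strip() for v in json.loads(raw)]
    # Comma-separated form, e.g. "VARIABLE1, VARIABLE2"
    return [v.strip() for v in raw.split(",") if v.strip()]
```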
+
+The default list of variables includes:
+
+- ANTHROPIC_API_KEY
+- ASTRA_DB_API_ENDPOINT
+- ASTRA_DB_APPLICATION_TOKEN
+- AZURE_OPENAI_API_KEY
+- AZURE_OPENAI_API_DEPLOYMENT_NAME
+- AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
+- AZURE_OPENAI_API_INSTANCE_NAME
+- AZURE_OPENAI_API_VERSION
+- COHERE_API_KEY
+- GOOGLE_API_KEY
+- GROQ_API_KEY
+- HUGGINGFACEHUB_API_TOKEN
+- OPENAI_API_KEY
+- PINECONE_API_KEY
+- SEARCHAPI_API_KEY
+- SERPAPI_API_KEY
+- UPSTASH_VECTOR_REST_URL
+- UPSTASH_VECTOR_REST_TOKEN
+- VECTARA_CUSTOMER_ID
+- VECTARA_CORPUS_ID
+- VECTARA_API_KEY
+
## Video
- Read the [Custom Component Guidelines](../administration/custom-component) for detailed information on custom components.
+ Read the [Custom Component Guidelines](../administration/custom-component) for
+ detailed information on custom components.
Custom components let you extend Langflow by creating reusable and configurable components from a Python script.
@@ -31,57 +32,60 @@ This class is the foundation for creating custom components. It allows users to
The following types are supported in the build method:
-| Supported Types |
-| --------------------------------------------------------- |
-| _`str`_, _`int`_, _`float`_, _`bool`_, _`list`_, _`dict`_ |
-| _`langflow.field_typing.NestedDict`_ |
-| _`langflow.field_typing.Prompt`_ |
-| _`langchain.chains.base.Chain`_ |
-| _`langchain.PromptTemplate`_ |
+| Supported Types |
+| ----------------------------------------------------------------- |
+| _`str`_, _`int`_, _`float`_, _`bool`_, _`list`_, _`dict`_ |
+| _`langflow.field_typing.NestedDict`_ |
+| _`langflow.field_typing.Prompt`_ |
+| _`langchain.chains.base.Chain`_ |
+| _`langchain.PromptTemplate`_ |
| _`from langchain.schema.language_model import BaseLanguageModel`_ |
-| _`langchain.Tool`_ |
-| _`langchain.document_loaders.base.BaseLoader`_ |
-| _`langchain.schema.Document`_ |
-| _`langchain.text_splitters.TextSplitter`_ |
-| _`langchain.vectorstores.base.VectorStore`_ |
-| _`langchain.embeddings.base.Embeddings`_ |
-| _`langchain.schema.BaseRetriever`_ |
+| _`langchain.Tool`_ |
+| _`langchain.document_loaders.base.BaseLoader`_ |
+| _`langchain.schema.Document`_ |
+| _`langchain.text_splitters.TextSplitter`_ |
+| _`langchain.vectorstores.base.VectorStore`_ |
+| _`langchain.embeddings.base.Embeddings`_ |
+| _`langchain.schema.BaseRetriever`_ |
The difference between _`dict`_ and _`langflow.field_typing.NestedDict`_ is that one adds a simple key-value pair field, while the other opens a more robust dictionary editor.
- Use the `Prompt` type by adding **kwargs to the build method.
- If you want to add the values of the variables to the template you defined, format the `PromptTemplate` inside the `CustomComponent` class.
+ Use the `Prompt` type by adding `**kwargs` to the build method. If you want to
+ add the values of the variables to the template you defined, format the
+ `PromptTemplate` inside the `CustomComponent` class.
- Use base Python types without a handle by default. To add handles, use the `input_types` key in the `build_config` method.
+ Use base Python types without a handle by default. To add handles, use the
+ `input_types` key in the `build_config` method.
**build_config:** Defines the configuration fields of the component. This method returns a dictionary where each key represents a field name and each value defines the field's behavior.
Supported keys for configuring fields:
-| Key | Description |
-| --------------------- | --------------------------------------------------- |
-| `is_list` | Boolean indicating if the field can hold multiple values. |
-| `options` | Dropdown menu options. |
-| `multiline` | Boolean indicating if a field allows multiline input. |
-| `input_types` | Allows connection handles for string fields. |
-| `display_name` | Field name displayed in the UI. |
-| `advanced` | Hides the field in the default UI view. |
-| `password` | Masks input, useful for sensitive data. |
-| `required` | Overrides the default behavior to make a field mandatory. |
-| `info` | Tooltip for the field. |
-| `file_types` | Accepted file types, useful for file fields. |
-| `range_spec` | Defines valid ranges for float fields. |
-| `title_case` | Boolean that controls field name capitalization. |
-| `refresh_button` | Adds a refresh button that updates field values. |
-| `real_time_refresh` | Updates the configuration as field values change. |
-| `field_type` | Automatically set based on the build method's type hint. |
+| Key | Description |
+| ------------------- | --------------------------------------------------------- |
+| `is_list` | Boolean indicating if the field can hold multiple values. |
+| `options` | Dropdown menu options. |
+| `multiline` | Boolean indicating if a field allows multiline input. |
+| `input_types` | Allows connection handles for string fields. |
+| `display_name` | Field name displayed in the UI. |
+| `advanced` | Hides the field in the default UI view. |
+| `password` | Masks input, useful for sensitive data. |
+| `required` | Overrides the default behavior to make a field mandatory. |
+| `info` | Tooltip for the field. |
+| `file_types` | Accepted file types, useful for file fields. |
+| `range_spec` | Defines valid ranges for float fields. |
+| `title_case` | Boolean that controls field name capitalization. |
+| `refresh_button` | Adds a refresh button that updates field values. |
+| `real_time_refresh` | Updates the configuration as field values change. |
+| `field_type` | Automatically set based on the build method's type hint. |
- Use the `update_build_config` method to dynamically update configurations based on field values.
+ Use the `update_build_config` method to dynamically update configurations
+ based on field values.
## Additional methods and attributes
@@ -99,8 +103,3 @@ The `CustomComponent` class also provides helpful methods for specific tasks (e.
- `status`: Shows values from the `build` method, useful for debugging.
- `field_order`: Controls the display order of fields.
- `icon`: Sets the canvas display icon.
-
-
- Check out the [FlowRunner](../examples/flow-runner) example to understand how to call a flow from a custom component.
-
-
diff --git a/docs/docs/components/inputs-and-outputs.mdx b/docs/docs/components/inputs-and-outputs.mdx
new file mode 100644
index 000000000..2a624221a
--- /dev/null
+++ b/docs/docs/components/inputs-and-outputs.mdx
@@ -0,0 +1,161 @@
+import Admonition from "@theme/Admonition";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
+
+# Inputs and Outputs
+
+TL;DR: Inputs and Outputs are a category of components used to define where data enters and leaves your flow.
+They also dynamically change the Playground and can be renamed to facilitate building and maintaining your flows.
+
+## Inputs
+
+Inputs are components used to define where data enters your flow. They can receive data from the user, a database, or any other source that can be converted to Text or Record.
+
+The difference between Chat Input and other Input components is the output format, the number of configurable fields, and the way they are displayed in the Playground.
+
+Chat Input components can output `Text` or `Record`. When you want to pass the sender name or sender to the next component, use the `Record` output. To pass only the message, use the `Text` output; this is useful when saving the message to a database or a memory system like Zep.
+
+You can find out more about Chat Input and other Inputs [here](#chat-input).
+
+### Chat Input
+
+This component collects user input from the chat.
+
+**Parameters**
+
+- **Sender Type:** Specifies the sender type. Defaults to `User`. Options are `Machine` and `User`.
+- **Sender Name:** Specifies the name of the sender. Defaults to `User`.
+- **Message:** Specifies the message text. It is a multiline text input.
+- **Session ID:** Specifies the session ID of the chat history. If provided, the message will be saved in the Message History.
+
+
+
+ If `As Record` is `true` and the `Message` is a `Record`, the data of the
+ `Record` will be updated with the `Sender`, `Sender Name`, and `Session ID`.
+
+
+
+
+
+One significant capability of the Chat Input component is its ability to transform the Playground into a chat window. This feature is particularly valuable for scenarios requiring user input to initiate or influence the flow.
+
+
+
+### Text Input
+
+The **Text Input** component adds an **Input** field on the Playground. This enables you to define parameters while running and testing your flow.
+
+**Parameters**
+
+- **Value:** Specifies the text input value. This is where the user inputs text data that will be passed to the next component in the sequence. If no value is provided, it defaults to an empty string.
+- **Record Template:** Specifies how a `Record` should be converted into `Text`.
+
+The **Record Template** field is used to specify how a `Record` should be converted into `Text`. This is particularly useful when you want to extract specific information from a `Record` and pass it as text to the next component in the sequence.
+
+For example, if you have a `Record` with the following structure:
+
+```json
+{
+ "name": "John Doe",
+ "age": 30,
+ "email": "johndoe@email.com"
+}
+```
+
+A template with `Name: {name}, Age: {age}` will convert the `Record` into a text string of `Name: John Doe, Age: 30`.
+
+If you pass more than one `Record`, the text will be concatenated with a new line separator.
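
The template behavior maps directly onto Python's `str.format`; as a sketch:

```python
record = {"name": "John Doe", "age": 30, "email": "johndoe@email.com"}
template = "Name: {name}, Age: {age}"
# Keys not referenced by the template (here, "email") are simply ignored
print(template.format(**record))  # Name: John Doe, Age: 30
```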
+
+
+
+## Outputs
+
+Outputs are components that are used to define where data comes out of your flow. They can be used to send data to the user, to the Playground, or to define how the data will be displayed in the Playground.
+
+The Chat Output works similarly to the Chat Input but does not have a field that allows for written input. It is used as an Output definition and can be used to send data to the user.
+
+You can find out more about it and the other Outputs [here](#chat-output).
+
+### Chat Output
+
+This component sends a message to the chat.
+
+**Parameters**
+
+- **Sender Type:** Specifies the sender type. Default is `"Machine"`. Options are `"Machine"` and `"User"`.
+
+- **Sender Name:** Specifies the sender's name. Default is `"AI"`.
+
+- **Session ID:** Specifies the session ID of the chat history. If provided, messages are saved in the Message History.
+
+- **Message:** Specifies the text of the message.
+
+
+
+ If `As Record` is `true` and the `Message` is a `Record`, the data in the
+ `Record` is updated with the `Sender`, `Sender Name`, and `Session ID`.
+
+
+
+### Text Output
+
+This component displays text data to the user. It is useful when you want to show text without sending it to the chat.
+
+**Parameters**
+
+- **Value:** Specifies the text data to be displayed. Defaults to an empty string.
+
+The `TextOutput` component provides a simple way to display text data. It allows textual data to be visible in the chat window during your interaction flow.
+
+## Prompts
+
+A prompt is the input provided to a language model; it can consist of multiple components and be parameterized using prompt templates. A prompt template offers a reproducible method for generating prompts, enabling easy customization through input variables.
+
+### Prompt
+
+This component creates a prompt template with dynamic variables. This is useful for structuring prompts and passing dynamic data to a language model.
+
+**Parameters**
+
+- **Template:** The template for the prompt. This field allows you to create other fields dynamically by using curly brackets `{}`. For example, if you have a template like `Hello {name}, how are you?`, a new field called `name` will be created. Prompt variables can be created with any name inside curly brackets, e.g. `{variable_name}`.
+
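
The variable-discovery behavior can be mimicked with the standard library's `string.Formatter` (a sketch of the idea, not Langflow's actual parser):

```python
from string import Formatter

template = "Hello {name}, how are you?"
# Collect every field name found inside curly brackets
variables = [field for _, field, _, _ in Formatter().parse(template) if field]
print(variables)  # ['name']
```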
+
+
+### PromptTemplate
+
+The `PromptTemplate` component enables users to create prompts and define variables that control how the model is instructed. Users can input a set of variables which the template uses to generate the prompt when a conversation starts.
+
+
+ After defining a variable in the prompt template, it acts as its own component
+ input. See [Prompt Customization](../administration/prompt-customization) for
+ more details.
+
+
+- **template:** The template used to format an individual request.
diff --git a/docs/docs/components/inputs.mdx b/docs/docs/components/inputs.mdx
deleted file mode 100644
index 854f7fee3..000000000
--- a/docs/docs/components/inputs.mdx
+++ /dev/null
@@ -1,99 +0,0 @@
-import Admonition from '@theme/Admonition';
-import ZoomableImage from "/src/theme/ZoomableImage.js";
-
-# Inputs
-
-## Chat Input
-
-This component obtains user input from the chat.
-
-**Parameters**
-
-- **Sender Type:** Specifies the sender type. Defaults to `User`. Options are `Machine` and `User`.
-- **Sender Name:** Specifies the name of the sender. Defaults to `User`.
-- **Message:** Specifies the message text. It is a multiline text input.
-- **Session ID:** Specifies the session ID of the chat history. If provided, the message will be saved in the Message History.
-
-
-
- If `As Record` is `true` and the `Message` is a `Record`, the data
- of the `Record` will be updated with the `Sender`, `Sender Name`, and
- `Session ID`.
-
-
-
-
-
-One significant capability of the Chat Input component is its ability to transform the Playground into a chat window. This feature is particularly valuable for scenarios requiring user input to initiate or influence the flow.
-
-
-
----
-
-## Prompt
-
-This component creates a prompt template with dynamic variables. This is useful for structuring prompts and passing dynamic data to a language model.
-
-**Parameters**
-
-- **Template:** The template for the prompt. This field allows you to create other fields dynamically by using curly brackets `{}`. For example, if you have a template like `Hello {name}, how are you?`, a new field called `name` will be created. Prompt variables can be created with any name inside curly brackets, e.g. `{variable_name}`.
-
-
-
----
-
-## Text Input
-
-The **Text Input** component adds an **Input** field on the Playground. This enables you to define parameters while running and testing your flow.
-
-**Parameters**
-
-- **Value:** Specifies the text input value. This is where the user inputs text data that will be passed to the next component in the sequence. If no value is provided, it defaults to an empty string.
-- **Record Template:** Specifies how a `Record` should be converted into `Text`.
-
-The **Record Template** field is used to specify how a `Record` should be converted into `Text`. This is particularly useful when you want to extract specific information from a `Record` and pass it as text to the next component in the sequence.
-
-For example, if you have a `Record` with the following structure:
-
-```json
-{
- "name": "John Doe",
- "age": 30,
- "email": "johndoe@email.com"
-}
-```
-
-A template with `Name: {name}, Age: {age}` will convert the `Record` into a text string of `Name: John Doe, Age: 30`.
-
-If you pass more than one `Record`, the text will be concatenated with a new line separator.
-
-
-
diff --git a/docs/docs/components/outputs.mdx b/docs/docs/components/outputs.mdx
deleted file mode 100644
index a8947e60e..000000000
--- a/docs/docs/components/outputs.mdx
+++ /dev/null
@@ -1,34 +0,0 @@
-import Admonition from '@theme/Admonition';
-
-# Outputs
-
-## Chat Output
-
-This component sends a message to the chat.
-
-**Parameters**
-
-- **Sender Type:** Specifies the sender type. Default is `"Machine"`. Options are `"Machine"` and `"User"`.
-
-- **Sender Name:** Specifies the sender's name. Default is `"AI"`.
-
-- **Session ID:** Specifies the session ID of the chat history. If provided, messages are saved in the Message History.
-
-- **Message:** Specifies the text of the message.
-
-
-
- If `As Record` is `true` and the `Message` is a `Record`, the data in the `Record` is updated with the `Sender`, `Sender Name`, and `Session ID`.
-
-
-
-## Text Output
-
-This component displays text data to the user. It is useful when you want to show text without sending it to the chat.
-
-**Parameters**
-
-- **Value:** Specifies the text data to be displayed. Defaults to an empty string.
-
-
-The `TextOutput` component provides a simple way to display text data. It allows textual data to be visible in the chat window during your interaction flow.
diff --git a/docs/docs/components/prompts.mdx b/docs/docs/components/prompts.mdx
deleted file mode 100644
index 19fdedf11..000000000
--- a/docs/docs/components/prompts.mdx
+++ /dev/null
@@ -1,25 +0,0 @@
-import Admonition from "@theme/Admonition";
-
-# Prompts
-
-
-
- Thank you for your patience as we refine our documentation. It may
- still have some areas under development. Please share your feedback or report any issues to help us improve!
-
-
-
-A prompt is the input provided to a language model, consisting of multiple components and can be parameterized using prompt templates. A prompt template offers a reproducible method for generating prompts, enabling easy customization through input variables.
-
----
-
-### PromptTemplate
-
-The `PromptTemplate` component enables users to create prompts and define variables that control how the model is instructed. Users can input a set of variables which the template uses to generate the prompt when a conversation starts.
-
-
- After defining a variable in the prompt template, it acts as its own component
- input. See [Prompt Customization](../administration/prompt-customization) for more details.
-
-
-- **template:** The template used to format an individual request.
diff --git a/docs/docs/components/text-and-record.mdx b/docs/docs/components/text-and-record.mdx
new file mode 100644
index 000000000..24c16e4aa
--- /dev/null
+++ b/docs/docs/components/text-and-record.mdx
@@ -0,0 +1,49 @@
+# Text and Record
+
+In Langflow 1.0, we added two main input and output types: `Text` and `Record`.
+
+`Text` is a simple string input and output type, while `Record` is a key-value pair data structure, very similar to a dictionary in Python.
+
+We've created a few components to help you work with these types. Let's see how a few of them work.
+
+## Records To Text
+
+This component takes in Records and outputs `Text`. It does this by applying a template string to each `Record` and concatenating the results, one per line.
+
+If we have the following Records:
+
+```json
+{
+ "sender_name": "Alice",
+ "message": "Hello!"
+}
+{
+ "sender_name": "John",
+ "message": "Hi!"
+}
+```
+
+And the template string is: _`{sender_name}: {message}`_
+
+The output is:
+
+```
+Alice: Hello!
+John: Hi!
+```
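
The concatenation described above can be sketched in plain Python:

```python
records = [
    {"sender_name": "Alice", "message": "Hello!"},
    {"sender_name": "John", "message": "Hi!"},
]
template = "{sender_name}: {message}"
# Format each record with the template, then join with newlines
text = "\n".join(template.format(**record) for record in records)
print(text)
# Alice: Hello!
# John: Hi!
```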
+
+## Create Record
+
+This component allows you to create a `Record` from a number of inputs. You can add up to 15 key-value pairs. Once you've picked the number of pairs, name each Key and pass `Text` values from other components as the Values.
+
+## Documents To Records
+
+This component takes in a LangChain `Document` and outputs a `Record`. It does this by extracting the `page_content` and the `metadata` from the `Document` and adding them to the `Record` as text and data respectively.
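
As a rough sketch of that conversion (using a stand-in class rather than importing LangChain's actual `Document`):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Stand-in for langchain's Document, which carries page_content and metadata."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def document_to_record(doc: Document) -> dict:
    # The page content becomes the record's text; the metadata becomes its data
    return {"text": doc.page_content, "data": doc.metadata}
```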
+
+## Why is this useful?
+
+The idea was to create a unified way to work with complex data in Langflow, making it easier to handle data that is more than a simple string. This lets you build more complex workflows and use your data in more ways.
+
+## What's next?
+
+We are planning to integrate an array of modalities to Langflow, such as images, audio, and video. This will allow you to create even more complex workflows and use cases. Stay tuned for more updates! 🚀
diff --git a/docs/docs/components/vector-stores.mdx b/docs/docs/components/vector-stores.mdx
index 7e21f1021..6072abe29 100644
--- a/docs/docs/components/vector-stores.mdx
+++ b/docs/docs/components/vector-stores.mdx
@@ -1,6 +1,6 @@
import Admonition from "@theme/Admonition";
-# Vector Stores Documentation
+# Vector Stores
### Astra DB
diff --git a/docs/docs/contributing/community.md b/docs/docs/contributing/community.md
index 604487133..5c95718ec 100644
--- a/docs/docs/contributing/community.md
+++ b/docs/docs/contributing/community.md
@@ -10,7 +10,7 @@ Langflow [Discord](https://discord.gg/EqksyE2EX9) server.
---
-## 🐦 Stay tunned for **Langflow** on Twitter
+## 🐦 Stay tuned for **Langflow** on Twitter
Follow [@langflow_ai](https://twitter.com/langflow_ai) on **Twitter** to get the latest news about **Langflow**.
diff --git a/docs/docs/deployment/backend-only.md b/docs/docs/deployment/backend-only.md
new file mode 100644
index 000000000..9c408ad17
--- /dev/null
+++ b/docs/docs/deployment/backend-only.md
@@ -0,0 +1,123 @@
+# Backend-only
+
+You can run Langflow in `--backend-only` mode to expose your Langflow app as an API, without running the frontend UI.
+
+Start Langflow in backend-only mode with `python3 -m langflow run --backend-only`.
+
+The terminal prints ` Welcome to ⛓ Langflow `, and a blank window opens at `http://127.0.0.1:7864/all`.
+Langflow will now serve requests to its API without the frontend running.
+
+## Prerequisites
+
+- [Langflow installed](../getting-started/install-langflow.mdx)
+
+- [OpenAI API key](https://platform.openai.com)
+
+- [A Langflow flow created](../starter-projects/basic-prompting.mdx)
+
+## Download your flow's curl call
+
+1. Click **API**.
+2. Click **curl** > **Copy code** and save the code to your local machine.
+ It will look something like this:
+
+```bash
+curl -X POST \
+ "http://127.0.0.1:7864/api/v1/run/ef7e0554-69e5-4e3e-ab29-ee83bcd8d9ef?stream=false" \
+  -H 'Content-Type: application/json' \
+ -d '{"input_value": "message",
+ "output_type": "chat",
+ "input_type": "chat",
+ "tweaks": {
+ "Prompt-kvo86": {},
+ "OpenAIModel-MilkD": {},
+ "ChatOutput-ktwdw": {},
+ "ChatInput-xXC4F": {}
+}}'
+```
+
+Note the flow ID of `ef7e0554-69e5-4e3e-ab29-ee83bcd8d9ef`. You can find this ID in the UI as well to ensure you're querying the right flow.
+
+## Start Langflow in backend-only mode
+
+1. Stop Langflow with Ctrl+C.
+2. Start Langflow in backend-only mode with `python3 -m langflow run --backend-only`.
+ The terminal prints ` Welcome to ⛓ Langflow `, and a blank window opens at `http://127.0.0.1:7864/all`.
+ Langflow will now serve requests to its API.
+3. Run the curl code you copied from the UI.
+ You should get a result like this:
+
+```bash
+{"session_id":"ef7e0554-69e5-4e3e-ab29-ee83bcd8d9ef:bf81d898868ac87e1b4edbd96c131c5dee801ea2971122cc91352d144a45b880","outputs":[{"inputs":{"input_value":"hi, are you there?"},"outputs":[{"results":{"result":"Arrr, ahoy matey! Aye, I be here. What be ye needin', me hearty?"},"artifacts":{"message":"Arrr, ahoy matey! Aye, I be here. What be ye needin', me hearty?","sender":"Machine","sender_name":"AI"},"messages":[{"message":"Arrr, ahoy matey! Aye, I be here. What be ye needin', me hearty?","sender":"Machine","sender_name":"AI","component_id":"ChatOutput-ktwdw"}],"component_display_name":"Chat Output","component_id":"ChatOutput-ktwdw","used_frozen_result":false}]}]}
+```
+
+Again, note that the flow ID matches.
+Langflow is receiving your POST request, running the flow, and returning the result, all without running the frontend. Cool!
+
+## Download your flow's Python API call
+
+Instead of using curl, you can download your flow as a Python API call.
+
+1. Click API.
+2. Click **Python API** > **Copy code** and save the code to your local machine.
+ The code will look something like this:
+
+```python
+import requests
+from typing import Optional
+
+BASE_API_URL = "http://127.0.0.1:7864/api/v1/run"
+FLOW_ID = "ef7e0554-69e5-4e3e-ab29-ee83bcd8d9ef"
+# You can tweak the flow by adding a tweaks dictionary,
+# e.g. {"OpenAI-XXXXX": {"model_name": "gpt-4"}}
+
+
+def run_flow(message: str,
+             flow_id: str,
+             output_type: str = "chat",
+             input_type: str = "chat",
+             tweaks: Optional[dict] = None,
+             api_key: Optional[str] = None) -> dict:
+    """
+    Run a flow with a given message and optional tweaks.
+
+    :param message: The message to send to the flow
+    :param flow_id: The ID of the flow to run
+    :param output_type: The expected output type (defaults to "chat")
+    :param input_type: The input type (defaults to "chat")
+    :param tweaks: Optional tweaks to customize the flow
+    :param api_key: Optional API key if the server requires authentication
+    :return: The JSON response from the flow
+    """
+    api_url = f"{BASE_API_URL}/{flow_id}"
+
+    payload = {
+        "input_value": message,
+        "output_type": output_type,
+        "input_type": input_type,
+    }
+    headers = None
+    if tweaks:
+        payload["tweaks"] = tweaks
+    if api_key:
+        headers = {"x-api-key": api_key}
+    response = requests.post(api_url, json=payload, headers=headers)
+    return response.json()
+
+
+# The message to send to the flow
+message = "message"
+
+print(run_flow(message=message, flow_id=FLOW_ID))
+```
+
+3. Run your Python app:
+
+```bash
+python3 app.py
+```
+
+The result is similar to the curl call:
+
+```bash
+{'session_id': 'ef7e0554-69e5-4e3e-ab29-ee83bcd8d9ef:bf81d898868ac87e1b4edbd96c131c5dee801ea2971122cc91352d144a45b880', 'outputs': [{'inputs': {'input_value': 'message'}, 'outputs': [{'results': {'result': "Arrr matey! What be yer message for this ol' pirate? Speak up or walk the plank!"}, 'artifacts': {'message': "Arrr matey! What be yer message for this ol' pirate? Speak up or walk the plank!", 'sender': 'Machine', 'sender_name': 'AI'}, 'messages': [{'message': "Arrr matey! What be yer message for this ol' pirate? Speak up or walk the plank!", 'sender': 'Machine', 'sender_name': 'AI', 'component_id': 'ChatOutput-ktwdw'}], 'component_display_name': 'Chat Output', 'component_id': 'ChatOutput-ktwdw', 'used_frozen_result': False}]}]}
+```
+
+Your Python app POSTs to your Langflow server, and the server runs the flow and returns the result.
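The reply text sits several levels deep in that JSON. Assuming the response shape shown in the sample output above (the exact nesting can vary with your flow's components), a small helper can dig it out:

```python
# Extract the chat reply from a Langflow /api/v1/run response.
# The nesting below follows the sample output shown above; adjust it
# if your flow's output structure differs.

def extract_reply(response: dict) -> str:
    """Return the first chat message text from a run response."""
    first_output = response["outputs"][0]["outputs"][0]
    return first_output["results"]["result"]

# Minimal example using the shape of the sample response:
sample = {
    "outputs": [
        {
            "inputs": {"input_value": "message"},
            "outputs": [{"results": {"result": "Arrr matey!"}}],
        }
    ]
}
print(extract_reply(sample))  # Arrr matey!
```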
+
+See [API](../administration/api.mdx) for more ways to interact with your headless Langflow server.
diff --git a/docs/docs/deployment/docker.md b/docs/docs/deployment/docker.md
new file mode 100644
index 000000000..1ebb5746e
--- /dev/null
+++ b/docs/docs/deployment/docker.md
@@ -0,0 +1,65 @@
+# Docker
+
+This guide will help you get Langflow up and running using Docker and Docker Compose.
+
+## Prerequisites
+
+- Docker
+- Docker Compose
+
+## Steps
+
+1. Clone the Langflow repository:
+
+ ```sh
+ git clone https://github.com/langflow-ai/langflow.git
+ ```
+
+2. Navigate to the `docker_example` directory:
+
+ ```sh
+ cd langflow/docker_example
+ ```
+
+3. Run the Docker Compose file:
+
+ ```sh
+ docker compose up
+ ```
+
+Langflow will now be accessible at [http://localhost:7860/](http://localhost:7860/).
+
+## Docker Compose Configuration
+
+The Docker Compose configuration spins up two services: `langflow` and `postgres`.
+
+### Langflow Service
+
+The `langflow` service uses the `langflowai/langflow:latest` Docker image and exposes port 7860. It depends on the `postgres` service.
+
+Environment variables:
+
+- `LANGFLOW_DATABASE_URL`: The connection string for the PostgreSQL database.
+- `LANGFLOW_CONFIG_DIR`: The directory where Langflow stores logs, file storage, monitor data, and secret keys.
+
+Volumes:
+
+- `langflow-data`: This volume is mapped to `/var/lib/langflow` in the container.
+
+### PostgreSQL Service
+
+The `postgres` service uses the `postgres:16` Docker image and exposes port 5432.
+
+Environment variables:
+
+- `POSTGRES_USER`: The username for the PostgreSQL database.
+- `POSTGRES_PASSWORD`: The password for the PostgreSQL database.
+- `POSTGRES_DB`: The name of the PostgreSQL database.
+
+Volumes:
+
+- `langflow-postgres`: This volume is mapped to `/var/lib/postgresql/data` in the container.
+
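For reference, the configuration described above corresponds to a compose file along these lines. This is only a sketch: the credentials are placeholders, and the compose file shipped in `docker_example` is the source of truth.

```yaml
version: "3.8"

services:
  langflow:
    image: langflowai/langflow:latest
    ports:
      - "7860:7860"
    depends_on:
      - postgres
    environment:
      # Placeholder credentials; match the values in the postgres service.
      - LANGFLOW_DATABASE_URL=postgresql://langflow:langflow@postgres:5432/langflow
      - LANGFLOW_CONFIG_DIR=/var/lib/langflow
    volumes:
      - langflow-data:/var/lib/langflow

  postgres:
    image: postgres:16
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=langflow
      - POSTGRES_PASSWORD=langflow
      - POSTGRES_DB=langflow
    volumes:
      - langflow-postgres:/var/lib/postgresql/data

volumes:
  langflow-data:
  langflow-postgres:
```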
+## Switching to a Specific Langflow Version
+
+To use a specific version of Langflow, modify the `image` field under the `langflow` service in the Docker Compose file. For example, to use version 1.0-alpha, change `langflowai/langflow:latest` to `langflowai/langflow:1.0-alpha`.
diff --git a/docs/docs/examples/buffer-memory.mdx b/docs/docs/examples/buffer-memory.mdx
deleted file mode 100644
index b196f9031..000000000
--- a/docs/docs/examples/buffer-memory.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
-import Admonition from "@theme/Admonition";
-
-# Buffer Memory
-
-For certain applications, retaining past interactions is crucial. For that, chains and agents may accept a memory component as one of their input parameters. The `ConversationBufferMemory` component is one of them. It stores messages and extracts them into variables.
-
-## ⛓️ Langflow Example
-
-import ThemedImage from "@theme/ThemedImage";
-import useBaseUrl from "@docusaurus/useBaseUrl";
-import ZoomableImage from "/src/theme/ZoomableImage.js";
-
-
-
-#### Download Flow
-
-
-
-- [`ConversationBufferMemory`](https://python.langchain.com/docs/modules/memory/types/buffer)
-- [`ConversationChain`](https://python.langchain.com/docs/modules/chains/)
-- [`ChatOpenAI`](https://python.langchain.com/docs/modules/model_io/models/chat/integrations/openai)
-
-
diff --git a/docs/docs/examples/chat-memory.mdx b/docs/docs/examples/chat-memory.mdx
new file mode 100644
index 000000000..88dbbca2b
--- /dev/null
+++ b/docs/docs/examples/chat-memory.mdx
@@ -0,0 +1,17 @@
+import ThemedImage from "@theme/ThemedImage";
+import useBaseUrl from "@docusaurus/useBaseUrl";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
+import ReactPlayer from "react-player";
+import Admonition from "@theme/Admonition";
+
+# Chat Memory
+
+The **Chat Memory** component restores previous messages given a Session ID, which can be any string.
+
+This component is available under the **Helpers** tab of the Langflow preview.
+
+
+
+
diff --git a/docs/docs/examples/combine-text.mdx b/docs/docs/examples/combine-text.mdx
new file mode 100644
index 000000000..5a4e86cf0
--- /dev/null
+++ b/docs/docs/examples/combine-text.mdx
@@ -0,0 +1,21 @@
+import ThemedImage from "@theme/ThemedImage";
+import useBaseUrl from "@docusaurus/useBaseUrl";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
+import ReactPlayer from "react-player";
+import Admonition from "@theme/Admonition";
+
+# Combine Text
+
+With LLM pipelines, combining text from different sources may be as important as splitting text.
+
+The **Combine Text** component concatenates two text inputs into a single chunk using a specified delimiter, such as whitespace or a newline.
+
+Also, check out **Combine Texts (Unsorted)** as a similar alternative.
+
+This component is available under the **Helpers** tab of the Langflow preview.
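Conceptually, the component behaves like simple string joining. The sketch below is illustrative only, not the component's actual source:

```python
# Illustrative only: Combine Text is roughly equivalent to joining
# two strings with the chosen delimiter.
def combine_text(first: str, second: str, delimiter: str = " ") -> str:
    return delimiter.join([first, second])

print(combine_text("Hello,", "World!"))  # Hello, World!
print(combine_text("first chunk", "second chunk", delimiter="\n"))
```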
+
+
+
+
diff --git a/docs/docs/examples/conversation-chain.mdx b/docs/docs/examples/conversation-chain.mdx
deleted file mode 100644
index 294d1b440..000000000
--- a/docs/docs/examples/conversation-chain.mdx
+++ /dev/null
@@ -1,41 +0,0 @@
-import Admonition from "@theme/Admonition";
-
-# Conversation Chain
-
-This example shows how to instantiate a simple `ConversationChain` component using a Language Model (LLM). Once the Node Status turns green 🟢, the chat will be ready to take in user messages. Here, we used `ChatOpenAI` to act as the required LLM input, but you can use any LLM for this purpose.
-
-
-
-Make sure to always get the API key from the provider.
-
-
-
-## ⛓️ Langflow Example
-
-import ThemedImage from "@theme/ThemedImage";
-import useBaseUrl from "@docusaurus/useBaseUrl";
-import ZoomableImage from "/src/theme/ZoomableImage.js";
-
-
-
-#### Download Flow
-
-
-
-- [`ConversationChain`](https://python.langchain.com/docs/modules/chains/)
-- [`ChatOpenAI`](https://python.langchain.com/docs/modules/model_io/models/chat/integrations/openai)
-
-
diff --git a/docs/docs/examples/create-record.mdx b/docs/docs/examples/create-record.mdx
new file mode 100644
index 000000000..aa7a886f4
--- /dev/null
+++ b/docs/docs/examples/create-record.mdx
@@ -0,0 +1,17 @@
+import ThemedImage from "@theme/ThemedImage";
+import useBaseUrl from "@docusaurus/useBaseUrl";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
+import ReactPlayer from "react-player";
+import Admonition from "@theme/Admonition";
+
+# Create Record
+
+In Langflow, a `Record` has a structure very similar to a Python dictionary. It is a key-value pair data structure.
+
+The **Create Record** component allows you to dynamically create a `Record` from a specified number of inputs. You can add as many key-value pairs as you want (as long as it is fewer than 15 😅). Once you've chosen the number of fields, add keys and fill in their values, or pass in values from other components through the input handles.
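Conceptually (this is a hypothetical sketch, not Langflow's internal `Record` class), building a record amounts to assembling a dictionary, with the component capping the number of fields:

```python
# Hypothetical sketch of what Create Record does conceptually:
# gather key-value pairs into a dict-like record, capped below 15 fields.
def create_record(**fields) -> dict:
    if len(fields) >= 15:
        raise ValueError("Create Record supports fewer than 15 fields")
    return dict(fields)

record = create_record(name="Ada", role="engineer")
print(record)  # {'name': 'Ada', 'role': 'engineer'}
```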
+
+
+
+
diff --git a/docs/docs/examples/csv-loader.mdx b/docs/docs/examples/csv-loader.mdx
deleted file mode 100644
index 25f3bb444..000000000
--- a/docs/docs/examples/csv-loader.mdx
+++ /dev/null
@@ -1,57 +0,0 @@
-import Admonition from "@theme/Admonition";
-
-# CSV Loader
-
-The `VectoStoreAgent` component retrieves information from one or more vector stores. This example shows a `VectoStoreAgent` connected to a CSV file through the `Chroma` vector store. Process description:
-
-- The `CSVLoader` loads a CSV file into a list of documents.
-- The extracted data is then processed by the `CharacterTextSplitter`, which splits the text into small, meaningful chunks (usually sentences).
-- These chunks feed the `Chroma` vector store, which converts them into vectors and stores them for fast indexing.
-- Finally, the agent accesses the information of the vector store through the `VectorStoreInfo` tool.
-
-
- The vector store is used for efficient semantic search, while
- `VectorStoreInfo` carries information about it, such as its name and
- description. Embeddings are a way to represent words, phrases, or any entities
- in a vector space. Learn more about them
- [here](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings).
-
-
-
- Once you build this flow, ask questions about the data in the chat interface
- (e.g., number of rows or columns).
-
-
-## ⛓️ Langflow Example
-
-import ThemedImage from "@theme/ThemedImage";
-import useBaseUrl from "@docusaurus/useBaseUrl";
-import ZoomableImage from "/src/theme/ZoomableImage.js";
-
-
-
-#### Download Flow
-
-
-
-- [`CSVLoader`](https://python.langchain.com/docs/integrations/document_loaders/csv)
-- [`CharacterTextSplitter`](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/character_text_splitter)
-- [`OpenAIEmbedding`](https://python.langchain.com/docs/integrations/text_embedding/openai)
-- [`Chroma`](https://python.langchain.com/docs/integrations/vectorstores/chroma)
-- [`VectorStoreInfo`](https://python.langchain.com/docs/modules/data_connection/vectorstores/)
-- [`OpenAI`](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/openai)
-- [`VectorStoreAgent`](https://js.langchain.com/docs/modules/agents/tools/how_to/agents_with_vectorstores)
-
-
diff --git a/docs/docs/examples/flow-runner.mdx b/docs/docs/examples/flow-runner.mdx
deleted file mode 100644
index fda7a8d39..000000000
--- a/docs/docs/examples/flow-runner.mdx
+++ /dev/null
@@ -1,368 +0,0 @@
----
-description: Custom Components
-hide_table_of_contents: true
----
-
-# FlowRunner Component
-
-The CustomComponent class allows us to create components that interact with Langflow itself. In this example, we will make a component that runs other flows available in "My Collection".
-
-
-
-We will cover how to:
-
-- List Collection flows using the _`list_flows`_ method.
-- Load a flow using the _`load_flow`_ method.
-- Configure a dropdown input field using the _`options`_ parameter.
-
-
-
-Example Code
-
-```python
-from langflow.custom import CustomComponent
-from langchain.schema import Document
-
-class FlowRunner(CustomComponent):
- display_name = "Flow Runner"
- description = "Run other flows using a document as input."
-
- def build_config(self):
- flows = self.list_flows()
- flow_names = [f.name for f in flows]
- return {"flow_name": {"options": flow_names,
- "display_name": "Flow Name",
- },
- "document": {"display_name": "Document"}
- }
-
-
- def build(self, flow_name: str, document: Document) -> Document:
- # List the flows
- flows = self.list_flows()
- # Get the flow that matches the selected name
- # You can also get the flow by id
- # using self.get_flow(flow_id=flow_id)
- tweaks = {}
- flow = self.get_flow(flow_name=flow_name, tweaks=tweaks)
- # Get the page_content from the document
- if document and isinstance(document, list):
- document = document[0]
- page_content = document.page_content
- # Use it in the flow
- result = flow(page_content)
- return Document(page_content=str(result))
-
-```
-
-
-
-
-
-```python
-from langflow.custom import CustomComponent
-
-
-class MyComponent(CustomComponent):
- display_name = "Custom Component"
- description = "This is a custom component"
-
- def build_config(self):
- ...
-
- def build(self):
- ...
-
-```
-
-The typical structure of a Custom Component is composed of _`display_name`_ and _`description`_ attributes, _`build`_ and _`build_config`_ methods.
-
----
-
-```python
-from langflow.custom import CustomComponent
-
-
-# focus
-class FlowRunner(CustomComponent):
- # focus
- display_name = "Flow Runner"
- # focus
- description = "Run other flows"
-
- def build_config(self):
- ...
-
- def build(self):
- ...
-
-```
-
-Let's start by defining our component's _`display_name`_ and _`description`_.
-
----
-
-```python
-from langflow.custom import CustomComponent
-# focus
-from langchain.schema import Document
-
-
-class FlowRunner(CustomComponent):
- display_name = "Flow Runner"
- description = "Run other flows using a document as input."
-
- def build_config(self):
- ...
-
- def build(self):
- ...
-
-```
-
-Second, we will import _`Document`_ from the [_langchain.schema_](https://docs.langchain.com/docs/components/schema/) module. This will be the return type of the _`build`_ method.
-
----
-
-```python
-from langflow.custom import CustomComponent
-# focus
-from langchain.schema import Document
-
-
-class FlowRunner(CustomComponent):
- display_name = "Flow Runner"
- description = "Run other flows using a document as input."
-
- def build_config(self):
- ...
-
- # focus
- def build(self, flow_name: str, document: Document) -> Document:
- ...
-
-```
-
-Now, let's add the [parameters](focus://11[20:55]) and the [return type](focus://11[60:69]) to the _`build`_ method. The parameters added are:
-
-- _`flow_name`_ is the name of the flow we want to run.
-- _`document`_ is the input document to be passed to that flow.
- - Since _`Document`_ is a Langchain type, it will add an input [handle](../administration/components) to the component ([see more](../components/custom)).
-
----
-
-```python focus=13:14
-from langflow.custom import CustomComponent
-from langchain.schema import Document
-
-
-class FlowRunner(CustomComponent):
- display_name = "Flow Runner"
- description = "Run other flows using a document as input."
-
- def build_config(self):
- ...
-
- def build(self, flow_name: str, document: Document) -> Document:
- # List the flows
- flows = self.list_flows()
-
-```
-
-We can now start writing the _`build`_ method. Let's list available flows in "My Collection" using the _`list_flows`_ method.
-
----
-
-```python focus=15:18
-from langflow.custom import CustomComponent
-from langchain.schema import Document
-
-
-class FlowRunner(CustomComponent):
- display_name = "Flow Runner"
- description = "Run other flows using a document as input."
-
- def build_config(self):
- ...
-
- def build(self, flow_name: str, document: Document) -> Document:
- # List the flows
- flows = self.list_flows()
- # Get the flow that matches the selected name
- # You can also get the flow by id
- # using self.get_flow(flow_id=flow_id)
- tweaks = {}
- flow = self.get_flow(flow_name=flow_name, tweaks=tweaks)
-
-```
-
-And retrieve a flow that matches the selected name (we'll make a dropdown input field for the user to choose among flow names).
-
-
- From version 0.4.0, names are unique, which was not the case in previous
- versions. This might lead to unexpected results if using flows with the same
- name.
-
-
----
-
-```python
-from langflow.custom import CustomComponent
-from langchain.schema import Document
-
-
-class FlowRunner(CustomComponent):
- display_name = "Flow Runner"
- description = "Run other flows using a document as input."
-
- def build_config(self):
- ...
-
- def build(self, flow_name: str, document: Document) -> Document:
- # List the flows
- flows = self.list_flows()
- # Get the flow that matches the selected name
- # You can also get the flow by id
- # using self.get_flow(flow_id=flow_id)
- tweaks = {}
- flow = self.get_flow(flow_name=flow_name, tweaks=tweaks)
-
-
-```
-
-You can load this flow using _`get_flow`_ and set a _`tweaks`_ dictionary to customize it. Find more about tweaks in our [features guidelines](../administration/features#code).
-
----
-
-```python
-from langflow.custom import CustomComponent
-from langchain.schema import Document
-
-
-class FlowRunner(CustomComponent):
- display_name = "Flow Runner"
- description = "Run other flows using a document as input."
-
- def build_config(self):
- ...
-
- def build(self, flow_name: str, document: Document) -> Document:
- # List the flows
- flows = self.list_flows()
- # Get the flow that matches the selected name
- # You can also get the flow by id
- # using self.get_flow(flow_id=flow_id)
- tweaks = {}
- flow = self.get_flow(flow_name=flow_name, tweaks=tweaks)
- # Get the page_content from the document
- if document and isinstance(document, list):
- document = document[0]
- page_content = document.page_content
- # Use it in the flow
- result = flow(page_content)
- return Document(page_content=str(result))
-```
-
-We are using a _`Document`_ as input because it is a straightforward way to pass text data in Langflow (specifically because you can connect it to many [loaders](../components/loaders)).
-Generally, a flow will take a string or a dictionary as input because that's what LangChain components expect.
-In case you are passing a dictionary, you need to build it according to the needs of the flow you are using.
-
-The content of a document can be extracted using the _`page_content`_ attribute, which is a string, and passed as an argument to the selected flow.
-
----
-
-```python focus=9:16
-from langflow.custom import CustomComponent
-from langchain.schema import Document
-
-
-class FlowRunner(CustomComponent):
- display_name = "Flow Runner"
- description = "Run other flows using a document as input."
-
- def build_config(self):
- flows = self.list_flows()
- flow_names = [f.name for f in flows]
- return {"flow_name": {"options": flow_names,
- "display_name": "Flow Name",
- },
- "document": {"display_name": "Document"}
- }
-
- def build(self, flow_name: str, document: Document) -> Document:
- # List the flows
- flows = self.list_flows()
- # Get the flow that matches the selected name
- # You can also get the flow by id
- # using self.get_flow(flow_id=flow_id)
- tweaks = {}
- flow = self.get_flow(flow_name=flow_name, tweaks=tweaks)
- # Get the page_content from the document
- if document and isinstance(document, list):
- document = document[0]
- page_content = document.page_content
- # Use it in the flow
- result = flow(page_content)
- return Document(page_content=str(result))
-```
-
-Finally, we can add field customizations through the _`build_config`_ method. Here we added the _`options`_ key to make the _`flow_name`_ field a dropdown menu. Check out the [custom component reference](../components/custom) for a list of available keys.
-
-
- Make sure that the field type is _`str`_ and _`options`_ values are strings.
-
-
-
-
-Done! This is what our script and custom component looks like:
-
-
-
-
-
-
-
-
-
-import ZoomableImage from "/src/theme/ZoomableImage.js";
-import Admonition from "@theme/Admonition";
diff --git a/docs/docs/examples/pass.mdx b/docs/docs/examples/pass.mdx
new file mode 100644
index 000000000..ddfe35cca
--- /dev/null
+++ b/docs/docs/examples/pass.mdx
@@ -0,0 +1,17 @@
+import ThemedImage from "@theme/ThemedImage";
+import useBaseUrl from "@docusaurus/useBaseUrl";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
+import ReactPlayer from "react-player";
+import Admonition from "@theme/Admonition";
+
+# Pass
+
+Sometimes all you need to do is… nothing!
+
+The **Pass** component enables you to ignore one input and move forward with another one. This is super helpful to swap routes for A/B testing!
+
+
+
+
diff --git a/docs/docs/examples/python-function.mdx b/docs/docs/examples/python-function.mdx
deleted file mode 100644
index 2bb4b93e1..000000000
--- a/docs/docs/examples/python-function.mdx
+++ /dev/null
@@ -1,62 +0,0 @@
-import Admonition from "@theme/Admonition";
-
-# Python Function
-
-Langflow allows you to create a customized tool using the `PythonFunction` connected to a `Tool` component. In this example, Regex is used in Python to validate a pattern.
-
-```python
-import re
-
-def is_brazilian_zipcode(zipcode: str) -> bool:
- pattern = r"\d{5}-?\d{3}"
-
- # Check if the zip code matches the pattern
- if re.match(pattern, zipcode):
- return True
-
- return False
-```
-
-
- When a tool is called, it is often desirable to have its output returned
- directly to the user. You can do this by setting the **return_direct** flag
- for a tool to be True.
-
-
-The `AgentInitializer` component is a quick way to construct an agent from the model and tools.
-
-
- The `PythonFunction` is a custom component that uses the LangChain 🦜🔗 tool
- decorator. Learn more about it
- [here](https://python.langchain.com/docs/modules/agents/tools/custom_tools).
-
-
-## ⛓️ Langflow Example
-
-import ThemedImage from "@theme/ThemedImage";
-import useBaseUrl from "@docusaurus/useBaseUrl";
-import ZoomableImage from "/src/theme/ZoomableImage.js";
-
-
-
-#### Download Flow
-
-
-
-- [`PythonFunctionTool`](https://python.langchain.com/docs/modules/agents/tools/custom_tools)
-- [`ChatOpenAI`](https://python.langchain.com/docs/modules/model_io/models/chat/integrations/openai)
-- [`AgentInitializer`](https://python.langchain.com/docs/modules/agents/)
-
-
diff --git a/docs/docs/examples/searchapi-tool.mdx b/docs/docs/examples/searchapi-tool.mdx
deleted file mode 100644
index d3cb4734a..000000000
--- a/docs/docs/examples/searchapi-tool.mdx
+++ /dev/null
@@ -1,52 +0,0 @@
-import Admonition from "@theme/Admonition";
-
-# SearchApi Tool
-
-The [SearchApi](https://www.searchapi.io/) allows developers to retrieve results from search engines such as Google, Google Scholar, YouTube, YouTube transcripts, and more, and can be used as in Langflow through the `SearchApi` tool.
-
-
- To use the SearchApi, you must first obtain an API key by registering at [SearchApi's website](https://www.searchapi.io/).
-
-
-In the given example, we specify `engine` as `youtube_transcripts` and provide a `video_id`.
-
-
- All engines and parameters can be found in [SearchApi documentation](https://www.searchapi.io/docs/google).
-
-
-The `RetrievalQA` chain processes a `Document` along with a user's question to return an answer.
-
-
- In this example, we used [`ChatOpenAI`](https://platform.openai.com/) as the
- LLM, but feel free to experiment with other Language Models!
-
-
-The `RetrievalQA` takes `CombineDocsChain` and `SearchApi` tool as inputs, using the tool as a `Document` to answer questions.
-
-
- Learn more about the SearchApi
- [here](https://python.langchain.com/docs/integrations/tools/searchapi).
-
-
-## ⛓️ Langflow Example
-
-import ThemedImage from "@theme/ThemedImage";
-import useBaseUrl from "@docusaurus/useBaseUrl";
-import ZoomableImage from "/src/theme/ZoomableImage.js";
-
-
-
-#### Download Flow
-
-
-
-- [`OpenAI`](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/openai)
-- [`SearchApiAPIWrapper`](https://python.langchain.com/docs/integrations/providers/searchapi#wrappers)
-- [`ZeroShotAgent`](https://python.langchain.com/docs/modules/agents/how_to/custom_mrkl_agent)
-
-
\ No newline at end of file
diff --git a/docs/docs/examples/serp-api-tool.mdx b/docs/docs/examples/serp-api-tool.mdx
deleted file mode 100644
index 175b6f1be..000000000
--- a/docs/docs/examples/serp-api-tool.mdx
+++ /dev/null
@@ -1,58 +0,0 @@
-import Admonition from "@theme/Admonition";
-
-# Serp API Tool
-
-The [Serp API](https://serpapi.com/) (Search Engine Results Page) allows developers to scrape results from search engines such as Google, Bing and Yahoo, and can be used as in Langflow through the `Search` component.
-
-
- To use the Serp API, you first need to sign up [Serp
- API](https://serpapi.com/) for an API key on the provider's website.
-
-
-Here, the `ZeroShotPrompt` component specifies a prompt template for the `ZeroShotAgent`. Set a _Prefix_ and _Suffix_ with rules for the agent to obey. In the example, we used default templates.
-
-The `LLMChain` is a simple chain that takes in a prompt template, formats it with the user input, and returns the response from an LLM.
-
-
- In this example, we used [`ChatOpenAI`](https://platform.openai.com/) as the
- LLM, but feel free to experiment with other Language Models!
-
-
-The `ZeroShotAgent` takes the `LLMChain` and the `Search` tool as inputs, using the tool to find information when necessary.
-
-
- Learn more about the Serp API
- [here](https://python.langchain.com/docs/integrations/providers/serpapi ).
-
-
-## ⛓️ Langflow Example
-
-import ThemedImage from "@theme/ThemedImage";
-import useBaseUrl from "@docusaurus/useBaseUrl";
-import ZoomableImage from "/src/theme/ZoomableImage.js";
-
-
-
-#### Download Flow
-
-
-
-- [`ZeroShotPrompt`](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/)
-- [`OpenAI`](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/openai)
-- [`LLMChain`](https://python.langchain.com/docs/modules/chains/foundational/llm_chain)
-- [`Search`](https://python.langchain.com/docs/integrations/providers/serpapi)
-- [`ZeroShotAgent`](https://python.langchain.com/docs/modules/agents/how_to/custom_mrkl_agent)
-
-
diff --git a/docs/docs/examples/store-message.mdx b/docs/docs/examples/store-message.mdx
new file mode 100644
index 000000000..75ff0bd46
--- /dev/null
+++ b/docs/docs/examples/store-message.mdx
@@ -0,0 +1,17 @@
+import ThemedImage from "@theme/ThemedImage";
+import useBaseUrl from "@docusaurus/useBaseUrl";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
+import ReactPlayer from "react-player";
+import Admonition from "@theme/Admonition";
+
+# Store Message
+
+The **Store Message** component allows you to save information under a specified Session ID and sender type.
+
+The **Message History** component can then be used to retrieve stored messages.
+
+
+
+
diff --git a/docs/docs/examples/sub-flow.mdx b/docs/docs/examples/sub-flow.mdx
new file mode 100644
index 000000000..d2b9674ad
--- /dev/null
+++ b/docs/docs/examples/sub-flow.mdx
@@ -0,0 +1,15 @@
+import ThemedImage from "@theme/ThemedImage";
+import useBaseUrl from "@docusaurus/useBaseUrl";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
+import ReactPlayer from "react-player";
+import Admonition from "@theme/Admonition";
+
+# Sub Flow
+
+The **Sub Flow** component enables a user to select a previously built flow and dynamically generate a component out of it.
+
+
+
+
diff --git a/docs/docs/examples/text-operator.mdx b/docs/docs/examples/text-operator.mdx
new file mode 100644
index 000000000..50d52fdbf
--- /dev/null
+++ b/docs/docs/examples/text-operator.mdx
@@ -0,0 +1,15 @@
+import ThemedImage from "@theme/ThemedImage";
+import useBaseUrl from "@docusaurus/useBaseUrl";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
+import ReactPlayer from "react-player";
+import Admonition from "@theme/Admonition";
+
+# Text Operator
+
+The **Text Operator** component adds conditional logic to a flow. It evaluates the output of another component (for example, whether the input text exactly equals `Tuna`) and triggers another component based on the result. In short, the Text Operator is an if/else component for your flow.
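As an illustration (not the component's actual implementation), the routing logic amounts to a simple equality check that selects one of two branches:

```python
# Hypothetical sketch of Text Operator routing: compare the incoming
# text against a match value and pick a branch accordingly.
def text_operator(text: str, match_text: str = "Tuna") -> str:
    return "true branch" if text == match_text else "false branch"

print(text_operator("Tuna"))    # true branch
print(text_operator("Salmon"))  # false branch
```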
+
+
+
+
diff --git a/docs/docs/getting-started/canvas.mdx b/docs/docs/getting-started/canvas.mdx
index b16807b66..5974f245b 100644
--- a/docs/docs/getting-started/canvas.mdx
+++ b/docs/docs/getting-started/canvas.mdx
@@ -56,7 +56,8 @@ Components are the building blocks of flows. They consist of inputs, outputs, an
During the flow creation process, you will notice handles (colored circles)
attached to one or both sides of a component. These handles represent the
- availability to connect to other components. Hover over a handle to see connection details.
+ availability to connect to other components. Hover over a handle to see
+ connection details.
@@ -85,6 +86,7 @@ Build the flow by clicking the **Playgr
Once the validation is complete, the status of each validated component should turn green ().
To debug, hover over the component status to see the outputs.
+
---
@@ -196,6 +198,7 @@ curl -X POST \
```
Result:
+
```
{"session_id":"f2eefd80-bb91-4190-9279-0d6ffafeaac4:53856a772b8e1cfcb3dd2e71576b5215399e95bae318d3c02101c81b7c252da3","outputs":[{"inputs":{"input_value":"is anybody there?"},"outputs":[{"results":{"result":"Arrr, me hearties! Aye, this be Captain [Your Name] speakin'. What be ye needin', matey?"},"artifacts":{"message":"Arrr, me hearties! Aye, this be Captain [Your Name] speakin'. What be ye needin', matey?","sender":"Machine","sender_name":"AI"},"messages":[{"message":"Arrr, me hearties! Aye, this be Captain [Your Name] speakin'. What be ye needin', matey?","sender":"Machine","sender_name":"AI","component_id":"ChatOutput-njtka"}],"component_display_name":"Chat Output","component_id":"ChatOutput-njtka"}]}]}%
```
@@ -231,9 +234,10 @@ A collection is a snapshot of flows available in a database.
Collections can be downloaded to local storage and uploaded for future use.
-
-
+
+
## Project
@@ -276,9 +280,3 @@ To see options for your project, in the upper left corner of the canvas, select
**Export** - Download your current project to your local machine as a `.json` file.
**Undo** or **Redo** - Undo or redo your last action.
-
-
-
-
-
-
diff --git a/docs/docs/getting-started/flows-components-collections.mdx b/docs/docs/getting-started/flows-components-collections.mdx
index 586f08192..335fb5c12 100644
--- a/docs/docs/getting-started/flows-components-collections.mdx
+++ b/docs/docs/getting-started/flows-components-collections.mdx
@@ -1,7 +1,7 @@
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-import ZoomableImage from '/src/theme/ZoomableImage.js';
-import ReactPlayer from 'react-player';
+import ThemedImage from "@theme/ThemedImage";
+import useBaseUrl from "@docusaurus/useBaseUrl";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
+import ReactPlayer from "react-player";
# 🖥️ Flows, components, collections, and projects
@@ -17,10 +17,4 @@ A [project](#project) can be a component or a flow. Projects are saved as part o
For example, the **OpenAI LLM** is a **component** of the **Basic prompting** flow, and the **flow** is stored in a **collection**.
-
-
## Component
-
-
-
-
diff --git a/docs/docs/getting-started/install-langflow.mdx b/docs/docs/getting-started/install-langflow.mdx
index d78514909..4beb5e362 100644
--- a/docs/docs/getting-started/install-langflow.mdx
+++ b/docs/docs/getting-started/install-langflow.mdx
@@ -6,33 +6,40 @@ import Admonition from "@theme/Admonition";
# 📦 Install Langflow
- Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true), to create your own Langflow workspace in minutes.
+ Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
+ using this
+ link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true),
+ to create your own Langflow workspace in minutes.
-Langflow requires [Python 3.10](https://www.python.org/downloads/release/python-3100/) and [pip](https://pypi.org/project/pip/) or [pipx](https://pipx.pypa.io/stable/installation/) to be installed on your system.
+Langflow requires [Python >=3.10](https://www.python.org/downloads/release/python-3100/) and [pip](https://pypi.org/project/pip/) or [pipx](https://pipx.pypa.io/stable/installation/) to be installed on your system.
Install Langflow with pip:
+
```bash
python -m pip install langflow -U
```
Install Langflow with pipx:
+
```bash
pipx install langflow --python python3.10 --fetch-missing-python
```
-Pipx can fetch the missing Python version for you with `--fetch-missing-python`, but you can also install the Python version manually.
+Pipx can fetch the missing Python version for you with `--fetch-missing-python`, but you can also install the Python version manually.
## Install Langflow pre-release
To install a pre-release version of Langflow:
pip:
+
```bash
python -m pip install langflow --pre --force-reinstall
```
pipx:
+
```bash
pipx install langflow --python python3.10 --fetch-missing-python --pip-args="--pre --force-reinstall"
```
@@ -52,11 +59,13 @@ python -m langflow --help
## ⛓️ Run Langflow
1. To run Langflow, enter the following command.
+
```bash
python -m langflow run
```
2. Confirm that a local Langflow instance starts by visiting `http://127.0.0.1:7860` in a Chromium-based browser.
+
```bash
│ Welcome to ⛓ Langflow │
│ │
@@ -83,4 +92,4 @@ You'll be presented with the following screen:
style={{ width: "100%", margin: "20px auto" }}
/>
-Name your Space, define the visibility (Public or Private), and click on **Duplicate Space** to start the installation process. When installation is finished, you'll be redirected to the Space's main page to start using Langflow right away!
\ No newline at end of file
+Name your Space, define the visibility (Public or Private), and click on **Duplicate Space** to start the installation process. When installation is finished, you'll be redirected to the Space's main page to start using Langflow right away!
diff --git a/docs/docs/getting-started/quickstart.mdx b/docs/docs/getting-started/quickstart.mdx
index ef7d373a6..3f02db27f 100644
--- a/docs/docs/getting-started/quickstart.mdx
+++ b/docs/docs/getting-started/quickstart.mdx
@@ -10,12 +10,15 @@ This guide demonstrates how to build a basic prompt flow and modify that prompt
## Prerequisites
-* [Langflow installed and running](./install-langflow.mdx)
+- [Langflow installed and running](./install-langflow.mdx)
-* [OpenAI API key](https://platform.openai.com)
+- [OpenAI API key](https://platform.openai.com)
- Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
+ Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
+ using this
+ link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
+ to create your own Langflow workspace in minutes.
## Hello World - Basic Prompting
@@ -44,25 +47,25 @@ Examine the **Prompt** component. The **Template** field instructs the LLM to `A
This should be interesting...
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
- 1. In the **Variable Name** field, enter `openai_api_key`.
- 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
- 3. Click **Save Variable**.
+ 1. In the **Variable Name** field, enter `openai_api_key`.
+ 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
+ 3. Click **Save Variable**.
## Run the basic prompting flow
1. Click the **Run** button.
-The **Interaction Panel** opens, where you can chat with your bot.
+ The **Interaction Panel** opens, where you can chat with your bot.
2. Type a message and press Enter.
-And... Ahoy! 🏴☠️
-The bot responds in a piratical manner!
+ And... Ahoy! 🏴☠️
+ The bot responds in a piratical manner!
## Modify the prompt for a different result
1. To modify your prompt results, in the **Prompt** template, click the **Template** field.
-The **Edit Prompt** window opens.
+ The **Edit Prompt** window opens.
2. Change `Answer the user as if you were a pirate` to a different character, perhaps `Answer the user as if you were Harold Abelson.`
3. Run the basic prompting flow again.
-The response will be markedly different.
+ The response will be markedly different.
## Next steps
@@ -72,8 +75,6 @@ By adding Langflow components to your flow, you can create all sorts of interest
Here are a couple of examples:
-* [Memory chatbot](/starter-projects/memory-chatbot.mdx)
-* [Blog writer](/starter-projects/blog-writer.mdx)
-* [Document QA](/starter-projects/document-qa.mdx)
-
-
+- [Memory chatbot](/starter-projects/memory-chatbot.mdx)
+- [Blog writer](/starter-projects/blog-writer.mdx)
+- [Document QA](/starter-projects/document-qa.mdx)
diff --git a/docs/docs/index.mdx b/docs/docs/index.mdx
index 7fb912e98..e762142f0 100644
--- a/docs/docs/index.mdx
+++ b/docs/docs/index.mdx
@@ -14,8 +14,8 @@ Its intuitive interface allows for easy manipulation of AI building blocks, enab
@@ -29,7 +29,10 @@ Its intuitive interface allows for easy manipulation of AI building blocks, enab
- [Langflow Canvas](/getting-started/canvas) - Learn more about the Langflow canvas.
- Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
+ Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
+ using this
+ link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
+ to create your own Langflow workspace in minutes.
## Learn more about Langflow 1.0
diff --git a/docs/docs/integrations/notion/add-content-to-page.md b/docs/docs/integrations/notion/add-content-to-page.md
index 83b395fd0..ace43e103 100644
--- a/docs/docs/integrations/notion/add-content-to-page.md
+++ b/docs/docs/integrations/notion/add-content-to-page.md
@@ -9,14 +9,11 @@ The `AddContentToPage` component converts markdown text to Notion blocks and app
[Notion Reference](https://developers.notion.com/reference/patch-block-children)
-
-
The `AddContentToPage` component enables you to:
- Convert markdown text to Notion blocks.
- Append the converted blocks to a specified Notion page.
- Seamlessly integrate Notion content creation into Langflow workflows.
-
## Component Usage
@@ -100,23 +97,19 @@ class NotionPageCreator(CustomComponent):
## Example Usage
-
-
Example of using the `AddContentToPage` component in a Langflow flow using Markdown as input:
In this example, the `AddContentToPage` component connects to a `MarkdownLoader` component to provide the markdown text input. The converted Notion blocks are appended to the specified Notion page using the provided `block_id` and `notion_secret`.
-
-
## Best Practices
When using the `AddContentToPage` component:
@@ -131,8 +124,8 @@ The `AddContentToPage` component is a powerful tool for integrating Notion conte
## Troubleshooting
If you encounter any issues while using the `AddContentToPage` component, consider the following:
+
- Verify the Notion integration token’s validity and permissions.
- Check the Notion API documentation for updates.
- Ensure markdown text is properly formatted.
- Double-check the `block_id` for correctness.
-
diff --git a/docs/docs/integrations/notion/intro.md b/docs/docs/integrations/notion/intro.md
index ec8738dc7..293038d4f 100644
--- a/docs/docs/integrations/notion/intro.md
+++ b/docs/docs/integrations/notion/intro.md
@@ -8,12 +8,12 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
The Notion integration in Langflow enables seamless connectivity with Notion databases, pages, and users, facilitating automation and improved productivity.
#### Download Notion Components Bundle
diff --git a/docs/docs/integrations/notion/list-database-properties.md b/docs/docs/integrations/notion/list-database-properties.md
index 830ea3324..c41159893 100644
--- a/docs/docs/integrations/notion/list-database-properties.md
+++ b/docs/docs/integrations/notion/list-database-properties.md
@@ -41,7 +41,7 @@ class NotionDatabaseProperties(CustomComponent):
description = "Retrieve properties of a Notion database."
documentation: str = "https://docs.langflow.org/integrations/notion/list-database-properties"
icon = "NotionDirectoryLoader"
-
+
def build_config(self):
return {
"database_id": {
@@ -80,6 +80,7 @@ class NotionDatabaseProperties(CustomComponent):
```
## Example Usage
+
Here's an example of how you can use the `NotionDatabaseProperties` component in a Langflow flow:
@@ -110,6 +111,7 @@ Feel free to explore the capabilities of the `NotionDatabaseProperties` componen
## Troubleshooting
If you encounter any issues while using the `NotionDatabaseProperties` component, consider the following:
+
- Verify that the Notion integration token is valid and has the required permissions.
- Check the database ID to ensure it matches the intended Notion database.
-- Inspect the response from the Notion API for any error messages or status codes that may indicate the cause of the issue.
\ No newline at end of file
+- Inspect the response from the Notion API for any error messages or status codes that may indicate the cause of the issue.
diff --git a/docs/docs/integrations/notion/list-pages.md b/docs/docs/integrations/notion/list-pages.md
index 3e219870e..ea1b04950 100644
--- a/docs/docs/integrations/notion/list-pages.md
+++ b/docs/docs/integrations/notion/list-pages.md
@@ -140,16 +140,17 @@ class NotionListPages(CustomComponent):
## Example Usage
+
Here's an example of how you can use the `NotionListPages` component in a Langflow flow, passing its output to the Prompt component:
In this example, the `NotionListPages` component is used to retrieve specific pages from a Notion database based on the provided filters and sorting options. The retrieved data can then be processed further in the subsequent components of the flow.
@@ -157,7 +158,7 @@ In this example, the `NotionListPages` component is used to retrieve specific pa
## Best Practices
- When using the `NotionListPages
+When using the `NotionListPages
` component, consider the following best practices:
- Ensure that you have a valid Notion integration token with the necessary permissions to query the desired database.
@@ -171,7 +172,7 @@ We encourage you to explore the capabilities of the `NotionListPages
## Troubleshooting
- If you encounter any issues while using the `NotionListPages` component, consider the following:
+If you encounter any issues while using the `NotionListPages` component, consider the following:
- Double-check that the `notion_secret` and `database_id` are correct and valid.
- Verify that the `query_payload` JSON string is properly formatted and contains valid filtering and sorting options.
diff --git a/docs/docs/integrations/notion/list-users.md b/docs/docs/integrations/notion/list-users.md
index 90761239a..0eb8236f5 100644
--- a/docs/docs/integrations/notion/list-users.md
+++ b/docs/docs/integrations/notion/list-users.md
@@ -9,13 +9,11 @@ The `NotionUserList` component retrieves users from Notion. It provides a conven
[Notion Reference](https://developers.notion.com/reference/get-users)
-
- The `NotionUserList` component enables you to:
+The `NotionUserList` component enables you to:
- Retrieve user data from Notion
- Access user information such as ID, type, name, and avatar URL
- Integrate Notion user data seamlessly into your Langflow workflows
-
## Component Usage
@@ -94,34 +92,31 @@ class NotionUserList(CustomComponent):
```
## Example Usage
-
+
Here's an example of how you can use the `NotionUserList` component in a Langflow flow, passing the outputs to the Prompt component:
-
-
## Best Practices
- When using the `NotionUserList` component, consider the following best practices:
+When using the `NotionUserList` component, consider the following best practices:
- Ensure that you have a valid Notion integration token with the necessary permissions to retrieve user data.
- Handle the retrieved user data securely and in compliance with Notion's API usage guidelines.
The `NotionUserList` component provides a seamless way to integrate Notion user data into your Langflow workflows. By leveraging this component, you can easily retrieve and utilize user information from Notion, enhancing the capabilities of your Langflow applications. Feel free to explore and experiment with the `NotionUserList` component to unlock new possibilities in your Langflow projects!
-
## Troubleshooting
- If you encounter any issues while using the `NotionUserList` component, consider the following:
+If you encounter any issues while using the `NotionUserList` component, consider the following:
- Double-check that your Notion integration token is valid and has the required permissions.
- Verify that you have installed the necessary dependencies (`requests`) for the component to function properly.
-- Check the Notion API documentation for any updates or changes that may affect the component's functionality.
\ No newline at end of file
+- Check the Notion API documentation for any updates or changes that may affect the component's functionality.
diff --git a/docs/docs/integrations/notion/page-content-viewer.md b/docs/docs/integrations/notion/page-content-viewer.md
index a38c05fd0..f4eeba052 100644
--- a/docs/docs/integrations/notion/page-content-viewer.md
+++ b/docs/docs/integrations/notion/page-content-viewer.md
@@ -11,7 +11,7 @@ The `NotionPageContent` component retrieves the content of a Notion page as plai
- The `NotionPageContent` component enables you to:
+The `NotionPageContent` component enables you to:
- Retrieve the content of a Notion page as plain text
- Extract text from various block types, including paragraphs, headings, lists, and more
@@ -114,18 +114,18 @@ class NotionPageContent(CustomComponent):
Here's an example of how you can use the `NotionPageContent` component in a Langflow flow:
## Best Practices
- When using the `NotionPageContent` component, consider the following best practices:
+When using the `NotionPageContent` component, consider the following best practices:
- Ensure that you have the necessary permissions to access the Notion page you want to retrieve.
- Keep your Notion integration token secure and avoid sharing it publicly.
@@ -135,7 +135,7 @@ The `NotionPageContent` component provides a seamless way to integrate Notion pa
## Troubleshooting
- If you encounter any issues while using the `NotionPageContent` component, consider the following:
+If you encounter any issues while using the `NotionPageContent` component, consider the following:
- Double-check that you have provided the correct Notion page ID.
- Verify that your Notion integration token is valid and has the necessary permissions.
diff --git a/docs/docs/integrations/notion/page-create.md b/docs/docs/integrations/notion/page-create.md
index 0269096b9..f942f257b 100644
--- a/docs/docs/integrations/notion/page-create.md
+++ b/docs/docs/integrations/notion/page-create.md
@@ -97,16 +97,17 @@ class NotionPageCreator(CustomComponent):
```
## Example Usage
+
Here's an example of how to use the `NotionPageCreator` component in a Langflow flow:
@@ -124,6 +125,7 @@ The `NotionPageCreator` component simplifies the process of creating pages in a
## Troubleshooting
If you encounter any issues while using the `NotionPageCreator` component, consider the following:
+
- Double-check that the `database_id` and `notion_secret` inputs are correct and valid.
- Verify that the `properties` input is properly formatted as a JSON string and matches the structure of your Notion database.
-- Check the Notion API documentation for any updates or changes that may affect the component's functionality.
\ No newline at end of file
+- Check the Notion API documentation for any updates or changes that may affect the component's functionality.
diff --git a/docs/docs/integrations/notion/page-update.md b/docs/docs/integrations/notion/page-update.md
index 3389f64d3..0370a2b3a 100644
--- a/docs/docs/integrations/notion/page-update.md
+++ b/docs/docs/integrations/notion/page-update.md
@@ -109,12 +109,12 @@ Let's break down the key parts of this component:
Here's an example of how to use the `NotionPageUpdate` component in a Langflow flow:
@@ -128,7 +128,6 @@ When using the `NotionPageUpdate` component, consider the following best practic
By leveraging the `NotionPageUpdate` component in Langflow, you can easily integrate updating Notion page properties into your language model workflows and build powerful applications that extend Langflow's capabilities.
-
## Troubleshooting
If you encounter any issues while using the `NotionPageUpdate` component, consider the following:
diff --git a/docs/docs/integrations/notion/search.md b/docs/docs/integrations/notion/search.md
index 3ff7472dc..a972bffc0 100644
--- a/docs/docs/integrations/notion/search.md
+++ b/docs/docs/integrations/notion/search.md
@@ -146,16 +146,17 @@ class NotionSearch(CustomComponent):
```
## Example Usage
+
Here's an example of how you can use the `NotionSearch` component in a Langflow flow:
In this example, the `NotionSearch` component is used to search for pages and databases in Notion based on the provided query and filter criteria. The retrieved data can then be processed further in the subsequent components of the flow.
diff --git a/docs/docs/integrations/notion/setup.md b/docs/docs/integrations/notion/setup.md
index 9511d9c81..72bb8f3b4 100644
--- a/docs/docs/integrations/notion/setup.md
+++ b/docs/docs/integrations/notion/setup.md
@@ -76,4 +76,3 @@ Refer to the individual component documentation for more details on how to use e
- [Notion Integration Capabilities](https://developers.notion.com/reference/capabilities)
If you encounter any issues or have questions, please reach out to our support team or consult the Langflow community forums.
-
diff --git a/docs/docs/migration/api.mdx b/docs/docs/migration/api.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/component-status-and-data-passing.mdx b/docs/docs/migration/component-status-and-data-passing.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/connecting-output-components.mdx b/docs/docs/migration/connecting-output-components.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/custom-component.mdx b/docs/docs/migration/custom-component.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/experimental-components.mdx b/docs/docs/migration/experimental-components.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/flow-of-data.mdx b/docs/docs/migration/flow-of-data.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/global-variables.mdx b/docs/docs/migration/global-variables.mdx
deleted file mode 100644
index 3430ef405..000000000
--- a/docs/docs/migration/global-variables.mdx
+++ /dev/null
@@ -1,116 +0,0 @@
-import ZoomableImage from "/src/theme/ZoomableImage.js";
-import Admonition from "@theme/Admonition";
-
-# Global Variables
-
-## TLDR;
-
-- Global Variables are reusable variables that can be accessed from any Text field in your project.
-- To create a Global Variable, click on the 🌐 button in a Text field and then **+ Add New Variable**.
-- Define the **Name**, **Type**, and **Value** of the variable.
-- Click on **Save Variable** to create the variable.
-- All Credential Global Variables are encrypted and cannot be accessed by anyone but you.
-- Set _`LANGFLOW_STORE_ENVIRONMENT_VARIABLES`_ to _`true`_ in your `.env` file to add all variables in _`LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT`_ to your user's Global Variables.
-
-Global Variables are a really useful feature of Langflow.
-They allow you to define reusable variables that can be accessed from any Text field in your project.
-
-The first thing you need to do is find a **Text field** in a Component, so let's talk about what a Text field is.
-
-## Text Fields
-
-Text fields are the fields in a Component where you can write text but that does not allow you to open a Text Area.
-
-The easiest way to find fields that are Text fields, though, is to look for fields that have a 🌐 button.
-
-
-
-## Creating a Global Variable
-
-To create a Global Variable, you need to click on the 🌐 button in a Text field and that will open a dropdown showing your currently available variables and at the end of it **+ Add New Variable**.
-
-
-
-Click on **+ Add New Variable** and a window will open where you can define your new Global Variable.
-
-In it, you can define the **Name** of the variable, the optional **Type** of the variable, and the **Value** of the variable.
-
-The **Name** is the name that you will use to refer to the variable in your Text fields.
-
-The **Type** is optional for now but will be used in the future to allow for more advanced features.
-
-The **Value** is the value that the variable will have.
-{/* say that all variables are encrypted */}
-
-
- All Credential Global Variables are encrypted and cannot be accessed by anyone
- but you.
-
-
-
-
-After you have defined your variable, click on **Save Variable** and your variable will be created.
-
-After that, once you click on the 🌐 button in a Text field, you will see your new variable in the dropdown.
-
-## Environment Variables
-
-If you set _`LANGFLOW_STORE_ENVIRONMENT_VARIABLES`_ to _`true`_ (which is the default value) in your `.env` file, all variables in _`LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT`_ will be added to your user's Global Variables.
-
-All of these variables can be used in your project as any other Global Variable.
-
-
- You can set _`LANGFLOW_STORE_ENVIRONMENT_VARIABLES`_ to _`false`_ in your
- `.env` file to prevent this behavior.
-
-
-You can also set _`LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT`_ to a list of variables that you want to get from the environment.
-
-The default list at the moment is:
-
-- ANTHROPIC_API_KEY
-- ASTRA_DB_API_ENDPOINT
-- ASTRA_DB_APPLICATION_TOKEN
-- AZURE_OPENAI_API_KEY
-- AZURE_OPENAI_API_DEPLOYMENT_NAME
-- AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
-- AZURE_OPENAI_API_INSTANCE_NAME
-- AZURE_OPENAI_API_VERSION
-- COHERE_API_KEY
-- GOOGLE_API_KEY
-- GROQ_API_KEY
-- HUGGINGFACEHUB_API_TOKEN
-- OPENAI_API_KEY
-- PINECONE_API_KEY
-- SEARCHAPI_API_KEY
-- SERPAPI_API_KEY
-- VECTARA_CUSTOMER_ID
-- VECTARA_CORPUS_ID
-- VECTARA_API_KEY
-
-
- Set _`LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT`_ as a comma-separated list
- of variables (e.g. _`"VARIABLE1, VARIABLE2"`_) or as a JSON-encoded string
- (e.g. _`'["VARIABLE1", "VARIABLE2"]'`_).
-
diff --git a/docs/docs/migration/inputs-and-outputs.mdx b/docs/docs/migration/inputs-and-outputs.mdx
deleted file mode 100644
index 1e1745347..000000000
--- a/docs/docs/migration/inputs-and-outputs.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
-# Inputs and Outputs
-
-TL;DR: Inputs and Outputs are a category of components that are used to define where data comes in and out of your flow. They also
-dynamically change the Playground and can be renamed to make it easier to build and maintain your flows.
-
-## Introduction
-
-Langflow 1.0 introduces new categories of components called Inputs and Outputs. They are used to make it easier to understand and interact with your flows.
-
-Let's start with what they have in common:
-
-- Components in these categories connect to components that have Text or Record inputs or outputs. Some can connect to both but you have to pick what type of data you want to output or input.
-- They can be renamed to help you identify them more easily in the Playground and while using the API.
-- They dynamically change the Playground to make it easier to understand and interact with your flows.
-
-Native Langflow Components were created to be powerful tools that work around Langflow's features. They are designed to be easy to use and understand, and to help you build your flows faster.
-
-Let's dive into Inputs and Outputs.
-
-## Inputs
-
-Inputs are components that are used to define where data comes into your flow. They can be used to receive data from the user, from a database, or from any other source that can be converted to Text or Record.
-
-The difference between Chat Input and other Input components is the format of the output, the number of configurable fields, and the way they are displayed in the Playground.
-
-Chat Input components can output Text or Record. When you want to pass the sender name, or sender to the next component, you can use the Record output, and when you want to pass the message only you can use the Text output. This is useful when saving the message to a database or a memory system like Zep.
-
-You can find out more about it and the other Inputs [here](../components/inputs).
-
-## Outputs
-
-Outputs are components that are used to define where data comes out of your flow. They can be used to send data to the user, to the Playground, or to define how the data will be displayed in the Playground.
-
-The Chat Output works similarly to the Chat Input but does not have a field that allows for written input. It is used as an Output definition and can be used to send data to the user.
-
-You can find out more about it and the other Outputs [here](../components/outputs).
diff --git a/docs/docs/migration/migrating-to-one-point-zero.mdx b/docs/docs/migration/migrating-to-one-point-zero.mdx
index 827f0e118..973393606 100644
--- a/docs/docs/migration/migrating-to-one-point-zero.mdx
+++ b/docs/docs/migration/migrating-to-one-point-zero.mdx
@@ -41,7 +41,7 @@ We have a special channel in our Discord server dedicated to Langflow 1.0 migrat
Langflow 1.0 introduces the concept of Inputs and Outputs to flows, allowing a clear definition of the data flow between components. Discover how to use Inputs and Outputs to pass data between components and create more dynamic flows.
-[Learn more about Inputs and Outputs of Components](../migration/inputs-and-outputs)
+[Learn more about Inputs and Outputs of Components](../components/inputs-and-outputs)
## To Compose or Not to Compose: the choice is yours
@@ -71,7 +71,7 @@ Langflow 1.0 introduces many new native categories, including Inputs, Outputs, H
With the introduction of Text and Record types connections between Components are more intuitive and easier to understand. This is the first step in a series of improvements to the way you interact with Langflow. Learn how to use Text, and Record and how they help you build better flows.
-[Learn more about Text and Record](../migration/text-and-record)
+[Learn more about Text and Record](../components/text-and-record)
## CustomComponent for All Components
@@ -119,7 +119,7 @@ Things got a whole lot easier. You can now pass tweaks and inputs in the API by
Global Variables can be used in any Text Field across your projects. Learn how to define and utilize Global Variables to streamline your workflow.
-[Learn more about Global Variables](../migration/global-variables)
+[Learn more about Global Variables](../administration/global-env.mdx)
## Experimental Components
diff --git a/docs/docs/migration/multiple-flows.mdx b/docs/docs/migration/multiple-flows.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/new-categories-and-components.mdx b/docs/docs/migration/new-categories-and-components.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/passing-tweaks-and-inputs.mdx b/docs/docs/migration/passing-tweaks-and-inputs.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/possible-installation-issues.mdx b/docs/docs/migration/possible-installation-issues.mdx
index 2590d0b8a..a012a1c09 100644
--- a/docs/docs/migration/possible-installation-issues.mdx
+++ b/docs/docs/migration/possible-installation-issues.mdx
@@ -25,11 +25,11 @@ ModuleNotFoundError: No module named 'langflow.__main__'
There are two possible reasons for this error:
1. You've installed Langflow using _`pip install langflow`_ but you already had a previous version of Langflow installed in your system.
- In this case, you might be running the wrong executable.
- To solve this issue, run the correct executable by running _`python -m langflow run`_ instead of _`langflow run`_.
- If that doesn't work, try uninstalling and reinstalling Langflow with _`python -m pip install langflow --pre -U`_.
+ In this case, you might be running the wrong executable.
+ To solve this issue, run the correct executable by running _`python -m langflow run`_ instead of _`langflow run`_.
+ If that doesn't work, try uninstalling and reinstalling Langflow with _`python -m pip install langflow --pre -U`_.
2. Some version conflicts might have occurred during the installation process.
- Run _`python -m pip install langflow --pre -U --force-reinstall`_ to reinstall Langflow and its dependencies.
+ Run _`python -m pip install langflow --pre -U --force-reinstall`_ to reinstall Langflow and its dependencies.
## _`Something went wrong running migrations. Please, run 'langflow migration --fix'`_
@@ -45,4 +45,3 @@ There are two possible reasons for this error:
This error can occur during Langflow upgrades when the new version can't override `langflow-pre.db` in `.cache/langflow/`. Clearing the cache removes this file but will also erase your settings.
If you wish to retain your files, back them up before clearing the folder.
-
diff --git a/docs/docs/migration/renaming-and-editing-components.mdx b/docs/docs/migration/renaming-and-editing-components.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/sidebar-and-interaction-panel.mdx b/docs/docs/migration/sidebar-and-interaction-panel.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/state-management.mdx b/docs/docs/migration/state-management.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/supported-frameworks.mdx b/docs/docs/migration/supported-frameworks.mdx
deleted file mode 100644
index e69de29bb..000000000
diff --git a/docs/docs/migration/text-and-record.mdx b/docs/docs/migration/text-and-record.mdx
deleted file mode 100644
index cdfb26b6c..000000000
--- a/docs/docs/migration/text-and-record.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
-# Text and Record
-
-In Langflow 1.0 we added two main input and output types: Text and Record. Text is a simple string input and output type, while Record is a structure very similar to a dictionary in Python. It is a key-value pair data structure.
-
-We've created a few components to help you work with these types. Let's see how a few of them work.
-
-### Records To Text
-
-This is a Component that takes in Records and outputs a Text. It does this using a template string and concatenating the values of the Record, one per line.
-
-If we have the following Records:
-
-```json
-{
- "sender_name": "Alice",
- "message": "Hello!"
-}
-{
- "sender_name": "John",
- "message": "Hi!"
-}
-```
-
-If the template string is _`{sender_name}: {message}`_, the output will be:
-
-```
-Alice: Hello!
-John: Hi!
-```
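The concatenation described above can be sketched in plain Python (a minimal illustration of the behavior, not the component's actual implementation):

```python
# Minimal sketch of the Records To Text behavior described above.
# Each Record is a key-value structure; the template is filled per Record
# and the results are joined, one per line.
records = [
    {"sender_name": "Alice", "message": "Hello!"},
    {"sender_name": "John", "message": "Hi!"},
]

template = "{sender_name}: {message}"
text = "\n".join(template.format(**record) for record in records)
print(text)
# Alice: Hello!
# John: Hi!
```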
-
-### Create Record
-
-This Component allows you to create a Record from a number of inputs. You can add as many key-value pairs as you want (as long as there are fewer than 15 😅). Once you've picked that number, you name each Key and can pass Text values from other components to it.
-
-### Documents To Records
-
-This Component takes in a [LangChain](https://langchain.com) Document and outputs a Record. It does this by extracting the _`page_content`_ and the _`metadata`_ from the Document and adding them to the Record as _`text`_ and _`data`_ respectively.
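The mapping described above can be sketched as follows. This is a rough illustration using a stand-in `Document` class with only the two attributes involved, not the component's actual code:

```python
from dataclasses import dataclass, field

# Stand-in for a LangChain Document; only the two attributes used here.
@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

def document_to_record(doc: Document) -> dict:
    # The Document's text and metadata become the Record's
    # `text` and `data` keys, as described above.
    return {"text": doc.page_content, "data": doc.metadata}

doc = Document(page_content="Hello!", metadata={"source": "greeting.txt"})
print(document_to_record(doc))
# {'text': 'Hello!', 'data': {'source': 'greeting.txt'}}
```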
-
-## Why is this useful?
-
-The idea was to create a unified way to work with complex data in Langflow, and to make it easier to work with data that is not just a simple string. This way you can create more complex workflows and use the data in more ways.
-
-## What's next?
-
-We are planning to integrate an array of modalities to Langflow, such as images, audio, and video. This will allow you to create even more complex workflows and use cases. Stay tuned for more updates! 🚀
diff --git a/docs/docs/starter-projects/basic-prompting.mdx b/docs/docs/starter-projects/basic-prompting.mdx
index 6fb7391e2..26b054bcc 100644
--- a/docs/docs/starter-projects/basic-prompting.mdx
+++ b/docs/docs/starter-projects/basic-prompting.mdx
@@ -14,12 +14,15 @@ This article demonstrates how to use Langflow's prompt tools to issue basic prom
## Prerequisites
-* [Langflow installed and running](../getting-started/install-langflow.mdx)
+- [Langflow installed and running](../getting-started/install-langflow.mdx)
-* [OpenAI API key created](https://platform.openai.com)
+- [OpenAI API key created](https://platform.openai.com)
- Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
+ Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
+ using this
+ link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
+ to create your own Langflow workspace in minutes.
## Create the basic prompting project
@@ -42,25 +45,21 @@ Examine the **Prompt** component. The **Template** field instructs the LLM to `A
This should be interesting...
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
- 1. In the **Variable Name** field, enter `openai_api_key`.
- 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
- 3. Click **Save Variable**.
+ 1. In the **Variable Name** field, enter `openai_api_key`.
+ 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
+ 3. Click **Save Variable**.
## Run the basic prompting flow
1. Click the **Run** button.
-The **Interaction Panel** opens, where you can converse with your bot.
+ The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
-The bot responds in a markedly piratical manner!
+ The bot responds in a markedly piratical manner!
## Modify the prompt for a different result
1. To modify your prompt results, in the **Prompt** template, click the **Template** field.
-The **Edit Prompt** window opens.
+ The **Edit Prompt** window opens.
2. Change `Answer the user as if you were a pirate` to a different character, perhaps `Answer the user as if you were Harold Abelson.`
3. Run the basic prompting flow again.
-The response will be markedly different.
-
-
-
-
+ The response will be markedly different.
diff --git a/docs/docs/starter-projects/blog-writer.mdx b/docs/docs/starter-projects/blog-writer.mdx
index 0e8047fd6..9380bf114 100644
--- a/docs/docs/starter-projects/blog-writer.mdx
+++ b/docs/docs/starter-projects/blog-writer.mdx
@@ -10,12 +10,15 @@ Build a blog writer with OpenAI that uses URLs for reference content.
## Prerequisites
-* [Langflow installed and running](../getting-started/install-langflow.mdx)
+- [Langflow installed and running](../getting-started/install-langflow.mdx)
-* [OpenAI API key created](https://platform.openai.com)
+- [OpenAI API key created](https://platform.openai.com)
- Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
+ Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
+ using this
+ link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
+ to create your own Langflow workspace in minutes.
## Create the Blog Writer project
@@ -36,6 +39,7 @@ Build a blog writer with OpenAI that uses URLs for reference content.
This flow creates a one-shot prompt flow with **Prompt**, **OpenAI**, and **Chat Output** components, and augments the flow with reference content and instructions from the **URL** and **Instructions** components.
The **Prompt** component's default **Template** field looks like this:
+
```bash
Reference 1:
@@ -59,16 +63,16 @@ The `{instructions}` value is received from the **Value** field of the **Instruc
The `reference_1` and `reference_2` values are received from the **URL** fields of the **URL** components.
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
- 1. In the **Variable Name** field, enter `openai_api_key`.
- 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
- 3. Click **Save Variable**.
+ 1. In the **Variable Name** field, enter `openai_api_key`.
+ 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
+ 3. Click **Save Variable**.
## Run the Blog Writer flow
1. Click the **Run** button.
-The **Interaction Panel** opens, where you can run your one-shot flow.
+ The **Interaction Panel** opens, where you can run your one-shot flow.
2. Click the **Lightning Bolt** icon to run your flow.
3. The **OpenAI** component constructs a blog post with the **URL** items as context.
-The default **URL** values are for web pages at `promptingguide.ai`, so your blog post will be about prompting LLMs.
+ The default **URL** values are for web pages at `promptingguide.ai`, so your blog post will be about prompting LLMs.
-To write about something different, change the values in the **URL** components, and see what the LLM constructs.
\ No newline at end of file
+To write about something different, change the values in the **URL** components, and see what the LLM constructs.
diff --git a/docs/docs/starter-projects/document-qa.mdx b/docs/docs/starter-projects/document-qa.mdx
index 5e5377355..ddbcd901a 100644
--- a/docs/docs/starter-projects/document-qa.mdx
+++ b/docs/docs/starter-projects/document-qa.mdx
@@ -10,12 +10,15 @@ Build a question-and-answer chatbot with a document loaded from local memory.
## Prerequisites
-* [Langflow installed and running](../getting-started/install-langflow.mdx)
+- [Langflow installed and running](../getting-started/install-langflow.mdx)
-* [OpenAI API key created](https://platform.openai.com)
+- [OpenAI API key created](https://platform.openai.com)
- Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
+ Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
+ using this
+ link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
+ to create your own Langflow workspace in minutes.
## Create the Document QA project
@@ -39,24 +42,27 @@ The **Prompt** component is instructed to answer questions based on the contents
Including a file with the prompt gives the **OpenAI** component context it may not otherwise have access to.
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
- 1. In the **Variable Name** field, enter `openai_api_key`.
- 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
- 3. Click **Save Variable**.
+
+ 1. In the **Variable Name** field, enter `openai_api_key`.
+ 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
+ 3. Click **Save Variable**.
5. To select a document to load, in the **Files** component, click within the **Path** field.
- 1. Select a local file, and then click **Open**.
- 2. The file name appears in the field.
-
- The file must be of an extension type listed [here](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/base/data/utils.py#L13).
-
+ 1. Select a local file, and then click **Open**.
+ 2. The file name appears in the field.
+
+ The file must be of an extension type listed
+ [here](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/base/data/utils.py#L13).
+
## Run the Document QA flow
1. Click the **Run** button.
-The **Interaction Panel** opens, where you can converse with your bot.
+ The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
-For this example, we loaded an error log `.txt` file and asked, "What went wrong?"
-The bot responded:
+ For this example, we loaded an error log `.txt` file and asked, "What went wrong?"
+ The bot responded:
+
```
The issue occurred during the execution of migrations in the application. Specifically, an error was raised by the Alembic library, indicating that new upgrade operations were detected that had not been accounted for in the existing migration scripts. The operation in question involved modifying the nullable property of a column (apikey, created_at) in the database, with details about the existing type (DATETIME()), existing server default, and other properties.
```
diff --git a/docs/docs/starter-projects/memory-chatbot.mdx b/docs/docs/starter-projects/memory-chatbot.mdx
index 86c64d368..8e38ca3e0 100644
--- a/docs/docs/starter-projects/memory-chatbot.mdx
+++ b/docs/docs/starter-projects/memory-chatbot.mdx
@@ -10,12 +10,15 @@ This flow extends the [basic prompting flow](./basic-prompting.mdx) to include c
## Prerequisites
-* [Langflow installed and running](../getting-started/install-langflow.mdx)
+- [Langflow installed and running](../getting-started/install-langflow.mdx)
-* [OpenAI API key created](https://platform.openai.com)
+- [OpenAI API key created](https://platform.openai.com)
- Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
+ Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
+ using this
+ link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
+ to create your own Langflow workspace in minutes.
## Create the memory chatbot project
@@ -43,16 +46,16 @@ This chatbot is augmented with the **Chat Memory** component, which stores messa
The **Chat History** component gives the **OpenAI** component a memory of previous questions.
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
- 1. In the **Variable Name** field, enter `openai_api_key`.
- 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
- 3. Click **Save Variable**.
+ 1. In the **Variable Name** field, enter `openai_api_key`.
+ 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
+ 3. Click **Save Variable**.
## Run the memory chatbot flow
1. Click the **Run** button.
-The **Interaction Panel** opens, where you can converse with your bot.
+ The **Interaction Panel** opens, where you can converse with your bot.
2. Type a message and press Enter.
-The bot will respond according to the template in the **Prompt** component.
+ The bot will respond according to the template in the **Prompt** component.
3. Type more questions. In the **Outputs** log, your queries are logged in order. Up to 5 queries are stored by default. Try asking `What is the first subject I asked you about?` to see where the LLM's memory disappears.
## Modify the Session ID field to have multiple conversations
@@ -65,11 +68,11 @@ You can demonstrate this by modifying the **Session ID** value to switch between
1. In the **Session ID** field of the **Chat Memory** and **Chat Input** components, change the **Session ID** value from `MySessionID` to `AnotherSessionID`.
2. Click the **Run** button to run your flow.
-In the **Interaction Panel**, you will have a new conversation. (You may need to clear the cache with the **Eraser** button).
+ In the **Interaction Panel**, you will have a new conversation. (You may need to clear the cache with the **Eraser** button).
3. Type a few questions to your bot.
4. In the **Session ID** field of the **Chat Memory** and **Chat Input** components, change the **Session ID** value back to `MySessionID`.
5. Run your flow.
-The **Outputs** log of the **Interaction Panel** displays the history from your initial chat with `MySessionID`.
+ The **Outputs** log of the **Interaction Panel** displays the history from your initial chat with `MySessionID`.
## Store Session ID as a Langflow variable
@@ -79,4 +82,3 @@ To store **Session ID** as a Langflow variable, in the **Session ID** field, cli
2. In the **Value** field, enter a value like `1B5EBD79-6E9C-4533-B2C8-7E4FF29E983B`.
3. Click **Save Variable**.
4. Apply this variable to **Chat Input**.
-
diff --git a/docs/docs/starter-projects/vector-store-rag.mdx b/docs/docs/starter-projects/vector-store-rag.mdx
index ddb0a1d46..d0054e6c4 100644
--- a/docs/docs/starter-projects/vector-store-rag.mdx
+++ b/docs/docs/starter-projects/vector-store-rag.mdx
@@ -17,16 +17,19 @@ We've chosen [Astra DB](https://astra.datastax.com/signup?utm_source=langflow-pr
## Prerequisites
- Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space using this link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) to create your own Langflow workspace in minutes.
+ Langflow v1.0 alpha is also available in HuggingFace Spaces. [Clone the space
+ using this
+ link](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
+ to create your own Langflow workspace in minutes.
-* [Langflow installed and running](../getting-started/install-langflow.mdx)
+- [Langflow installed and running](../getting-started/install-langflow.mdx)
-* [OpenAI API key](https://platform.openai.com)
+- [OpenAI API key](https://platform.openai.com)
-* [An Astra DB vector database created](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with:
- * Application token (`AstraCS:WSnyFUhRxsrg…`)
- * API endpoint (`https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com`)
+- [An Astra DB vector database created](https://docs.datastax.com/en/astra-db-serverless/get-started/quickstart.html) with:
+ - Application token (`AstraCS:WSnyFUhRxsrg…`)
+ - API endpoint (`https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com`)
## Create the vector store RAG project
@@ -49,38 +52,40 @@ The **ingestion** flow (bottom of the screen) populates the vector store with da
It ingests data from a file (**File**), splits it into chunks (**Recursive Character Text Splitter**), indexes it in Astra DB (**Astra DB**), and computes embeddings for the chunks (**OpenAI Embeddings**).
This forms a "brain" for the query flow.
-The **query** flow (top of the screen) allows users to chat with the embedded vector store data. It's a little more complex:
+The **query** flow (top of the screen) allows users to chat with the embedded vector store data. It's a little more complex:
-* **Chat Input** component defines where to put the user input coming from the Playground.
-* **OpenAI Embeddings** component generates embeddings from the user input.
-* **Astra DB Search** component retrieves the most relevant Records from the Astra DB database.
-* **Text Output** component turns the Records into Text by concatenating them and also displays it in the Playground.
-* **Prompt** component takes in the user input and the retrieved Records as text and builds a prompt for the OpenAI model.
-* **OpenAI** component generates a response to the prompt.
-* **Chat Output** component displays the response in the Playground.
+- **Chat Input** component defines where to put the user input coming from the Playground.
+- **OpenAI Embeddings** component generates embeddings from the user input.
+- **Astra DB Search** component retrieves the most relevant Records from the Astra DB database.
+- **Text Output** component turns the Records into Text by concatenating them and also displays it in the Playground.
+- **Prompt** component takes in the user input and the retrieved Records as text and builds a prompt for the OpenAI model.
+- **OpenAI** component generates a response to the prompt.
+- **Chat Output** component displays the response in the Playground.
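The query flow's data path can be sketched with stand-in functions. These names and toy implementations are hypothetical; in Langflow the real components are wired together visually on the canvas:

```python
# Hypothetical stand-ins for the query-flow components listed above.
def embed(text: str) -> list[float]:
    # OpenAI Embeddings: turn the user input into a vector (toy version).
    return [float(len(word)) for word in text.split()]

def vector_search(query_vector: list[float]) -> list[dict]:
    # Astra DB Search: return the most relevant Records (stubbed here).
    return [{"text": "A line is a straight one-dimensional figure."}]

def build_prompt(question: str, records: list[dict]) -> str:
    # Prompt component: combine the user input with the retrieved Records.
    context = "\n".join(record["text"] for record in records)
    return f"Context:\n{context}\n\nQuestion: {question}"

question = "what is a line"
records = vector_search(embed(question))
prompt = build_prompt(question, records)
print(prompt)  # this prompt would then be sent to the OpenAI component
```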
4. To create an environment variable for the **OpenAI** component, in the **OpenAI API Key** field, click the **Globe** button, and then click **Add New Variable**.
- 1. In the **Variable Name** field, enter `openai_api_key`.
- 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
- 3. Click **Save Variable**.
-4. To create environment variables for the **Astra DB** and **Astra DB Search** components:
- 1. In the **Token** field, click the **Globe** button, and then click **Add New Variable**.
- 2. In the **Variable Name** field, enter `astra_token`.
- 3. In the **Value** field, paste your Astra application token (`AstraCS:WSnyFUhRxsrg…`).
- 4. Click **Save Variable**.
- 5. Repeat the above steps for the **API Endpoint** field, pasting your Astra API Endpoint instead (`https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com`).
- 6. Add the global variable to both the **Astra DB** and **Astra DB Search** components.
+ 1. In the **Variable Name** field, enter `openai_api_key`.
+ 2. In the **Value** field, paste your OpenAI API Key (`sk-...`).
+ 3. Click **Save Variable**.
+
+5. To create environment variables for the **Astra DB** and **Astra DB Search** components:
+ 1. In the **Token** field, click the **Globe** button, and then click **Add New Variable**.
+ 2. In the **Variable Name** field, enter `astra_token`.
+ 3. In the **Value** field, paste your Astra application token (`AstraCS:WSnyFUhRxsrg…`).
+ 4. Click **Save Variable**.
+ 5. Repeat the above steps for the **API Endpoint** field, pasting your Astra API Endpoint instead (`https://ASTRA_DB_ID-ASTRA_DB_REGION.apps.astra.datastax.com`).
+ 6. Add the global variable to both the **Astra DB** and **Astra DB Search** components.
## Run the vector store RAG flow
1. Click the **Playground** button.
-The **Playground** opens, where you can chat with your data.
+ The **Playground** opens, where you can chat with your data.
2. Type a message and press Enter. (Try something like "What topics do you know about?")
3. The bot will respond with a summary of the data you've embedded.
For example, we embedded a PDF of an engine maintenance manual and asked, "How do I change the oil?"
The bot responds:
+
```
To change the oil in the engine, follow these steps:
@@ -102,7 +107,3 @@ You should use a 3/8 inch wrench to remove the oil drain cap.
```
This is the size the engine manual lists as well. This confirms our flow works, because the query returns the unique knowledge we embedded from the Astra vector store.
-
-
-
-
diff --git a/docs/docs/whats-new/a-new-chapter-langflow.mdx b/docs/docs/whats-new/a-new-chapter-langflow.mdx
index 3ff74ffb2..bdc0f178b 100644
--- a/docs/docs/whats-new/a-new-chapter-langflow.mdx
+++ b/docs/docs/whats-new/a-new-chapter-langflow.mdx
@@ -41,7 +41,7 @@ By having a clear definition of Inputs and Outputs, we could build the experienc
When building a project, testing and debugging are crucial. The Playground is a tool that changes dynamically based on the Inputs and Outputs you defined in your project.
For example, let's say you are building a simple RAG application. Generally, you have an Input, some references that come from a Vector Store Search, a Prompt and the answer.
-Now, you could plug the output of your Prompt into a [Text Output](../components/outputs#Text-Output), rename that to "Prompt Result" and see the output of your Prompt in the Playground.
+Now, you could plug the output of your Prompt into a [Text Output](../components/inputs-and-outputs), rename that to "Prompt Result" and see the output of your Prompt in the Playground.
{/* Add image here of the described above */}
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 2b891b589..04d81d475 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -49,8 +49,8 @@ module.exports = {
label: "Core Components",
collapsed: false,
items: [
- "components/inputs",
- "components/outputs",
+ "components/inputs-and-outputs",
+ "components/text-and-record",
"components/data",
"components/models",
"components/helpers",
@@ -80,26 +80,23 @@ module.exports = {
label: "Example Components",
collapsed: true,
items: [
- "examples/flow-runner",
- "examples/conversation-chain",
- "examples/buffer-memory",
- "examples/csv-loader",
- "examples/searchapi-tool",
- "examples/serp-api-tool",
- "examples/python-function",
+ "examples/chat-memory",
+ "examples/combine-text",
+ "examples/create-record",
+ "examples/pass",
+ "examples/store-message",
+ "examples/sub-flow",
+ "examples/text-operator",
],
},
{
type: "category",
- label: "Migration Guides",
+ label: "Migration",
collapsed: false,
items: [
"migration/possible-installation-issues",
"migration/migrating-to-one-point-zero",
- "migration/inputs-and-outputs",
- "migration/text-and-record",
"migration/compatibility",
- "migration/global-variables",
],
},
{
@@ -116,7 +113,11 @@ module.exports = {
type: "category",
label: "Deployment",
collapsed: true,
- items: ["deployment/gcp-deployment"],
+ items: [
+ "deployment/docker",
+ "deployment/backend-only",
+ "deployment/gcp-deployment",
+ ],
},
{
type: "category",
diff --git a/docs/static/data/AstraDB-RAG-Flows.json b/docs/static/data/AstraDB-RAG-Flows.json
index 10dafa85f..d8bd23eb2 100644
--- a/docs/static/data/AstraDB-RAG-Flows.json
+++ b/docs/static/data/AstraDB-RAG-Flows.json
@@ -1,3403 +1,3147 @@
{
- "id": "51e2b78a-199b-4054-9f32-e288eef6924c",
- "data": {
- "nodes": [
- {
- "id": "ChatInput-yxMKE",
- "type": "genericNode",
- "position": {
- "x": 1195.5276981160775,
- "y": 209.421875
- },
- "data": {
- "type": "ChatInput",
- "node": {
- "template": {
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from typing import Optional, Union\n\nfrom langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.schema import Record\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n def build_config(self):\n build_config = super().build_config()\n build_config[\"input_value\"] = {\n \"input_types\": [],\n \"display_name\": \"Message\",\n \"multiline\": True,\n }\n\n return build_config\n\n def build(\n self,\n sender: Optional[str] = \"User\",\n sender_name: Optional[str] = \"User\",\n input_value: Optional[str] = None,\n session_id: Optional[str] = None,\n return_record: Optional[bool] = False,\n ) -> Union[Text, Record]:\n return super().build(\n sender=sender,\n sender_name=sender_name,\n input_value=input_value,\n session_id=session_id,\n return_record=return_record,\n )\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "input_value": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "input_value",
- "display_name": "Message",
- "advanced": false,
- "input_types": [],
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "value": "what is a line"
- },
- "return_record": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "return_record",
- "display_name": "Return Record",
- "advanced": true,
- "dynamic": false,
- "info": "Return the message as a record containing the sender, sender_name, and session_id.",
- "load_from_db": false,
- "title_case": false
- },
- "sender": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "value": "User",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "options": [
- "Machine",
- "User"
- ],
- "name": "sender",
- "display_name": "Sender Type",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "sender_name": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": "User",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "sender_name",
- "display_name": "Sender Name",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "session_id": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "session_id",
- "display_name": "Session ID",
- "advanced": true,
- "dynamic": false,
- "info": "If provided, the message will be stored in the memory.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "_type": "CustomComponent"
- },
- "description": "Get chat inputs from the Playground.",
- "icon": "ChatInput",
- "base_classes": [
- "Text",
- "str",
- "object",
- "Record"
- ],
- "display_name": "Chat Input",
- "documentation": "",
- "custom_fields": {
- "sender": null,
- "sender_name": null,
- "input_value": null,
- "session_id": null,
- "return_record": null
- },
- "output_types": [
- "Text",
- "Record"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [],
- "beta": false
- },
- "id": "ChatInput-yxMKE"
- },
- "selected": false,
- "width": 384,
- "height": 383
+ "id": "51e2b78a-199b-4054-9f32-e288eef6924c",
+ "data": {
+ "nodes": [
+ {
+ "id": "ChatInput-yxMKE",
+ "type": "genericNode",
+ "position": {
+ "x": 1195.5276981160775,
+ "y": 209.421875
+ },
+ "data": {
+ "type": "ChatInput",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from typing import Optional, Union\n\nfrom langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.schema import Record\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n def build_config(self):\n build_config = super().build_config()\n build_config[\"input_value\"] = {\n \"input_types\": [],\n \"display_name\": \"Message\",\n \"multiline\": True,\n }\n\n return build_config\n\n def build(\n self,\n sender: Optional[str] = \"User\",\n sender_name: Optional[str] = \"User\",\n input_value: Optional[str] = None,\n session_id: Optional[str] = None,\n return_record: Optional[bool] = False,\n ) -> Union[Text, Record]:\n return super().build(\n sender=sender,\n sender_name=sender_name,\n input_value=input_value,\n session_id=session_id,\n return_record=return_record,\n )\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "input_value": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "input_value",
+ "display_name": "Message",
+ "advanced": false,
+ "input_types": [],
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "value": "what is a line"
+ },
+ "return_record": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "return_record",
+ "display_name": "Return Record",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Return the message as a record containing the sender, sender_name, and session_id.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "sender": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "User",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": ["Machine", "User"],
+ "name": "sender",
+ "display_name": "Sender Type",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "sender_name": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": "User",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "sender_name",
+ "display_name": "Sender Name",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "session_id": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "session_id",
+ "display_name": "Session ID",
+ "advanced": true,
+ "dynamic": false,
+ "info": "If provided, the message will be stored in the memory.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "_type": "CustomComponent"
},
- {
- "id": "TextOutput-BDknO",
- "type": "genericNode",
- "position": {
- "x": 2322.600672827879,
- "y": 604.9467307442569
- },
- "data": {
- "type": "TextOutput",
- "node": {
- "template": {
- "input_value": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": "",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "input_value",
- "display_name": "Value",
- "advanced": false,
- "input_types": [
- "Record",
- "Text"
- ],
- "dynamic": false,
- "info": "Text or Record to be passed as output.",
- "load_from_db": false,
- "title_case": false
- },
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from typing import Optional\n\nfrom langflow.base.io.text import TextComponent\nfrom langflow.field_typing import Text\n\n\nclass TextOutput(TextComponent):\n display_name = \"Text Output\"\n description = \"Display a text output in the Playground.\"\n icon = \"type\"\n\n def build_config(self):\n return {\n \"input_value\": {\n \"display_name\": \"Value\",\n \"input_types\": [\"Record\", \"Text\"],\n \"info\": \"Text or Record to be passed as output.\",\n },\n \"record_template\": {\n \"display_name\": \"Record Template\",\n \"multiline\": True,\n \"info\": \"Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.\",\n \"advanced\": True,\n },\n }\n\n def build(self, input_value: Optional[Text] = \"\", record_template: str = \"\") -> Text:\n return super().build(input_value=input_value, record_template=record_template)\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "record_template": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "{text}",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "record_template",
- "display_name": "Record Template",
- "advanced": true,
- "dynamic": false,
- "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "_type": "CustomComponent"
- },
- "description": "Display a text output in the Playground.",
- "icon": "type",
- "base_classes": [
- "object",
- "Text",
- "str"
- ],
- "display_name": "Extracted Chunks",
- "documentation": "",
- "custom_fields": {
- "input_value": null,
- "record_template": null
- },
- "output_types": [
- "Text"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [],
- "beta": false
- },
- "id": "TextOutput-BDknO"
- },
- "selected": false,
- "width": 384,
- "height": 289,
- "positionAbsolute": {
- "x": 2322.600672827879,
- "y": 604.9467307442569
- },
- "dragging": false
+ "description": "Get chat inputs from the Playground.",
+ "icon": "ChatInput",
+ "base_classes": ["Text", "str", "object", "Record"],
+ "display_name": "Chat Input",
+ "documentation": "",
+ "custom_fields": {
+ "sender": null,
+ "sender_name": null,
+ "input_value": null,
+ "session_id": null,
+ "return_record": null
},
- {
- "id": "OpenAIEmbeddings-ZlOk1",
- "type": "genericNode",
- "position": {
- "x": 1183.667250865064,
- "y": 687.3171828430261
- },
- "data": {
- "type": "OpenAIEmbeddings",
- "node": {
- "template": {
- "allowed_special": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": [],
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "allowed_special",
- "display_name": "Allowed Special",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "chunk_size": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 1000,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "chunk_size",
- "display_name": "Chunk Size",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "client": {
- "type": "Any",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "client",
- "display_name": "Client",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from typing import Any, Dict, List, Optional\n\nfrom langchain_openai.embeddings.base import OpenAIEmbeddings\n\nfrom langflow.field_typing import Embeddings, NestedDict\nfrom langflow.interface.custom.custom_component import CustomComponent\n\n\nclass OpenAIEmbeddingsComponent(CustomComponent):\n display_name = \"OpenAI Embeddings\"\n description = \"Generate embeddings using OpenAI models.\"\n\n def build_config(self):\n return {\n \"allowed_special\": {\n \"display_name\": \"Allowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"default_headers\": {\n \"display_name\": \"Default Headers\",\n \"advanced\": True,\n \"field_type\": \"dict\",\n },\n \"default_query\": {\n \"display_name\": \"Default Query\",\n \"advanced\": True,\n \"field_type\": \"NestedDict\",\n },\n \"disallowed_special\": {\n \"display_name\": \"Disallowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"chunk_size\": {\"display_name\": \"Chunk Size\", \"advanced\": True},\n \"client\": {\"display_name\": \"Client\", \"advanced\": True},\n \"deployment\": {\"display_name\": \"Deployment\", \"advanced\": True},\n \"embedding_ctx_length\": {\n \"display_name\": \"Embedding Context Length\",\n \"advanced\": True,\n },\n \"max_retries\": {\"display_name\": \"Max Retries\", \"advanced\": True},\n \"model\": {\n \"display_name\": \"Model\",\n \"advanced\": False,\n \"options\": [\n \"text-embedding-3-small\",\n \"text-embedding-3-large\",\n \"text-embedding-ada-002\",\n ],\n },\n \"model_kwargs\": {\"display_name\": \"Model Kwargs\", \"advanced\": True},\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"password\": True,\n \"advanced\": True,\n },\n \"openai_api_key\": {\"display_name\": \"OpenAI API Key\", \"password\": True},\n \"openai_api_type\": {\n \"display_name\": \"OpenAI API Type\",\n \"advanced\": True,\n \"password\": True,\n },\n \"openai_api_version\": {\n \"display_name\": \"OpenAI API Version\",\n \"advanced\": True,\n },\n \"openai_organization\": {\n \"display_name\": \"OpenAI Organization\",\n \"advanced\": True,\n },\n \"openai_proxy\": {\"display_name\": \"OpenAI Proxy\", \"advanced\": True},\n \"request_timeout\": {\"display_name\": \"Request Timeout\", \"advanced\": True},\n \"show_progress_bar\": {\n \"display_name\": \"Show Progress Bar\",\n \"advanced\": True,\n },\n \"skip_empty\": {\"display_name\": \"Skip Empty\", \"advanced\": True},\n \"tiktoken_model_name\": {\n \"display_name\": \"TikToken Model Name\",\n \"advanced\": True,\n },\n \"tiktoken_enable\": {\"display_name\": \"TikToken Enable\", \"advanced\": True},\n }\n\n def build(\n self,\n openai_api_key: str,\n default_headers: Optional[Dict[str, str]] = None,\n default_query: Optional[NestedDict] = {},\n allowed_special: List[str] = [],\n disallowed_special: List[str] = [\"all\"],\n chunk_size: int = 1000,\n client: Optional[Any] = None,\n deployment: str = \"text-embedding-ada-002\",\n embedding_ctx_length: int = 8191,\n max_retries: int = 6,\n model: str = \"text-embedding-ada-002\",\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n openai_api_type: Optional[str] = None,\n openai_api_version: Optional[str] = None,\n openai_organization: Optional[str] = None,\n openai_proxy: Optional[str] = None,\n request_timeout: Optional[float] = None,\n show_progress_bar: bool = False,\n skip_empty: bool = False,\n tiktoken_enable: bool = True,\n tiktoken_model_name: Optional[str] = None,\n ) -> Embeddings:\n # This is to avoid errors with Vector Stores (e.g Chroma)\n if disallowed_special == [\"all\"]:\n disallowed_special = \"all\" # type: ignore\n\n return OpenAIEmbeddings(\n tiktoken_enabled=tiktoken_enable,\n default_headers=default_headers,\n default_query=default_query,\n allowed_special=set(allowed_special),\n disallowed_special=\"all\",\n chunk_size=chunk_size,\n client=client,\n deployment=deployment,\n embedding_ctx_length=embedding_ctx_length,\n max_retries=max_retries,\n model=model,\n model_kwargs=model_kwargs,\n base_url=openai_api_base,\n api_key=openai_api_key,\n openai_api_type=openai_api_type,\n api_version=openai_api_version,\n organization=openai_organization,\n openai_proxy=openai_proxy,\n timeout=request_timeout,\n show_progress_bar=show_progress_bar,\n skip_empty=skip_empty,\n tiktoken_model_name=tiktoken_model_name,\n )\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "default_headers": {
- "type": "dict",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "default_headers",
- "display_name": "Default Headers",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "default_query": {
- "type": "NestedDict",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": {},
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "default_query",
- "display_name": "Default Query",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "deployment": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": "text-embedding-ada-002",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "deployment",
- "display_name": "Deployment",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "disallowed_special": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": [
- "all"
- ],
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "disallowed_special",
- "display_name": "Disallowed Special",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "embedding_ctx_length": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 8191,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "embedding_ctx_length",
- "display_name": "Embedding Context Length",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "max_retries": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 6,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "max_retries",
- "display_name": "Max Retries",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "model": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "value": "text-embedding-ada-002",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "options": [
- "text-embedding-3-small",
- "text-embedding-3-large",
- "text-embedding-ada-002"
- ],
- "name": "model",
- "display_name": "Model",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "model_kwargs": {
- "type": "NestedDict",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": {},
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "model_kwargs",
- "display_name": "Model Kwargs",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "openai_api_base": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": true,
- "name": "openai_api_base",
- "display_name": "OpenAI API Base",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_api_key": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": true,
- "name": "openai_api_key",
- "display_name": "OpenAI API Key",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "load_from_db": true,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": "OPENAI_API_KEY"
- },
- "openai_api_type": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": true,
- "name": "openai_api_type",
- "display_name": "OpenAI API Type",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_api_version": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "openai_api_version",
- "display_name": "OpenAI API Version",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_organization": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "openai_organization",
- "display_name": "OpenAI Organization",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_proxy": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "openai_proxy",
- "display_name": "OpenAI Proxy",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "request_timeout": {
- "type": "float",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "request_timeout",
- "display_name": "Request Timeout",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "rangeSpec": {
- "step_type": "float",
- "min": -1,
- "max": 1,
- "step": 0.1
- },
- "load_from_db": false,
- "title_case": false
- },
- "show_progress_bar": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "show_progress_bar",
- "display_name": "Show Progress Bar",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "skip_empty": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "skip_empty",
- "display_name": "Skip Empty",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "tiktoken_enable": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": true,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "tiktoken_enable",
- "display_name": "TikToken Enable",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "tiktoken_model_name": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "tiktoken_model_name",
- "display_name": "TikToken Model Name",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "_type": "CustomComponent"
- },
- "description": "Generate embeddings using OpenAI models.",
- "base_classes": [
- "Embeddings"
- ],
- "display_name": "OpenAI Embeddings",
- "documentation": "",
- "custom_fields": {
- "openai_api_key": null,
- "default_headers": null,
- "default_query": null,
- "allowed_special": null,
- "disallowed_special": null,
- "chunk_size": null,
- "client": null,
- "deployment": null,
- "embedding_ctx_length": null,
- "max_retries": null,
- "model": null,
- "model_kwargs": null,
- "openai_api_base": null,
- "openai_api_type": null,
- "openai_api_version": null,
- "openai_organization": null,
- "openai_proxy": null,
- "request_timeout": null,
- "show_progress_bar": null,
- "skip_empty": null,
- "tiktoken_enable": null,
- "tiktoken_model_name": null
- },
- "output_types": [
- "Embeddings"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [],
- "beta": false
- },
- "id": "OpenAIEmbeddings-ZlOk1"
- },
- "selected": false,
- "width": 384,
- "height": 383,
- "dragging": false
+ "output_types": ["Text", "Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "ChatInput-yxMKE"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 383
+ },
+ {
+ "id": "TextOutput-BDknO",
+ "type": "genericNode",
+ "position": {
+ "x": 2322.600672827879,
+ "y": 604.9467307442569
+ },
+ "data": {
+ "type": "TextOutput",
+ "node": {
+ "template": {
+ "input_value": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": "",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "input_value",
+ "display_name": "Value",
+ "advanced": false,
+ "input_types": ["Record", "Text"],
+ "dynamic": false,
+ "info": "Text or Record to be passed as output.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from typing import Optional\n\nfrom langflow.base.io.text import TextComponent\nfrom langflow.field_typing import Text\n\n\nclass TextOutput(TextComponent):\n display_name = \"Text Output\"\n description = \"Display a text output in the Playground.\"\n icon = \"type\"\n\n def build_config(self):\n return {\n \"input_value\": {\n \"display_name\": \"Value\",\n \"input_types\": [\"Record\", \"Text\"],\n \"info\": \"Text or Record to be passed as output.\",\n },\n \"record_template\": {\n \"display_name\": \"Record Template\",\n \"multiline\": True,\n \"info\": \"Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.\",\n \"advanced\": True,\n },\n }\n\n def build(self, input_value: Optional[Text] = \"\", record_template: str = \"\") -> Text:\n return super().build(input_value=input_value, record_template=record_template)\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "record_template": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "{text}",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "record_template",
+ "display_name": "Record Template",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "_type": "CustomComponent"
},
- {
- "id": "OpenAIModel-EjXlN",
- "type": "genericNode",
- "position": {
- "x": 3410.117202077183,
- "y": 431.2038048137648
- },
- "data": {
- "type": "OpenAIModel",
- "node": {
- "template": {
- "input_value": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "input_value",
- "display_name": "Input",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from typing import Optional\n\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.field_typing import NestedDict, Text\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n field_order = [\n \"max_tokens\",\n \"model_kwargs\",\n \"model_name\",\n \"openai_api_base\",\n \"openai_api_key\",\n \"temperature\",\n \"input_value\",\n \"system_message\",\n \"stream\",\n ]\n\n def build_config(self):\n return {\n \"input_value\": {\"display_name\": \"Input\"},\n \"max_tokens\": {\n \"display_name\": \"Max Tokens\",\n \"advanced\": True,\n },\n \"model_kwargs\": {\n \"display_name\": \"Model Kwargs\",\n \"advanced\": True,\n },\n \"model_name\": {\n \"display_name\": \"Model Name\",\n \"advanced\": False,\n \"options\": [\n \"gpt-4-turbo-preview\",\n \"gpt-3.5-turbo\",\n \"gpt-4-0125-preview\",\n \"gpt-4-1106-preview\",\n \"gpt-4-vision-preview\",\n \"gpt-3.5-turbo-0125\",\n \"gpt-3.5-turbo-1106\",\n ],\n \"value\": \"gpt-4-turbo-preview\",\n },\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"advanced\": True,\n \"info\": (\n \"The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\\n\\n\"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\"\n ),\n },\n \"openai_api_key\": {\n \"display_name\": \"OpenAI API Key\",\n \"info\": \"The OpenAI API Key to use for the OpenAI model.\",\n \"advanced\": False,\n \"password\": True,\n },\n \"temperature\": {\n \"display_name\": \"Temperature\",\n \"advanced\": False,\n \"value\": 0.1,\n },\n \"stream\": {\n \"display_name\": \"Stream\",\n \"info\": STREAM_INFO_TEXT,\n \"advanced\": True,\n },\n \"system_message\": {\n \"display_name\": \"System Message\",\n \"info\": \"System message to pass to the model.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n input_value: Text,\n openai_api_key: str,\n temperature: float,\n model_name: str,\n max_tokens: Optional[int] = 256,\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n stream: bool = False,\n system_message: Optional[str] = None,\n ) -> Text:\n if not openai_api_base:\n openai_api_base = \"https://api.openai.com/v1\"\n output = ChatOpenAI(\n max_tokens=max_tokens,\n model_kwargs=model_kwargs,\n model=model_name,\n base_url=openai_api_base,\n api_key=openai_api_key,\n temperature=temperature,\n )\n\n return self.get_chat_result(output, stream, input_value, system_message)\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "max_tokens": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 256,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "max_tokens",
- "display_name": "Max Tokens",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "model_kwargs": {
- "type": "NestedDict",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": {},
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "model_kwargs",
- "display_name": "Model Kwargs",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "model_name": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "value": "gpt-3.5-turbo",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "options": [
- "gpt-4-turbo-preview",
- "gpt-3.5-turbo",
- "gpt-4-0125-preview",
- "gpt-4-1106-preview",
- "gpt-4-vision-preview",
- "gpt-3.5-turbo-0125",
- "gpt-3.5-turbo-1106"
- ],
- "name": "model_name",
- "display_name": "Model Name",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_api_base": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "openai_api_base",
- "display_name": "OpenAI API Base",
- "advanced": true,
- "dynamic": false,
- "info": "The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\n\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_api_key": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": true,
- "name": "openai_api_key",
- "display_name": "OpenAI API Key",
- "advanced": false,
- "dynamic": false,
- "info": "The OpenAI API Key to use for the OpenAI model.",
- "load_from_db": true,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": "OPENAI_API_KEY"
- },
- "stream": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "stream",
- "display_name": "Stream",
- "advanced": true,
- "dynamic": false,
- "info": "Stream the response from the model. Streaming works only in Chat.",
- "load_from_db": false,
- "title_case": false
- },
- "system_message": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "system_message",
- "display_name": "System Message",
- "advanced": true,
- "dynamic": false,
- "info": "System message to pass to the model.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "temperature": {
- "type": "float",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 0.1,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "temperature",
- "display_name": "Temperature",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "rangeSpec": {
- "step_type": "float",
- "min": -1,
- "max": 1,
- "step": 0.1
- },
- "load_from_db": false,
- "title_case": false
- },
- "_type": "CustomComponent"
- },
- "description": "Generates text using OpenAI LLMs.",
- "icon": "OpenAI",
- "base_classes": [
- "object",
- "Text",
- "str"
- ],
- "display_name": "OpenAI",
- "documentation": "",
- "custom_fields": {
- "input_value": null,
- "openai_api_key": null,
- "temperature": null,
- "model_name": null,
- "max_tokens": null,
- "model_kwargs": null,
- "openai_api_base": null,
- "stream": null,
- "system_message": null
- },
- "output_types": [
- "Text"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [
- "max_tokens",
- "model_kwargs",
- "model_name",
- "openai_api_base",
- "openai_api_key",
- "temperature",
- "input_value",
- "system_message",
- "stream"
- ],
- "beta": false
- },
- "id": "OpenAIModel-EjXlN"
- },
- "selected": true,
- "width": 384,
- "height": 563,
- "positionAbsolute": {
- "x": 3410.117202077183,
- "y": 431.2038048137648
- },
- "dragging": false
+ "description": "Display a text output in the Playground.",
+ "icon": "type",
+ "base_classes": ["object", "Text", "str"],
+ "display_name": "Extracted Chunks",
+ "documentation": "",
+ "custom_fields": {
+ "input_value": null,
+ "record_template": null
},
- {
- "id": "Prompt-xeI6K",
- "type": "genericNode",
- "position": {
- "x": 2969.0261961391298,
- "y": 442.1613649809069
+ "output_types": ["Text"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "TextOutput-BDknO"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 289,
+ "positionAbsolute": {
+ "x": 2322.600672827879,
+ "y": 604.9467307442569
+ },
+ "dragging": false
+ },
+ {
+ "id": "OpenAIEmbeddings-ZlOk1",
+ "type": "genericNode",
+ "position": {
+ "x": 1183.667250865064,
+ "y": 687.3171828430261
+ },
+ "data": {
+ "type": "OpenAIEmbeddings",
+ "node": {
+ "template": {
+ "allowed_special": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": [],
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "allowed_special",
+ "display_name": "Allowed Special",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "chunk_size": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 1000,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "chunk_size",
+ "display_name": "Chunk Size",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "client": {
+ "type": "Any",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "client",
+ "display_name": "Client",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from typing import Any, Dict, List, Optional\n\nfrom langchain_openai.embeddings.base import OpenAIEmbeddings\n\nfrom langflow.field_typing import Embeddings, NestedDict\nfrom langflow.interface.custom.custom_component import CustomComponent\n\n\nclass OpenAIEmbeddingsComponent(CustomComponent):\n display_name = \"OpenAI Embeddings\"\n description = \"Generate embeddings using OpenAI models.\"\n\n def build_config(self):\n return {\n \"allowed_special\": {\n \"display_name\": \"Allowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"default_headers\": {\n \"display_name\": \"Default Headers\",\n \"advanced\": True,\n \"field_type\": \"dict\",\n },\n \"default_query\": {\n \"display_name\": \"Default Query\",\n \"advanced\": True,\n \"field_type\": \"NestedDict\",\n },\n \"disallowed_special\": {\n \"display_name\": \"Disallowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"chunk_size\": {\"display_name\": \"Chunk Size\", \"advanced\": True},\n \"client\": {\"display_name\": \"Client\", \"advanced\": True},\n \"deployment\": {\"display_name\": \"Deployment\", \"advanced\": True},\n \"embedding_ctx_length\": {\n \"display_name\": \"Embedding Context Length\",\n \"advanced\": True,\n },\n \"max_retries\": {\"display_name\": \"Max Retries\", \"advanced\": True},\n \"model\": {\n \"display_name\": \"Model\",\n \"advanced\": False,\n \"options\": [\n \"text-embedding-3-small\",\n \"text-embedding-3-large\",\n \"text-embedding-ada-002\",\n ],\n },\n \"model_kwargs\": {\"display_name\": \"Model Kwargs\", \"advanced\": True},\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"password\": True,\n \"advanced\": True,\n },\n \"openai_api_key\": {\"display_name\": \"OpenAI API Key\", \"password\": True},\n \"openai_api_type\": {\n \"display_name\": \"OpenAI API Type\",\n \"advanced\": True,\n \"password\": True,\n },\n \"openai_api_version\": {\n \"display_name\": \"OpenAI API Version\",\n \"advanced\": True,\n },\n \"openai_organization\": {\n \"display_name\": \"OpenAI Organization\",\n \"advanced\": True,\n },\n \"openai_proxy\": {\"display_name\": \"OpenAI Proxy\", \"advanced\": True},\n \"request_timeout\": {\"display_name\": \"Request Timeout\", \"advanced\": True},\n \"show_progress_bar\": {\n \"display_name\": \"Show Progress Bar\",\n \"advanced\": True,\n },\n \"skip_empty\": {\"display_name\": \"Skip Empty\", \"advanced\": True},\n \"tiktoken_model_name\": {\n \"display_name\": \"TikToken Model Name\",\n \"advanced\": True,\n },\n \"tiktoken_enable\": {\"display_name\": \"TikToken Enable\", \"advanced\": True},\n }\n\n def build(\n self,\n openai_api_key: str,\n default_headers: Optional[Dict[str, str]] = None,\n default_query: Optional[NestedDict] = {},\n allowed_special: List[str] = [],\n disallowed_special: List[str] = [\"all\"],\n chunk_size: int = 1000,\n client: Optional[Any] = None,\n deployment: str = \"text-embedding-ada-002\",\n embedding_ctx_length: int = 8191,\n max_retries: int = 6,\n model: str = \"text-embedding-ada-002\",\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n openai_api_type: Optional[str] = None,\n openai_api_version: Optional[str] = None,\n openai_organization: Optional[str] = None,\n openai_proxy: Optional[str] = None,\n request_timeout: Optional[float] = None,\n show_progress_bar: bool = False,\n skip_empty: bool = False,\n tiktoken_enable: bool = True,\n tiktoken_model_name: Optional[str] = None,\n ) -> Embeddings:\n # This is to avoid errors with Vector Stores (e.g Chroma)\n if disallowed_special == [\"all\"]:\n disallowed_special = \"all\" # type: ignore\n\n return OpenAIEmbeddings(\n tiktoken_enabled=tiktoken_enable,\n default_headers=default_headers,\n default_query=default_query,\n allowed_special=set(allowed_special),\n disallowed_special=\"all\",\n chunk_size=chunk_size,\n client=client,\n deployment=deployment,\n embedding_ctx_length=embedding_ctx_length,\n max_retries=max_retries,\n model=model,\n model_kwargs=model_kwargs,\n base_url=openai_api_base,\n api_key=openai_api_key,\n openai_api_type=openai_api_type,\n api_version=openai_api_version,\n organization=openai_organization,\n openai_proxy=openai_proxy,\n timeout=request_timeout,\n show_progress_bar=show_progress_bar,\n skip_empty=skip_empty,\n tiktoken_model_name=tiktoken_model_name,\n )\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "default_headers": {
+ "type": "dict",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "default_headers",
+ "display_name": "Default Headers",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "default_query": {
+ "type": "NestedDict",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": {},
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "default_query",
+ "display_name": "Default Query",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "deployment": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": "text-embedding-ada-002",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "deployment",
+ "display_name": "Deployment",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "disallowed_special": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": ["all"],
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "disallowed_special",
+ "display_name": "Disallowed Special",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "embedding_ctx_length": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 8191,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "embedding_ctx_length",
+ "display_name": "Embedding Context Length",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "max_retries": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 6,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "max_retries",
+ "display_name": "Max Retries",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "model": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "text-embedding-ada-002",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": [
+ "text-embedding-3-small",
+ "text-embedding-3-large",
+ "text-embedding-ada-002"
+ ],
+ "name": "model",
+ "display_name": "Model",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "model_kwargs": {
+ "type": "NestedDict",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": {},
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "model_kwargs",
+ "display_name": "Model Kwargs",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "openai_api_base": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "openai_api_base",
+ "display_name": "OpenAI API Base",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_api_key": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "openai_api_key",
+ "display_name": "OpenAI API Key",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "OPENAI_API_KEY"
+ },
+ "openai_api_type": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "openai_api_type",
+ "display_name": "OpenAI API Type",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_api_version": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "openai_api_version",
+ "display_name": "OpenAI API Version",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_organization": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "openai_organization",
+ "display_name": "OpenAI Organization",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_proxy": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "openai_proxy",
+ "display_name": "OpenAI Proxy",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "request_timeout": {
+ "type": "float",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "request_timeout",
+ "display_name": "Request Timeout",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "rangeSpec": {
+ "step_type": "float",
+ "min": -1,
+ "max": 1,
+ "step": 0.1
},
- "data": {
- "type": "Prompt",
- "node": {
- "template": {
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from langchain_core.prompts import PromptTemplate\n\nfrom langflow.field_typing import Prompt, TemplateField, Text\nfrom langflow.interface.custom.custom_component import CustomComponent\n\n\nclass PromptComponent(CustomComponent):\n display_name: str = \"Prompt\"\n description: str = \"Create a prompt template with dynamic variables.\"\n icon = \"prompts\"\n\n def build_config(self):\n return {\n \"template\": TemplateField(display_name=\"Template\"),\n \"code\": TemplateField(advanced=True),\n }\n\n def build(\n self,\n template: Prompt,\n **kwargs,\n ) -> Text:\n from langflow.base.prompts.utils import dict_values_to_string\n\n prompt_template = PromptTemplate.from_template(Text(template))\n kwargs = dict_values_to_string(kwargs)\n kwargs = {k: \"\\n\".join(v) if isinstance(v, list) else v for k, v in kwargs.items()}\n try:\n formated_prompt = prompt_template.format(**kwargs)\n except Exception as exc:\n raise ValueError(f\"Error formatting prompt: {exc}\") from exc\n self.status = f'Prompt:\\n\"{formated_prompt}\"'\n return formated_prompt\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "template": {
- "type": "prompt",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": "{context}\n\n---\n\nGiven the context above, answer the question as best as possible.\n\nQuestion: {question}\n\nAnswer: ",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "template",
- "display_name": "Template",
- "advanced": false,
- "input_types": [
- "Text"
- ],
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "_type": "CustomComponent",
- "context": {
- "field_type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "context",
- "display_name": "context",
- "advanced": false,
- "input_types": [
- "Document",
- "BaseOutputParser",
- "Record",
- "Text"
- ],
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "type": "str"
- },
- "question": {
- "field_type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "question",
- "display_name": "question",
- "advanced": false,
- "input_types": [
- "Document",
- "BaseOutputParser",
- "Record",
- "Text"
- ],
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "type": "str"
- }
- },
- "description": "Create a prompt template with dynamic variables.",
- "icon": "prompts",
- "is_input": null,
- "is_output": null,
- "is_composition": null,
- "base_classes": [
- "object",
- "Text",
- "str"
- ],
- "name": "",
- "display_name": "Prompt",
- "documentation": "",
- "custom_fields": {
- "template": [
- "context",
- "question"
- ]
- },
- "output_types": [
- "Text"
- ],
- "full_path": null,
- "field_formatters": {},
- "frozen": false,
- "field_order": [],
- "beta": false,
- "error": null
- },
- "id": "Prompt-xeI6K",
- "description": "Create a prompt template with dynamic variables.",
- "display_name": "Prompt"
- },
- "selected": false,
- "width": 384,
- "height": 477,
- "positionAbsolute": {
- "x": 2969.0261961391298,
- "y": 442.1613649809069
- },
- "dragging": false
+ "load_from_db": false,
+ "title_case": false
+ },
+ "show_progress_bar": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "show_progress_bar",
+ "display_name": "Show Progress Bar",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "skip_empty": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "skip_empty",
+ "display_name": "Skip Empty",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "tiktoken_enable": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": true,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "tiktoken_enable",
+ "display_name": "TikToken Enable",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "tiktoken_model_name": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "tiktoken_model_name",
+ "display_name": "TikToken Model Name",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "_type": "CustomComponent"
},
- {
- "id": "ChatOutput-Q39I8",
- "type": "genericNode",
- "position": {
- "x": 3887.2073667611485,
- "y": 588.4801225794856
- },
- "data": {
- "type": "ChatOutput",
- "node": {
- "template": {
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from typing import Optional, Union\n\nfrom langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.schema import Record\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n def build(\n self,\n sender: Optional[str] = \"Machine\",\n sender_name: Optional[str] = \"AI\",\n input_value: Optional[str] = None,\n session_id: Optional[str] = None,\n return_record: Optional[bool] = False,\n record_template: Optional[str] = \"{text}\",\n ) -> Union[Text, Record]:\n return super().build(\n sender=sender,\n sender_name=sender_name,\n input_value=input_value,\n session_id=session_id,\n return_record=return_record,\n record_template=record_template,\n )\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "input_value": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "input_value",
- "display_name": "Message",
- "advanced": false,
- "input_types": [
- "Text"
- ],
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "record_template": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "{text}",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "record_template",
- "display_name": "Record Template",
- "advanced": true,
- "dynamic": false,
- "info": "In case of Message being a Record, this template will be used to convert it to text.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "return_record": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "return_record",
- "display_name": "Return Record",
- "advanced": true,
- "dynamic": false,
- "info": "Return the message as a record containing the sender, sender_name, and session_id.",
- "load_from_db": false,
- "title_case": false
- },
- "sender": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "value": "Machine",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "options": [
- "Machine",
- "User"
- ],
- "name": "sender",
- "display_name": "Sender Type",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "sender_name": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": "AI",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "sender_name",
- "display_name": "Sender Name",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "session_id": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "session_id",
- "display_name": "Session ID",
- "advanced": true,
- "dynamic": false,
- "info": "If provided, the message will be stored in the memory.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "_type": "CustomComponent"
- },
- "description": "Display a chat message in the Playground.",
- "icon": "ChatOutput",
- "base_classes": [
- "object",
- "Text",
- "Record",
- "str"
- ],
- "display_name": "Chat Output",
- "documentation": "",
- "custom_fields": {
- "sender": null,
- "sender_name": null,
- "input_value": null,
- "session_id": null,
- "return_record": null,
- "record_template": null
- },
- "output_types": [
- "Text",
- "Record"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [],
- "beta": false
- },
- "id": "ChatOutput-Q39I8"
- },
- "selected": false,
- "width": 384,
- "height": 383,
- "positionAbsolute": {
- "x": 3887.2073667611485,
- "y": 588.4801225794856
- },
- "dragging": false
+ "description": "Generate embeddings using OpenAI models.",
+ "base_classes": ["Embeddings"],
+ "display_name": "OpenAI Embeddings",
+ "documentation": "",
+ "custom_fields": {
+ "openai_api_key": null,
+ "default_headers": null,
+ "default_query": null,
+ "allowed_special": null,
+ "disallowed_special": null,
+ "chunk_size": null,
+ "client": null,
+ "deployment": null,
+ "embedding_ctx_length": null,
+ "max_retries": null,
+ "model": null,
+ "model_kwargs": null,
+ "openai_api_base": null,
+ "openai_api_type": null,
+ "openai_api_version": null,
+ "openai_organization": null,
+ "openai_proxy": null,
+ "request_timeout": null,
+ "show_progress_bar": null,
+ "skip_empty": null,
+ "tiktoken_enable": null,
+ "tiktoken_model_name": null
},
- {
- "id": "File-t0a6a",
- "type": "genericNode",
- "position": {
- "x": 2257.233450682836,
- "y": 1747.5389618367233
+ "output_types": ["Embeddings"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "OpenAIEmbeddings-ZlOk1"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 383,
+ "dragging": false
+ },
+ {
+ "id": "OpenAIModel-EjXlN",
+ "type": "genericNode",
+ "position": {
+ "x": 3410.117202077183,
+ "y": 431.2038048137648
+ },
+ "data": {
+ "type": "OpenAIModel",
+ "node": {
+ "template": {
+ "input_value": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "input_value",
+ "display_name": "Input",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from typing import Optional\n\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.field_typing import NestedDict, Text\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n field_order = [\n \"max_tokens\",\n \"model_kwargs\",\n \"model_name\",\n \"openai_api_base\",\n \"openai_api_key\",\n \"temperature\",\n \"input_value\",\n \"system_message\",\n \"stream\",\n ]\n\n def build_config(self):\n return {\n \"input_value\": {\"display_name\": \"Input\"},\n \"max_tokens\": {\n \"display_name\": \"Max Tokens\",\n \"advanced\": True,\n },\n \"model_kwargs\": {\n \"display_name\": \"Model Kwargs\",\n \"advanced\": True,\n },\n \"model_name\": {\n \"display_name\": \"Model Name\",\n \"advanced\": False,\n \"options\": [\n \"gpt-4-turbo-preview\",\n \"gpt-3.5-turbo\",\n \"gpt-4-0125-preview\",\n \"gpt-4-1106-preview\",\n \"gpt-4-vision-preview\",\n \"gpt-3.5-turbo-0125\",\n \"gpt-3.5-turbo-1106\",\n ],\n \"value\": \"gpt-4-turbo-preview\",\n },\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"advanced\": True,\n \"info\": (\n \"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\n\"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\"\n ),\n },\n \"openai_api_key\": {\n \"display_name\": \"OpenAI API Key\",\n \"info\": \"The OpenAI API Key to use for the OpenAI model.\",\n \"advanced\": False,\n \"password\": True,\n },\n \"temperature\": {\n \"display_name\": \"Temperature\",\n \"advanced\": False,\n \"value\": 0.1,\n },\n \"stream\": {\n \"display_name\": \"Stream\",\n \"info\": STREAM_INFO_TEXT,\n \"advanced\": True,\n },\n \"system_message\": {\n \"display_name\": \"System Message\",\n \"info\": \"System message to pass to the model.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n input_value: Text,\n openai_api_key: str,\n temperature: float,\n model_name: str,\n max_tokens: Optional[int] = 256,\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n stream: bool = False,\n system_message: Optional[str] = None,\n ) -> Text:\n if not openai_api_base:\n openai_api_base = \"https://api.openai.com/v1\"\n output = ChatOpenAI(\n max_tokens=max_tokens,\n model_kwargs=model_kwargs,\n model=model_name,\n base_url=openai_api_base,\n api_key=openai_api_key,\n temperature=temperature,\n )\n\n return self.get_chat_result(output, stream, input_value, system_message)\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "max_tokens": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 256,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "max_tokens",
+ "display_name": "Max Tokens",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "model_kwargs": {
+ "type": "NestedDict",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": {},
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "model_kwargs",
+ "display_name": "Model Kwargs",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "model_name": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "gpt-3.5-turbo",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": [
+ "gpt-4-turbo-preview",
+ "gpt-3.5-turbo",
+ "gpt-4-0125-preview",
+ "gpt-4-1106-preview",
+ "gpt-4-vision-preview",
+ "gpt-3.5-turbo-0125",
+ "gpt-3.5-turbo-1106"
+ ],
+ "name": "model_name",
+ "display_name": "Model Name",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_api_base": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "openai_api_base",
+ "display_name": "OpenAI API Base",
+ "advanced": true,
+ "dynamic": false,
+ "info": "The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\n\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_api_key": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "openai_api_key",
+ "display_name": "OpenAI API Key",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The OpenAI API Key to use for the OpenAI model.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "OPENAI_API_KEY"
+ },
+ "stream": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "stream",
+ "display_name": "Stream",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Stream the response from the model. Streaming works only in Chat.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "system_message": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "system_message",
+ "display_name": "System Message",
+ "advanced": true,
+ "dynamic": false,
+ "info": "System message to pass to the model.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "temperature": {
+ "type": "float",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 0.1,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "temperature",
+ "display_name": "Temperature",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "rangeSpec": {
+ "step_type": "float",
+ "min": -1,
+ "max": 1,
+ "step": 0.1
},
- "data": {
- "type": "File",
- "node": {
- "template": {
- "path": {
- "type": "file",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [
- ".txt",
- ".md",
- ".mdx",
- ".csv",
- ".json",
- ".yaml",
- ".yml",
- ".xml",
- ".html",
- ".htm",
- ".pdf",
- ".docx"
- ],
- "file_path": "51e2b78a-199b-4054-9f32-e288eef6924c/Langflow conversation.pdf",
- "password": false,
- "name": "path",
- "display_name": "Path",
- "advanced": false,
- "dynamic": false,
- "info": "Supported file types: txt, md, mdx, csv, json, yaml, yml, xml, html, htm, pdf, docx",
- "load_from_db": false,
- "title_case": false,
- "value": ""
- },
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from pathlib import Path\nfrom typing import Any, Dict\n\nfrom langflow.base.data.utils import TEXT_FILE_TYPES, parse_text_file_to_record\nfrom langflow.interface.custom.custom_component import CustomComponent\nfrom langflow.schema import Record\n\n\nclass FileComponent(CustomComponent):\n display_name = \"File\"\n description = \"A generic file loader.\"\n icon = \"file-text\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"path\": {\n \"display_name\": \"Path\",\n \"field_type\": \"file\",\n \"file_types\": TEXT_FILE_TYPES,\n \"info\": f\"Supported file types: {', '.join(TEXT_FILE_TYPES)}\",\n },\n \"silent_errors\": {\n \"display_name\": \"Silent Errors\",\n \"advanced\": True,\n \"info\": \"If true, errors will not raise an exception.\",\n },\n }\n\n def load_file(self, path: str, silent_errors: bool = False) -> Record:\n resolved_path = self.resolve_path(path)\n path_obj = Path(resolved_path)\n extension = path_obj.suffix[1:].lower()\n if extension == \"doc\":\n raise ValueError(\"doc files are not supported. Please save as .docx\")\n if extension not in TEXT_FILE_TYPES:\n raise ValueError(f\"Unsupported file type: {extension}\")\n record = parse_text_file_to_record(resolved_path, silent_errors)\n self.status = record if record else \"No data\"\n return record or Record()\n\n def build(\n self,\n path: str,\n silent_errors: bool = False,\n ) -> Record:\n record = self.load_file(path, silent_errors)\n self.status = record\n return record\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "silent_errors": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "silent_errors",
- "display_name": "Silent Errors",
- "advanced": true,
- "dynamic": false,
- "info": "If true, errors will not raise an exception.",
- "load_from_db": false,
- "title_case": false
- },
- "_type": "CustomComponent"
- },
- "description": "A generic file loader.",
- "icon": "file-text",
- "base_classes": [
- "Record"
- ],
- "display_name": "File",
- "documentation": "",
- "custom_fields": {
- "path": null,
- "silent_errors": null
- },
- "output_types": [
- "Record"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [],
- "beta": false
- },
- "id": "File-t0a6a"
- },
- "selected": false,
- "width": 384,
- "height": 281,
- "positionAbsolute": {
- "x": 2257.233450682836,
- "y": 1747.5389618367233
- },
- "dragging": false
+ "load_from_db": false,
+ "title_case": false
+ },
+ "_type": "CustomComponent"
},
- {
- "id": "RecursiveCharacterTextSplitter-tR9QM",
- "type": "genericNode",
- "position": {
- "x": 2791.013514133929,
- "y": 1462.9588953494142
- },
- "data": {
- "type": "RecursiveCharacterTextSplitter",
- "node": {
- "template": {
- "inputs": {
- "type": "Document",
- "required": true,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "inputs",
- "display_name": "Input",
- "advanced": false,
- "input_types": [
- "Document",
- "Record"
- ],
- "dynamic": false,
- "info": "The texts to split.",
- "load_from_db": false,
- "title_case": false
- },
- "chunk_overlap": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 200,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "chunk_overlap",
- "display_name": "Chunk Overlap",
- "advanced": false,
- "dynamic": false,
- "info": "The amount of overlap between chunks.",
- "load_from_db": false,
- "title_case": false
- },
- "chunk_size": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 1000,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "chunk_size",
- "display_name": "Chunk Size",
- "advanced": false,
- "dynamic": false,
- "info": "The maximum length of each chunk.",
- "load_from_db": false,
- "title_case": false
- },
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from typing import Optional\n\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain_core.documents import Document\n\nfrom langflow.interface.custom.custom_component import CustomComponent\nfrom langflow.schema import Record\nfrom langflow.utils.util import build_loader_repr_from_records, unescape_string\n\n\nclass RecursiveCharacterTextSplitterComponent(CustomComponent):\n display_name: str = \"Recursive Character Text Splitter\"\n description: str = \"Split text into chunks of a specified length.\"\n documentation: str = \"https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter\"\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Input\",\n \"info\": \"The texts to split.\",\n \"input_types\": [\"Document\", \"Record\"],\n },\n \"separators\": {\n \"display_name\": \"Separators\",\n \"info\": 'The characters to split on.\\nIf left empty defaults to [\"\\\\n\\\\n\", \"\\\\n\", \" \", \"\"].',\n \"is_list\": True,\n },\n \"chunk_size\": {\n \"display_name\": \"Chunk Size\",\n \"info\": \"The maximum length of each chunk.\",\n \"field_type\": \"int\",\n \"value\": 1000,\n },\n \"chunk_overlap\": {\n \"display_name\": \"Chunk Overlap\",\n \"info\": \"The amount of overlap between chunks.\",\n \"field_type\": \"int\",\n \"value\": 200,\n },\n \"code\": {\"show\": False},\n }\n\n def build(\n self,\n inputs: list[Document],\n separators: Optional[list[str]] = None,\n chunk_size: Optional[int] = 1000,\n chunk_overlap: Optional[int] = 200,\n ) -> list[Record]:\n \"\"\"\n Split text into chunks of a specified length.\n\n Args:\n separators (list[str]): The characters to split on.\n chunk_size (int): The maximum length of each chunk.\n chunk_overlap (int): The amount of overlap between chunks.\n length_function (function): The function to use to calculate the length of the text.\n\n Returns:\n list[str]: The chunks of text.\n \"\"\"\n\n if separators == \"\":\n separators = 
None\n elif separators:\n # check if the separators list has escaped characters\n # if there are escaped characters, unescape them\n separators = [unescape_string(x) for x in separators]\n\n # Make sure chunk_size and chunk_overlap are ints\n if isinstance(chunk_size, str):\n chunk_size = int(chunk_size)\n if isinstance(chunk_overlap, str):\n chunk_overlap = int(chunk_overlap)\n splitter = RecursiveCharacterTextSplitter(\n separators=separators,\n chunk_size=chunk_size,\n chunk_overlap=chunk_overlap,\n )\n documents = []\n for _input in inputs:\n if isinstance(_input, Record):\n documents.append(_input.to_lc_document())\n else:\n documents.append(_input)\n docs = splitter.split_documents(documents)\n records = self.to_records(docs)\n self.repr_value = build_loader_repr_from_records(records)\n return records\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "separators": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "separators",
- "display_name": "Separators",
- "advanced": false,
- "dynamic": false,
- "info": "The characters to split on.\nIf left empty defaults to [\"\\n\\n\", \"\\n\", \" \", \"\"].",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": [
- ""
- ]
- },
- "_type": "CustomComponent"
- },
- "description": "Split text into chunks of a specified length.",
- "base_classes": [
- "Record"
- ],
- "display_name": "Recursive Character Text Splitter",
- "documentation": "https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter",
- "custom_fields": {
- "inputs": null,
- "separators": null,
- "chunk_size": null,
- "chunk_overlap": null
- },
- "output_types": [
- "Record"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [],
- "beta": false
- },
- "id": "RecursiveCharacterTextSplitter-tR9QM"
- },
- "selected": false,
- "width": 384,
- "height": 501,
- "positionAbsolute": {
- "x": 2791.013514133929,
- "y": 1462.9588953494142
- },
- "dragging": false
+ "description": "Generates text using OpenAI LLMs.",
+ "icon": "OpenAI",
+ "base_classes": ["object", "Text", "str"],
+ "display_name": "OpenAI",
+ "documentation": "",
+ "custom_fields": {
+ "input_value": null,
+ "openai_api_key": null,
+ "temperature": null,
+ "model_name": null,
+ "max_tokens": null,
+ "model_kwargs": null,
+ "openai_api_base": null,
+ "stream": null,
+ "system_message": null
},
- {
- "id": "AstraDBSearch-41nRz",
- "type": "genericNode",
- "position": {
- "x": 1723.976434815103,
- "y": 277.03317407245913
- },
- "data": {
- "type": "AstraDBSearch",
- "node": {
- "template": {
- "embedding": {
- "type": "Embeddings",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "embedding",
- "display_name": "Embedding",
- "advanced": false,
- "dynamic": false,
- "info": "Embedding to use",
- "load_from_db": false,
- "title_case": false
- },
- "input_value": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "input_value",
- "display_name": "Input Value",
- "advanced": false,
- "dynamic": false,
- "info": "Input value to search",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "api_endpoint": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "api_endpoint",
- "display_name": "API Endpoint",
- "advanced": false,
- "dynamic": false,
- "info": "API endpoint URL for the Astra DB service.",
- "load_from_db": true,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": "ASTRA_DB_API_ENDPOINT"
- },
- "batch_size": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "batch_size",
- "display_name": "Batch Size",
- "advanced": true,
- "dynamic": false,
- "info": "Optional number of records to process in a single batch.",
- "load_from_db": false,
- "title_case": false
- },
- "bulk_delete_concurrency": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "bulk_delete_concurrency",
- "display_name": "Bulk Delete Concurrency",
- "advanced": true,
- "dynamic": false,
- "info": "Optional concurrency level for bulk delete operations.",
- "load_from_db": false,
- "title_case": false
- },
- "bulk_insert_batch_concurrency": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "bulk_insert_batch_concurrency",
- "display_name": "Bulk Insert Batch Concurrency",
- "advanced": true,
- "dynamic": false,
- "info": "Optional concurrency level for bulk insert operations.",
- "load_from_db": false,
- "title_case": false
- },
- "bulk_insert_overwrite_concurrency": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "bulk_insert_overwrite_concurrency",
- "display_name": "Bulk Insert Overwrite Concurrency",
- "advanced": true,
- "dynamic": false,
- "info": "Optional concurrency level for bulk insert operations that overwrite existing records.",
- "load_from_db": false,
- "title_case": false
- },
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from typing import List, Optional\n\nfrom langflow.components.vectorstores.AstraDB import AstraDBVectorStoreComponent\nfrom langflow.components.vectorstores.base.model import LCVectorStoreComponent\nfrom langflow.field_typing import Embeddings, Text\nfrom langflow.schema import Record\n\n\nclass AstraDBSearchComponent(LCVectorStoreComponent):\n display_name = \"Astra DB Search\"\n description = \"Searches an existing Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"input_value\", \"embedding\"]\n\n def build_config(self):\n return {\n \"search_type\": {\n \"display_name\": \"Search Type\",\n \"options\": [\"Similarity\", \"MMR\"],\n },\n \"input_value\": {\n \"display_name\": \"Input Value\",\n \"info\": \"Input value to search\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": \"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n \"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of records to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": 
True,\n },\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing records.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n \"info\": \"Configuration mode for setting up the vector store, with options like \u201cSync\u201d, \u201cAsync\u201d, or \u201cOff\u201d.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n \"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n \"number_of_results\": {\n \"display_name\": \"Number of Results\",\n \"info\": \"Number of results to return.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n collection_name: str,\n input_value: Text,\n token: str,\n api_endpoint: str,\n search_type: str = \"Similarity\",\n number_of_results: int = 4,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n 
bulk_insert_overwrite_concurrency: Optional[int] = None,\n bulk_delete_concurrency: Optional[int] = None,\n setup_mode: str = \"Sync\",\n pre_delete_collection: bool = False,\n metadata_indexing_include: Optional[List[str]] = None,\n metadata_indexing_exclude: Optional[List[str]] = None,\n collection_indexing_policy: Optional[dict] = None,\n ) -> List[Record]:\n vector_store = AstraDBVectorStoreComponent().build(\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n try:\n return self.search_with_vector_store(input_value, search_type, vector_store, k=number_of_results)\n except KeyError as e:\n if \"content\" in str(e):\n raise ValueError(\n \"You should ingest data through Langflow (or LangChain) to query it in Langflow. Your collection does not contain a field name 'content'.\"\n )\n else:\n raise e\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "collection_indexing_policy": {
- "type": "dict",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "collection_indexing_policy",
- "display_name": "Collection Indexing Policy",
- "advanced": true,
- "dynamic": false,
- "info": "Optional dictionary defining the indexing policy for the collection.",
- "load_from_db": false,
- "title_case": false
- },
- "collection_name": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "collection_name",
- "display_name": "Collection Name",
- "advanced": false,
- "dynamic": false,
- "info": "The name of the collection within Astra DB where the vectors will be stored.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": "langflow"
- },
- "metadata_indexing_exclude": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "metadata_indexing_exclude",
- "display_name": "Metadata Indexing Exclude",
- "advanced": true,
- "dynamic": false,
- "info": "Optional list of metadata fields to exclude from the indexing.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "metadata_indexing_include": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "metadata_indexing_include",
- "display_name": "Metadata Indexing Include",
- "advanced": true,
- "dynamic": false,
- "info": "Optional list of metadata fields to include in the indexing.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "metric": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "metric",
- "display_name": "Metric",
- "advanced": true,
- "dynamic": false,
- "info": "Optional distance metric for vector comparisons in the vector store.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "namespace": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "namespace",
- "display_name": "Namespace",
- "advanced": true,
- "dynamic": false,
- "info": "Optional namespace within Astra DB to use for the collection.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "number_of_results": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 4,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "number_of_results",
- "display_name": "Number of Results",
- "advanced": true,
- "dynamic": false,
- "info": "Number of results to return.",
- "load_from_db": false,
- "title_case": false
- },
- "pre_delete_collection": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "pre_delete_collection",
- "display_name": "Pre Delete Collection",
- "advanced": true,
- "dynamic": false,
- "info": "Boolean flag to determine whether to delete the collection before creating a new one.",
- "load_from_db": false,
- "title_case": false
- },
- "search_type": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "value": "Similarity",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "options": [
- "Similarity",
- "MMR"
- ],
- "name": "search_type",
- "display_name": "Search Type",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "setup_mode": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "value": "Sync",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "options": [
- "Sync",
- "Async",
- "Off"
- ],
- "name": "setup_mode",
- "display_name": "Setup Mode",
- "advanced": true,
- "dynamic": false,
- "info": "Configuration mode for setting up the vector store, with options like \u201cSync\u201d, \u201cAsync\u201d, or \u201cOff\u201d.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "token": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": true,
- "name": "token",
- "display_name": "Token",
- "advanced": false,
- "dynamic": false,
- "info": "Authentication token for accessing Astra DB.",
- "load_from_db": true,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": "ASTRA_DB_APPLICATION_TOKEN"
- },
- "_type": "CustomComponent"
- },
- "description": "Searches an existing Astra DB Vector Store.",
- "icon": "AstraDB",
- "base_classes": [
- "Record"
- ],
- "display_name": "Astra DB Search",
- "documentation": "",
- "custom_fields": {
- "embedding": null,
- "collection_name": null,
- "input_value": null,
- "token": null,
- "api_endpoint": null,
- "search_type": null,
- "number_of_results": null,
- "namespace": null,
- "metric": null,
- "batch_size": null,
- "bulk_insert_batch_concurrency": null,
- "bulk_insert_overwrite_concurrency": null,
- "bulk_delete_concurrency": null,
- "setup_mode": null,
- "pre_delete_collection": null,
- "metadata_indexing_include": null,
- "metadata_indexing_exclude": null,
- "collection_indexing_policy": null
- },
- "output_types": [
- "Record"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [
- "token",
- "api_endpoint",
- "collection_name",
- "input_value",
- "embedding"
- ],
- "beta": false
- },
- "id": "AstraDBSearch-41nRz"
- },
- "selected": false,
- "width": 384,
- "height": 713,
- "dragging": false,
- "positionAbsolute": {
- "x": 1723.976434815103,
- "y": 277.03317407245913
- }
+ "output_types": ["Text"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [
+ "max_tokens",
+ "model_kwargs",
+ "model_name",
+ "openai_api_base",
+ "openai_api_key",
+ "temperature",
+ "input_value",
+ "system_message",
+ "stream"
+ ],
+ "beta": false
+ },
+ "id": "OpenAIModel-EjXlN"
+ },
+ "selected": true,
+ "width": 384,
+ "height": 563,
+ "positionAbsolute": {
+ "x": 3410.117202077183,
+ "y": 431.2038048137648
+ },
+ "dragging": false
+ },
+ {
+ "id": "Prompt-xeI6K",
+ "type": "genericNode",
+ "position": {
+ "x": 2969.0261961391298,
+ "y": 442.1613649809069
+ },
+ "data": {
+ "type": "Prompt",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from langchain_core.prompts import PromptTemplate\n\nfrom langflow.field_typing import Prompt, TemplateField, Text\nfrom langflow.interface.custom.custom_component import CustomComponent\n\n\nclass PromptComponent(CustomComponent):\n display_name: str = \"Prompt\"\n description: str = \"Create a prompt template with dynamic variables.\"\n icon = \"prompts\"\n\n def build_config(self):\n return {\n \"template\": TemplateField(display_name=\"Template\"),\n \"code\": TemplateField(advanced=True),\n }\n\n def build(\n self,\n template: Prompt,\n **kwargs,\n ) -> Text:\n from langflow.base.prompts.utils import dict_values_to_string\n\n prompt_template = PromptTemplate.from_template(Text(template))\n kwargs = dict_values_to_string(kwargs)\n kwargs = {k: \"\\n\".join(v) if isinstance(v, list) else v for k, v in kwargs.items()}\n try:\n formated_prompt = prompt_template.format(**kwargs)\n except Exception as exc:\n raise ValueError(f\"Error formatting prompt: {exc}\") from exc\n self.status = f'Prompt:\\n\"{formated_prompt}\"'\n return formated_prompt\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "template": {
+ "type": "prompt",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": "{context}\n\n---\n\nGiven the context above, answer the question as best as possible.\n\nQuestion: {question}\n\nAnswer: ",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "template",
+ "display_name": "Template",
+ "advanced": false,
+ "input_types": ["Text"],
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "_type": "CustomComponent",
+ "context": {
+ "field_type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "context",
+ "display_name": "context",
+ "advanced": false,
+ "input_types": [
+ "Document",
+ "BaseOutputParser",
+ "Record",
+ "Text"
+ ],
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "type": "str"
+ },
+ "question": {
+ "field_type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "question",
+ "display_name": "question",
+ "advanced": false,
+ "input_types": [
+ "Document",
+ "BaseOutputParser",
+ "Record",
+ "Text"
+ ],
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "type": "str"
+ }
},
- {
- "id": "AstraDB-eUCSS",
- "type": "genericNode",
- "position": {
- "x": 3372.04958055989,
- "y": 1611.0742035495277
- },
- "data": {
- "type": "AstraDB",
- "node": {
- "template": {
- "embedding": {
- "type": "Embeddings",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "embedding",
- "display_name": "Embedding",
- "advanced": false,
- "dynamic": false,
- "info": "Embedding to use",
- "load_from_db": false,
- "title_case": false
- },
- "inputs": {
- "type": "Record",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "inputs",
- "display_name": "Inputs",
- "advanced": false,
- "dynamic": false,
- "info": "Optional list of records to be processed and stored in the vector store.",
- "load_from_db": false,
- "title_case": false
- },
- "api_endpoint": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "api_endpoint",
- "display_name": "API Endpoint",
- "advanced": false,
- "dynamic": false,
- "info": "API endpoint URL for the Astra DB service.",
- "load_from_db": true,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": "ASTRA_DB_API_ENDPOINT"
- },
- "batch_size": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "batch_size",
- "display_name": "Batch Size",
- "advanced": true,
- "dynamic": false,
- "info": "Optional number of records to process in a single batch.",
- "load_from_db": false,
- "title_case": false
- },
- "bulk_delete_concurrency": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "bulk_delete_concurrency",
- "display_name": "Bulk Delete Concurrency",
- "advanced": true,
- "dynamic": false,
- "info": "Optional concurrency level for bulk delete operations.",
- "load_from_db": false,
- "title_case": false
- },
- "bulk_insert_batch_concurrency": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "bulk_insert_batch_concurrency",
- "display_name": "Bulk Insert Batch Concurrency",
- "advanced": true,
- "dynamic": false,
- "info": "Optional concurrency level for bulk insert operations.",
- "load_from_db": false,
- "title_case": false
- },
- "bulk_insert_overwrite_concurrency": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "bulk_insert_overwrite_concurrency",
- "display_name": "Bulk Insert Overwrite Concurrency",
- "advanced": true,
- "dynamic": false,
- "info": "Optional concurrency level for bulk insert operations that overwrite existing records.",
- "load_from_db": false,
- "title_case": false
- },
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from typing import List, Optional\n\nfrom langchain_astradb import AstraDBVectorStore\nfrom langchain_astradb.utils.astradb import SetupMode\n\nfrom langflow.custom import CustomComponent\nfrom langflow.field_typing import Embeddings, VectorStore\nfrom langflow.schema import Record\n\n\nclass AstraDBVectorStoreComponent(CustomComponent):\n display_name = \"Astra DB\"\n description = \"Builds or loads an Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"inputs\", \"embedding\"]\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Inputs\",\n \"info\": \"Optional list of records to be processed and stored in the vector store.\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": \"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n \"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of records to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": True,\n },\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert 
Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing records.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n \"info\": \"Configuration mode for setting up the vector store, with options like \u201cSync\u201d, \u201cAsync\u201d, or \u201cOff\u201d.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n \"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n token: str,\n api_endpoint: str,\n collection_name: str,\n inputs: Optional[List[Record]] = None,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n bulk_insert_overwrite_concurrency: Optional[int] = None,\n bulk_delete_concurrency: Optional[int] = None,\n setup_mode: str = \"Async\",\n pre_delete_collection: bool = False,\n metadata_indexing_include: Optional[List[str]] = None,\n metadata_indexing_exclude: Optional[List[str]] = 
None,\n collection_indexing_policy: Optional[dict] = None,\n ) -> VectorStore:\n try:\n setup_mode_value = SetupMode[setup_mode.upper()]\n except KeyError:\n raise ValueError(f\"Invalid setup mode: {setup_mode}\")\n if inputs:\n documents = [_input.to_lc_document() for _input in inputs]\n\n vector_store = AstraDBVectorStore.from_documents(\n documents=documents,\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n else:\n vector_store = AstraDBVectorStore(\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n\n return vector_store\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "collection_indexing_policy": {
- "type": "dict",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "collection_indexing_policy",
- "display_name": "Collection Indexing Policy",
- "advanced": true,
- "dynamic": false,
- "info": "Optional dictionary defining the indexing policy for the collection.",
- "load_from_db": false,
- "title_case": false
- },
- "collection_name": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "collection_name",
- "display_name": "Collection Name",
- "advanced": false,
- "dynamic": false,
- "info": "The name of the collection within Astra DB where the vectors will be stored.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": "langflow"
- },
- "metadata_indexing_exclude": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "metadata_indexing_exclude",
- "display_name": "Metadata Indexing Exclude",
- "advanced": true,
- "dynamic": false,
- "info": "Optional list of metadata fields to exclude from the indexing.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "metadata_indexing_include": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "metadata_indexing_include",
- "display_name": "Metadata Indexing Include",
- "advanced": true,
- "dynamic": false,
- "info": "Optional list of metadata fields to include in the indexing.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "metric": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "metric",
- "display_name": "Metric",
- "advanced": true,
- "dynamic": false,
- "info": "Optional distance metric for vector comparisons in the vector store.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "namespace": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "namespace",
- "display_name": "Namespace",
- "advanced": true,
- "dynamic": false,
- "info": "Optional namespace within Astra DB to use for the collection.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "pre_delete_collection": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "pre_delete_collection",
- "display_name": "Pre Delete Collection",
- "advanced": true,
- "dynamic": false,
- "info": "Boolean flag to determine whether to delete the collection before creating a new one.",
- "load_from_db": false,
- "title_case": false
- },
- "setup_mode": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "value": "Async",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "options": [
- "Sync",
- "Async",
- "Off"
- ],
- "name": "setup_mode",
- "display_name": "Setup Mode",
- "advanced": true,
- "dynamic": false,
- "info": "Configuration mode for setting up the vector store, with options like \u201cSync\u201d, \u201cAsync\u201d, or \u201cOff\u201d.",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "token": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": true,
- "name": "token",
- "display_name": "Token",
- "advanced": false,
- "dynamic": false,
- "info": "Authentication token for accessing Astra DB.",
- "load_from_db": true,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": "ASTRA_DB_APPLICATION_TOKEN"
- },
- "_type": "CustomComponent"
- },
- "description": "Builds or loads an Astra DB Vector Store.",
- "icon": "AstraDB",
- "base_classes": [
- "VectorStore"
- ],
- "display_name": "Astra DB",
- "documentation": "",
- "custom_fields": {
- "embedding": null,
- "token": null,
- "api_endpoint": null,
- "collection_name": null,
- "inputs": null,
- "namespace": null,
- "metric": null,
- "batch_size": null,
- "bulk_insert_batch_concurrency": null,
- "bulk_insert_overwrite_concurrency": null,
- "bulk_delete_concurrency": null,
- "setup_mode": null,
- "pre_delete_collection": null,
- "metadata_indexing_include": null,
- "metadata_indexing_exclude": null,
- "collection_indexing_policy": null
- },
- "output_types": [
- "VectorStore"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [
- "token",
- "api_endpoint",
- "collection_name",
- "inputs",
- "embedding"
- ],
- "beta": false
- },
- "id": "AstraDB-eUCSS"
- },
- "selected": false,
- "width": 384,
- "height": 573,
- "positionAbsolute": {
- "x": 3372.04958055989,
- "y": 1611.0742035495277
- },
- "dragging": false
+ "description": "Create a prompt template with dynamic variables.",
+ "icon": "prompts",
+ "is_input": null,
+ "is_output": null,
+ "is_composition": null,
+ "base_classes": ["object", "Text", "str"],
+ "name": "",
+ "display_name": "Prompt",
+ "documentation": "",
+ "custom_fields": {
+ "template": ["context", "question"]
},
- {
- "id": "OpenAIEmbeddings-9TPjc",
- "type": "genericNode",
- "position": {
- "x": 2814.0402191223047,
- "y": 1955.9268168273086
- },
- "data": {
- "type": "OpenAIEmbeddings",
- "node": {
- "template": {
- "allowed_special": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": [],
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "allowed_special",
- "display_name": "Allowed Special",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "chunk_size": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 1000,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "chunk_size",
- "display_name": "Chunk Size",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "client": {
- "type": "Any",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "client",
- "display_name": "Client",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "code": {
- "type": "code",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": true,
- "value": "from typing import Any, Dict, List, Optional\n\nfrom langchain_openai.embeddings.base import OpenAIEmbeddings\n\nfrom langflow.field_typing import Embeddings, NestedDict\nfrom langflow.interface.custom.custom_component import CustomComponent\n\n\nclass OpenAIEmbeddingsComponent(CustomComponent):\n display_name = \"OpenAI Embeddings\"\n description = \"Generate embeddings using OpenAI models.\"\n\n def build_config(self):\n return {\n \"allowed_special\": {\n \"display_name\": \"Allowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"default_headers\": {\n \"display_name\": \"Default Headers\",\n \"advanced\": True,\n \"field_type\": \"dict\",\n },\n \"default_query\": {\n \"display_name\": \"Default Query\",\n \"advanced\": True,\n \"field_type\": \"NestedDict\",\n },\n \"disallowed_special\": {\n \"display_name\": \"Disallowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"chunk_size\": {\"display_name\": \"Chunk Size\", \"advanced\": True},\n \"client\": {\"display_name\": \"Client\", \"advanced\": True},\n \"deployment\": {\"display_name\": \"Deployment\", \"advanced\": True},\n \"embedding_ctx_length\": {\n \"display_name\": \"Embedding Context Length\",\n \"advanced\": True,\n },\n \"max_retries\": {\"display_name\": \"Max Retries\", \"advanced\": True},\n \"model\": {\n \"display_name\": \"Model\",\n \"advanced\": False,\n \"options\": [\n \"text-embedding-3-small\",\n \"text-embedding-3-large\",\n \"text-embedding-ada-002\",\n ],\n },\n \"model_kwargs\": {\"display_name\": \"Model Kwargs\", \"advanced\": True},\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"password\": True,\n \"advanced\": True,\n },\n \"openai_api_key\": {\"display_name\": \"OpenAI API Key\", \"password\": True},\n \"openai_api_type\": {\n \"display_name\": \"OpenAI API Type\",\n \"advanced\": True,\n \"password\": True,\n },\n \"openai_api_version\": {\n 
\"display_name\": \"OpenAI API Version\",\n \"advanced\": True,\n },\n \"openai_organization\": {\n \"display_name\": \"OpenAI Organization\",\n \"advanced\": True,\n },\n \"openai_proxy\": {\"display_name\": \"OpenAI Proxy\", \"advanced\": True},\n \"request_timeout\": {\"display_name\": \"Request Timeout\", \"advanced\": True},\n \"show_progress_bar\": {\n \"display_name\": \"Show Progress Bar\",\n \"advanced\": True,\n },\n \"skip_empty\": {\"display_name\": \"Skip Empty\", \"advanced\": True},\n \"tiktoken_model_name\": {\n \"display_name\": \"TikToken Model Name\",\n \"advanced\": True,\n },\n \"tiktoken_enable\": {\"display_name\": \"TikToken Enable\", \"advanced\": True},\n }\n\n def build(\n self,\n openai_api_key: str,\n default_headers: Optional[Dict[str, str]] = None,\n default_query: Optional[NestedDict] = {},\n allowed_special: List[str] = [],\n disallowed_special: List[str] = [\"all\"],\n chunk_size: int = 1000,\n client: Optional[Any] = None,\n deployment: str = \"text-embedding-ada-002\",\n embedding_ctx_length: int = 8191,\n max_retries: int = 6,\n model: str = \"text-embedding-ada-002\",\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n openai_api_type: Optional[str] = None,\n openai_api_version: Optional[str] = None,\n openai_organization: Optional[str] = None,\n openai_proxy: Optional[str] = None,\n request_timeout: Optional[float] = None,\n show_progress_bar: bool = False,\n skip_empty: bool = False,\n tiktoken_enable: bool = True,\n tiktoken_model_name: Optional[str] = None,\n ) -> Embeddings:\n # This is to avoid errors with Vector Stores (e.g Chroma)\n if disallowed_special == [\"all\"]:\n disallowed_special = \"all\" # type: ignore\n\n return OpenAIEmbeddings(\n tiktoken_enabled=tiktoken_enable,\n default_headers=default_headers,\n default_query=default_query,\n allowed_special=set(allowed_special),\n disallowed_special=\"all\",\n chunk_size=chunk_size,\n client=client,\n deployment=deployment,\n 
embedding_ctx_length=embedding_ctx_length,\n max_retries=max_retries,\n model=model,\n model_kwargs=model_kwargs,\n base_url=openai_api_base,\n api_key=openai_api_key,\n openai_api_type=openai_api_type,\n api_version=openai_api_version,\n organization=openai_organization,\n openai_proxy=openai_proxy,\n timeout=request_timeout,\n show_progress_bar=show_progress_bar,\n skip_empty=skip_empty,\n tiktoken_model_name=tiktoken_model_name,\n )\n",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "code",
- "advanced": true,
- "dynamic": true,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "default_headers": {
- "type": "dict",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "default_headers",
- "display_name": "Default Headers",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "default_query": {
- "type": "NestedDict",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": {},
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "default_query",
- "display_name": "Default Query",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "deployment": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": "text-embedding-ada-002",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "deployment",
- "display_name": "Deployment",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "disallowed_special": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": [
- "all"
- ],
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "disallowed_special",
- "display_name": "Disallowed Special",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "embedding_ctx_length": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 8191,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "embedding_ctx_length",
- "display_name": "Embedding Context Length",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "max_retries": {
- "type": "int",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": 6,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "max_retries",
- "display_name": "Max Retries",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "model": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": true,
- "show": true,
- "multiline": false,
- "value": "text-embedding-ada-002",
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "options": [
- "text-embedding-3-small",
- "text-embedding-3-large",
- "text-embedding-ada-002"
- ],
- "name": "model",
- "display_name": "Model",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "model_kwargs": {
- "type": "NestedDict",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": {},
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "model_kwargs",
- "display_name": "Model Kwargs",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "openai_api_base": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": true,
- "name": "openai_api_base",
- "display_name": "OpenAI API Base",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_api_key": {
- "type": "str",
- "required": true,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": true,
- "name": "openai_api_key",
- "display_name": "OpenAI API Key",
- "advanced": false,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ],
- "value": ""
- },
- "openai_api_type": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": true,
- "name": "openai_api_type",
- "display_name": "OpenAI API Type",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_api_version": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "openai_api_version",
- "display_name": "OpenAI API Version",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_organization": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "openai_organization",
- "display_name": "OpenAI Organization",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "openai_proxy": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "openai_proxy",
- "display_name": "OpenAI Proxy",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "request_timeout": {
- "type": "float",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "request_timeout",
- "display_name": "Request Timeout",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "rangeSpec": {
- "step_type": "float",
- "min": -1,
- "max": 1,
- "step": 0.1
- },
- "load_from_db": false,
- "title_case": false
- },
- "show_progress_bar": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "show_progress_bar",
- "display_name": "Show Progress Bar",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "skip_empty": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "skip_empty",
- "display_name": "Skip Empty",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "tiktoken_enable": {
- "type": "bool",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "value": true,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "tiktoken_enable",
- "display_name": "TikToken Enable",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false
- },
- "tiktoken_model_name": {
- "type": "str",
- "required": false,
- "placeholder": "",
- "list": false,
- "show": true,
- "multiline": false,
- "fileTypes": [],
- "file_path": "",
- "password": false,
- "name": "tiktoken_model_name",
- "display_name": "TikToken Model Name",
- "advanced": true,
- "dynamic": false,
- "info": "",
- "load_from_db": false,
- "title_case": false,
- "input_types": [
- "Text"
- ]
- },
- "_type": "CustomComponent"
- },
- "description": "Generate embeddings using OpenAI models.",
- "base_classes": [
- "Embeddings"
- ],
- "display_name": "OpenAI Embeddings",
- "documentation": "",
- "custom_fields": {
- "openai_api_key": null,
- "default_headers": null,
- "default_query": null,
- "allowed_special": null,
- "disallowed_special": null,
- "chunk_size": null,
- "client": null,
- "deployment": null,
- "embedding_ctx_length": null,
- "max_retries": null,
- "model": null,
- "model_kwargs": null,
- "openai_api_base": null,
- "openai_api_type": null,
- "openai_api_version": null,
- "openai_organization": null,
- "openai_proxy": null,
- "request_timeout": null,
- "show_progress_bar": null,
- "skip_empty": null,
- "tiktoken_enable": null,
- "tiktoken_model_name": null
- },
- "output_types": [
- "Embeddings"
- ],
- "field_formatters": {},
- "frozen": false,
- "field_order": [],
- "beta": false
- },
- "id": "OpenAIEmbeddings-9TPjc"
- },
- "selected": false,
- "width": 384,
- "height": 383,
- "positionAbsolute": {
- "x": 2814.0402191223047,
- "y": 1955.9268168273086
- },
- "dragging": false
- }
- ],
- "edges": [
- {
- "source": "TextOutput-BDknO",
- "target": "Prompt-xeI6K",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153TextOutput\u0153,\u0153id\u0153:\u0153TextOutput-BDknO\u0153}",
- "targetHandle": "{\u0153fieldName\u0153:\u0153context\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153BaseOutputParser\u0153,\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "id": "reactflow__edge-TextOutput-BDknO{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153TextOutput\u0153,\u0153id\u0153:\u0153TextOutput-BDknO\u0153}-Prompt-xeI6K{\u0153fieldName\u0153:\u0153context\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153BaseOutputParser\u0153,\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "context",
- "id": "Prompt-xeI6K",
- "inputTypes": [
- "Document",
- "BaseOutputParser",
- "Record",
- "Text"
- ],
- "type": "str"
- },
- "sourceHandle": {
- "baseClasses": [
- "object",
- "Text",
- "str"
- ],
- "dataType": "TextOutput",
- "id": "TextOutput-BDknO"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "selected": false
+ "output_types": ["Text"],
+ "full_path": null,
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false,
+ "error": null
+ },
+ "id": "Prompt-xeI6K",
+ "description": "Create a prompt template with dynamic variables.",
+ "display_name": "Prompt"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 477,
+ "positionAbsolute": {
+ "x": 2969.0261961391298,
+ "y": 442.1613649809069
+ },
+ "dragging": false
+ },
+ {
+ "id": "ChatOutput-Q39I8",
+ "type": "genericNode",
+ "position": {
+ "x": 3887.2073667611485,
+ "y": 588.4801225794856
+ },
+ "data": {
+ "type": "ChatOutput",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from typing import Optional, Union\n\nfrom langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.schema import Record\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n def build(\n self,\n sender: Optional[str] = \"Machine\",\n sender_name: Optional[str] = \"AI\",\n input_value: Optional[str] = None,\n session_id: Optional[str] = None,\n return_record: Optional[bool] = False,\n record_template: Optional[str] = \"{text}\",\n ) -> Union[Text, Record]:\n return super().build(\n sender=sender,\n sender_name=sender_name,\n input_value=input_value,\n session_id=session_id,\n return_record=return_record,\n record_template=record_template,\n )\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "input_value": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "input_value",
+ "display_name": "Message",
+ "advanced": false,
+ "input_types": ["Text"],
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "record_template": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "{text}",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "record_template",
+ "display_name": "Record Template",
+ "advanced": true,
+ "dynamic": false,
+ "info": "In case of Message being a Record, this template will be used to convert it to text.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "return_record": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "return_record",
+ "display_name": "Return Record",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Return the message as a record containing the sender, sender_name, and session_id.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "sender": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "Machine",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": ["Machine", "User"],
+ "name": "sender",
+ "display_name": "Sender Type",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "sender_name": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": "AI",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "sender_name",
+ "display_name": "Sender Name",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "session_id": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "session_id",
+ "display_name": "Session ID",
+ "advanced": true,
+ "dynamic": false,
+ "info": "If provided, the message will be stored in the memory.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "_type": "CustomComponent"
},
- {
- "source": "ChatInput-yxMKE",
- "target": "Prompt-xeI6K",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Text\u0153,\u0153str\u0153,\u0153object\u0153,\u0153Record\u0153],\u0153dataType\u0153:\u0153ChatInput\u0153,\u0153id\u0153:\u0153ChatInput-yxMKE\u0153}",
- "targetHandle": "{\u0153fieldName\u0153:\u0153question\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153BaseOutputParser\u0153,\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "id": "reactflow__edge-ChatInput-yxMKE{\u0153baseClasses\u0153:[\u0153Text\u0153,\u0153str\u0153,\u0153object\u0153,\u0153Record\u0153],\u0153dataType\u0153:\u0153ChatInput\u0153,\u0153id\u0153:\u0153ChatInput-yxMKE\u0153}-Prompt-xeI6K{\u0153fieldName\u0153:\u0153question\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153BaseOutputParser\u0153,\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "question",
- "id": "Prompt-xeI6K",
- "inputTypes": [
- "Document",
- "BaseOutputParser",
- "Record",
- "Text"
- ],
- "type": "str"
- },
- "sourceHandle": {
- "baseClasses": [
- "Text",
- "str",
- "object",
- "Record"
- ],
- "dataType": "ChatInput",
- "id": "ChatInput-yxMKE"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "selected": false
+ "description": "Display a chat message in the Playground.",
+ "icon": "ChatOutput",
+ "base_classes": ["object", "Text", "Record", "str"],
+ "display_name": "Chat Output",
+ "documentation": "",
+ "custom_fields": {
+ "sender": null,
+ "sender_name": null,
+ "input_value": null,
+ "session_id": null,
+ "return_record": null,
+ "record_template": null
},
- {
- "source": "Prompt-xeI6K",
- "target": "OpenAIModel-EjXlN",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153Prompt\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153}",
- "targetHandle": "{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153OpenAIModel-EjXlN\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "id": "reactflow__edge-Prompt-xeI6K{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153Prompt\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153}-OpenAIModel-EjXlN{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153OpenAIModel-EjXlN\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "input_value",
- "id": "OpenAIModel-EjXlN",
- "inputTypes": [
- "Text"
- ],
- "type": "str"
- },
- "sourceHandle": {
- "baseClasses": [
- "object",
- "Text",
- "str"
- ],
- "dataType": "Prompt",
- "id": "Prompt-xeI6K"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "selected": false
+ "output_types": ["Text", "Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "ChatOutput-Q39I8"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 383,
+ "positionAbsolute": {
+ "x": 3887.2073667611485,
+ "y": 588.4801225794856
+ },
+ "dragging": false
+ },
+ {
+ "id": "File-t0a6a",
+ "type": "genericNode",
+ "position": {
+ "x": 2257.233450682836,
+ "y": 1747.5389618367233
+ },
+ "data": {
+ "type": "File",
+ "node": {
+ "template": {
+ "path": {
+ "type": "file",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [
+ ".txt",
+ ".md",
+ ".mdx",
+ ".csv",
+ ".json",
+ ".yaml",
+ ".yml",
+ ".xml",
+ ".html",
+ ".htm",
+ ".pdf",
+ ".docx"
+ ],
+ "file_path": "51e2b78a-199b-4054-9f32-e288eef6924c/Langflow conversation.pdf",
+ "password": false,
+ "name": "path",
+ "display_name": "Path",
+ "advanced": false,
+ "dynamic": false,
+ "info": "Supported file types: txt, md, mdx, csv, json, yaml, yml, xml, html, htm, pdf, docx",
+ "load_from_db": false,
+ "title_case": false,
+ "value": ""
+ },
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from pathlib import Path\nfrom typing import Any, Dict\n\nfrom langflow.base.data.utils import TEXT_FILE_TYPES, parse_text_file_to_record\nfrom langflow.interface.custom.custom_component import CustomComponent\nfrom langflow.schema import Record\n\n\nclass FileComponent(CustomComponent):\n display_name = \"File\"\n description = \"A generic file loader.\"\n icon = \"file-text\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"path\": {\n \"display_name\": \"Path\",\n \"field_type\": \"file\",\n \"file_types\": TEXT_FILE_TYPES,\n \"info\": f\"Supported file types: {', '.join(TEXT_FILE_TYPES)}\",\n },\n \"silent_errors\": {\n \"display_name\": \"Silent Errors\",\n \"advanced\": True,\n \"info\": \"If true, errors will not raise an exception.\",\n },\n }\n\n def load_file(self, path: str, silent_errors: bool = False) -> Record:\n resolved_path = self.resolve_path(path)\n path_obj = Path(resolved_path)\n extension = path_obj.suffix[1:].lower()\n if extension == \"doc\":\n raise ValueError(\"doc files are not supported. Please save as .docx\")\n if extension not in TEXT_FILE_TYPES:\n raise ValueError(f\"Unsupported file type: {extension}\")\n record = parse_text_file_to_record(resolved_path, silent_errors)\n self.status = record if record else \"No data\"\n return record or Record()\n\n def build(\n self,\n path: str,\n silent_errors: bool = False,\n ) -> Record:\n record = self.load_file(path, silent_errors)\n self.status = record\n return record\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "silent_errors": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "silent_errors",
+ "display_name": "Silent Errors",
+ "advanced": true,
+ "dynamic": false,
+ "info": "If true, errors will not raise an exception.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "_type": "CustomComponent"
},
- {
- "source": "OpenAIModel-EjXlN",
- "target": "ChatOutput-Q39I8",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153OpenAIModel\u0153,\u0153id\u0153:\u0153OpenAIModel-EjXlN\u0153}",
- "targetHandle": "{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153ChatOutput-Q39I8\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "id": "reactflow__edge-OpenAIModel-EjXlN{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153OpenAIModel\u0153,\u0153id\u0153:\u0153OpenAIModel-EjXlN\u0153}-ChatOutput-Q39I8{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153ChatOutput-Q39I8\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "input_value",
- "id": "ChatOutput-Q39I8",
- "inputTypes": [
- "Text"
- ],
- "type": "str"
- },
- "sourceHandle": {
- "baseClasses": [
- "object",
- "Text",
- "str"
- ],
- "dataType": "OpenAIModel",
- "id": "OpenAIModel-EjXlN"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "selected": false
+ "description": "A generic file loader.",
+ "icon": "file-text",
+ "base_classes": ["Record"],
+ "display_name": "File",
+ "documentation": "",
+ "custom_fields": {
+ "path": null,
+ "silent_errors": null
},
- {
- "source": "File-t0a6a",
- "target": "RecursiveCharacterTextSplitter-tR9QM",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153File\u0153,\u0153id\u0153:\u0153File-t0a6a\u0153}",
- "targetHandle": "{\u0153fieldName\u0153:\u0153inputs\u0153,\u0153id\u0153:\u0153RecursiveCharacterTextSplitter-tR9QM\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153Record\u0153],\u0153type\u0153:\u0153Document\u0153}",
- "id": "reactflow__edge-File-t0a6a{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153File\u0153,\u0153id\u0153:\u0153File-t0a6a\u0153}-RecursiveCharacterTextSplitter-tR9QM{\u0153fieldName\u0153:\u0153inputs\u0153,\u0153id\u0153:\u0153RecursiveCharacterTextSplitter-tR9QM\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153Record\u0153],\u0153type\u0153:\u0153Document\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "inputs",
- "id": "RecursiveCharacterTextSplitter-tR9QM",
- "inputTypes": [
- "Document",
- "Record"
- ],
- "type": "Document"
- },
- "sourceHandle": {
- "baseClasses": [
- "Record"
- ],
- "dataType": "File",
- "id": "File-t0a6a"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "selected": false
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "File-t0a6a"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 281,
+ "positionAbsolute": {
+ "x": 2257.233450682836,
+ "y": 1747.5389618367233
+ },
+ "dragging": false
+ },
+ {
+ "id": "RecursiveCharacterTextSplitter-tR9QM",
+ "type": "genericNode",
+ "position": {
+ "x": 2791.013514133929,
+ "y": 1462.9588953494142
+ },
+ "data": {
+ "type": "RecursiveCharacterTextSplitter",
+ "node": {
+ "template": {
+ "inputs": {
+ "type": "Document",
+ "required": true,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "inputs",
+ "display_name": "Input",
+ "advanced": false,
+ "input_types": ["Document", "Record"],
+ "dynamic": false,
+ "info": "The texts to split.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "chunk_overlap": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 200,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "chunk_overlap",
+ "display_name": "Chunk Overlap",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The amount of overlap between chunks.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "chunk_size": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 1000,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "chunk_size",
+ "display_name": "Chunk Size",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The maximum length of each chunk.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from typing import Optional\n\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain_core.documents import Document\n\nfrom langflow.interface.custom.custom_component import CustomComponent\nfrom langflow.schema import Record\nfrom langflow.utils.util import build_loader_repr_from_records, unescape_string\n\n\nclass RecursiveCharacterTextSplitterComponent(CustomComponent):\n display_name: str = \"Recursive Character Text Splitter\"\n description: str = \"Split text into chunks of a specified length.\"\n documentation: str = \"https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter\"\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Input\",\n \"info\": \"The texts to split.\",\n \"input_types\": [\"Document\", \"Record\"],\n },\n \"separators\": {\n \"display_name\": \"Separators\",\n \"info\": 'The characters to split on.\\nIf left empty defaults to [\"\\\\n\\\\n\", \"\\\\n\", \" \", \"\"].',\n \"is_list\": True,\n },\n \"chunk_size\": {\n \"display_name\": \"Chunk Size\",\n \"info\": \"The maximum length of each chunk.\",\n \"field_type\": \"int\",\n \"value\": 1000,\n },\n \"chunk_overlap\": {\n \"display_name\": \"Chunk Overlap\",\n \"info\": \"The amount of overlap between chunks.\",\n \"field_type\": \"int\",\n \"value\": 200,\n },\n \"code\": {\"show\": False},\n }\n\n def build(\n self,\n inputs: list[Document],\n separators: Optional[list[str]] = None,\n chunk_size: Optional[int] = 1000,\n chunk_overlap: Optional[int] = 200,\n ) -> list[Record]:\n \"\"\"\n Split text into chunks of a specified length.\n\n Args:\n separators (list[str]): The characters to split on.\n chunk_size (int): The maximum length of each chunk.\n chunk_overlap (int): The amount of overlap between chunks.\n length_function (function): The function to use to calculate the length of the text.\n\n Returns:\n list[str]: The chunks of text.\n \"\"\"\n\n if separators == \"\":\n separators = 
None\n elif separators:\n # check if the separators list has escaped characters\n # if there are escaped characters, unescape them\n separators = [unescape_string(x) for x in separators]\n\n # Make sure chunk_size and chunk_overlap are ints\n if isinstance(chunk_size, str):\n chunk_size = int(chunk_size)\n if isinstance(chunk_overlap, str):\n chunk_overlap = int(chunk_overlap)\n splitter = RecursiveCharacterTextSplitter(\n separators=separators,\n chunk_size=chunk_size,\n chunk_overlap=chunk_overlap,\n )\n documents = []\n for _input in inputs:\n if isinstance(_input, Record):\n documents.append(_input.to_lc_document())\n else:\n documents.append(_input)\n docs = splitter.split_documents(documents)\n records = self.to_records(docs)\n self.repr_value = build_loader_repr_from_records(records)\n return records\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "separators": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "separators",
+ "display_name": "Separators",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The characters to split on.\nIf left empty defaults to [\"\\n\\n\", \"\\n\", \" \", \"\"].",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": [""]
+ },
+ "_type": "CustomComponent"
},
- {
- "source": "OpenAIEmbeddings-ZlOk1",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Embeddings\u0153],\u0153dataType\u0153:\u0153OpenAIEmbeddings\u0153,\u0153id\u0153:\u0153OpenAIEmbeddings-ZlOk1\u0153}",
- "target": "AstraDBSearch-41nRz",
- "targetHandle": "{\u0153fieldName\u0153:\u0153embedding\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Embeddings\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "embedding",
- "id": "AstraDBSearch-41nRz",
- "inputTypes": null,
- "type": "Embeddings"
- },
- "sourceHandle": {
- "baseClasses": [
- "Embeddings"
- ],
- "dataType": "OpenAIEmbeddings",
- "id": "OpenAIEmbeddings-ZlOk1"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "id": "reactflow__edge-OpenAIEmbeddings-ZlOk1{\u0153baseClasses\u0153:[\u0153Embeddings\u0153],\u0153dataType\u0153:\u0153OpenAIEmbeddings\u0153,\u0153id\u0153:\u0153OpenAIEmbeddings-ZlOk1\u0153}-AstraDBSearch-41nRz{\u0153fieldName\u0153:\u0153embedding\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Embeddings\u0153}"
+ "description": "Split text into chunks of a specified length.",
+ "base_classes": ["Record"],
+ "display_name": "Recursive Character Text Splitter",
+ "documentation": "https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter",
+ "custom_fields": {
+ "inputs": null,
+ "separators": null,
+ "chunk_size": null,
+ "chunk_overlap": null
},
- {
- "source": "ChatInput-yxMKE",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Text\u0153,\u0153str\u0153,\u0153object\u0153,\u0153Record\u0153],\u0153dataType\u0153:\u0153ChatInput\u0153,\u0153id\u0153:\u0153ChatInput-yxMKE\u0153}",
- "target": "AstraDBSearch-41nRz",
- "targetHandle": "{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "input_value",
- "id": "AstraDBSearch-41nRz",
- "inputTypes": [
- "Text"
- ],
- "type": "str"
- },
- "sourceHandle": {
- "baseClasses": [
- "Text",
- "str",
- "object",
- "Record"
- ],
- "dataType": "ChatInput",
- "id": "ChatInput-yxMKE"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "id": "reactflow__edge-ChatInput-yxMKE{\u0153baseClasses\u0153:[\u0153Text\u0153,\u0153str\u0153,\u0153object\u0153,\u0153Record\u0153],\u0153dataType\u0153:\u0153ChatInput\u0153,\u0153id\u0153:\u0153ChatInput-yxMKE\u0153}-AstraDBSearch-41nRz{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}"
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "RecursiveCharacterTextSplitter-tR9QM"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 501,
+ "positionAbsolute": {
+ "x": 2791.013514133929,
+ "y": 1462.9588953494142
+ },
+ "dragging": false
+ },
+ {
+ "id": "AstraDBSearch-41nRz",
+ "type": "genericNode",
+ "position": {
+ "x": 1723.976434815103,
+ "y": 277.03317407245913
+ },
+ "data": {
+ "type": "AstraDBSearch",
+ "node": {
+ "template": {
+ "embedding": {
+ "type": "Embeddings",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "embedding",
+ "display_name": "Embedding",
+ "advanced": false,
+ "dynamic": false,
+ "info": "Embedding to use",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "input_value": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "input_value",
+ "display_name": "Input Value",
+ "advanced": false,
+ "dynamic": false,
+ "info": "Input value to search",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "api_endpoint": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "api_endpoint",
+ "display_name": "API Endpoint",
+ "advanced": false,
+ "dynamic": false,
+ "info": "API endpoint URL for the Astra DB service.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "ASTRA_DB_API_ENDPOINT"
+ },
+ "batch_size": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "batch_size",
+ "display_name": "Batch Size",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional number of records to process in a single batch.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "bulk_delete_concurrency": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "bulk_delete_concurrency",
+ "display_name": "Bulk Delete Concurrency",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional concurrency level for bulk delete operations.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "bulk_insert_batch_concurrency": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "bulk_insert_batch_concurrency",
+ "display_name": "Bulk Insert Batch Concurrency",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional concurrency level for bulk insert operations.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "bulk_insert_overwrite_concurrency": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "bulk_insert_overwrite_concurrency",
+ "display_name": "Bulk Insert Overwrite Concurrency",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional concurrency level for bulk insert operations that overwrite existing records.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from typing import List, Optional\n\nfrom langflow.components.vectorstores.AstraDB import AstraDBVectorStoreComponent\nfrom langflow.components.vectorstores.base.model import LCVectorStoreComponent\nfrom langflow.field_typing import Embeddings, Text\nfrom langflow.schema import Record\n\n\nclass AstraDBSearchComponent(LCVectorStoreComponent):\n display_name = \"Astra DB Search\"\n description = \"Searches an existing Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"input_value\", \"embedding\"]\n\n def build_config(self):\n return {\n \"search_type\": {\n \"display_name\": \"Search Type\",\n \"options\": [\"Similarity\", \"MMR\"],\n },\n \"input_value\": {\n \"display_name\": \"Input Value\",\n \"info\": \"Input value to search\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": \"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n \"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of records to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": 
True,\n },\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing records.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n \"info\": \"Configuration mode for setting up the vector store, with options like \u201cSync\u201d, \u201cAsync\u201d, or \u201cOff\u201d.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n \"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n \"number_of_results\": {\n \"display_name\": \"Number of Results\",\n \"info\": \"Number of results to return.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n collection_name: str,\n input_value: Text,\n token: str,\n api_endpoint: str,\n search_type: str = \"Similarity\",\n number_of_results: int = 4,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n 
bulk_insert_overwrite_concurrency: Optional[int] = None,\n bulk_delete_concurrency: Optional[int] = None,\n setup_mode: str = \"Sync\",\n pre_delete_collection: bool = False,\n metadata_indexing_include: Optional[List[str]] = None,\n metadata_indexing_exclude: Optional[List[str]] = None,\n collection_indexing_policy: Optional[dict] = None,\n ) -> List[Record]:\n vector_store = AstraDBVectorStoreComponent().build(\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n try:\n return self.search_with_vector_store(input_value, search_type, vector_store, k=number_of_results)\n except KeyError as e:\n if \"content\" in str(e):\n raise ValueError(\n \"You should ingest data through Langflow (or LangChain) to query it in Langflow. Your collection does not contain a field name 'content'.\"\n )\n else:\n raise e\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "collection_indexing_policy": {
+ "type": "dict",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "collection_indexing_policy",
+ "display_name": "Collection Indexing Policy",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional dictionary defining the indexing policy for the collection.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "collection_name": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "collection_name",
+ "display_name": "Collection Name",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The name of the collection within Astra DB where the vectors will be stored.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "langflow"
+ },
+ "metadata_indexing_exclude": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "metadata_indexing_exclude",
+ "display_name": "Metadata Indexing Exclude",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional list of metadata fields to exclude from the indexing.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "metadata_indexing_include": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "metadata_indexing_include",
+ "display_name": "Metadata Indexing Include",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional list of metadata fields to include in the indexing.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "metric": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "metric",
+ "display_name": "Metric",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional distance metric for vector comparisons in the vector store.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "namespace": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "namespace",
+ "display_name": "Namespace",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional namespace within Astra DB to use for the collection.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "number_of_results": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 4,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "number_of_results",
+ "display_name": "Number of Results",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Number of results to return.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "pre_delete_collection": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "pre_delete_collection",
+ "display_name": "Pre Delete Collection",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Boolean flag to determine whether to delete the collection before creating a new one.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "search_type": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "Similarity",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": ["Similarity", "MMR"],
+ "name": "search_type",
+ "display_name": "Search Type",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "setup_mode": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "Sync",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": ["Sync", "Async", "Off"],
+ "name": "setup_mode",
+ "display_name": "Setup Mode",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Configuration mode for setting up the vector store, with options like \u201cSync\u201d, \u201cAsync\u201d, or \u201cOff\u201d.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "token": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "token",
+ "display_name": "Token",
+ "advanced": false,
+ "dynamic": false,
+ "info": "Authentication token for accessing Astra DB.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "ASTRA_DB_APPLICATION_TOKEN"
+ },
+ "_type": "CustomComponent"
},
- {
- "source": "RecursiveCharacterTextSplitter-tR9QM",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153RecursiveCharacterTextSplitter\u0153,\u0153id\u0153:\u0153RecursiveCharacterTextSplitter-tR9QM\u0153}",
- "target": "AstraDB-eUCSS",
- "targetHandle": "{\u0153fieldName\u0153:\u0153inputs\u0153,\u0153id\u0153:\u0153AstraDB-eUCSS\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Record\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "inputs",
- "id": "AstraDB-eUCSS",
- "inputTypes": null,
- "type": "Record"
- },
- "sourceHandle": {
- "baseClasses": [
- "Record"
- ],
- "dataType": "RecursiveCharacterTextSplitter",
- "id": "RecursiveCharacterTextSplitter-tR9QM"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "id": "reactflow__edge-RecursiveCharacterTextSplitter-tR9QM{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153RecursiveCharacterTextSplitter\u0153,\u0153id\u0153:\u0153RecursiveCharacterTextSplitter-tR9QM\u0153}-AstraDB-eUCSS{\u0153fieldName\u0153:\u0153inputs\u0153,\u0153id\u0153:\u0153AstraDB-eUCSS\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Record\u0153}",
- "selected": false
+ "description": "Searches an existing Astra DB Vector Store.",
+ "icon": "AstraDB",
+ "base_classes": ["Record"],
+ "display_name": "Astra DB Search",
+ "documentation": "",
+ "custom_fields": {
+ "embedding": null,
+ "collection_name": null,
+ "input_value": null,
+ "token": null,
+ "api_endpoint": null,
+ "search_type": null,
+ "number_of_results": null,
+ "namespace": null,
+ "metric": null,
+ "batch_size": null,
+ "bulk_insert_batch_concurrency": null,
+ "bulk_insert_overwrite_concurrency": null,
+ "bulk_delete_concurrency": null,
+ "setup_mode": null,
+ "pre_delete_collection": null,
+ "metadata_indexing_include": null,
+ "metadata_indexing_exclude": null,
+ "collection_indexing_policy": null
},
- {
- "source": "OpenAIEmbeddings-9TPjc",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Embeddings\u0153],\u0153dataType\u0153:\u0153OpenAIEmbeddings\u0153,\u0153id\u0153:\u0153OpenAIEmbeddings-9TPjc\u0153}",
- "target": "AstraDB-eUCSS",
- "targetHandle": "{\u0153fieldName\u0153:\u0153embedding\u0153,\u0153id\u0153:\u0153AstraDB-eUCSS\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Embeddings\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "embedding",
- "id": "AstraDB-eUCSS",
- "inputTypes": null,
- "type": "Embeddings"
- },
- "sourceHandle": {
- "baseClasses": [
- "Embeddings"
- ],
- "dataType": "OpenAIEmbeddings",
- "id": "OpenAIEmbeddings-9TPjc"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "id": "reactflow__edge-OpenAIEmbeddings-9TPjc{\u0153baseClasses\u0153:[\u0153Embeddings\u0153],\u0153dataType\u0153:\u0153OpenAIEmbeddings\u0153,\u0153id\u0153:\u0153OpenAIEmbeddings-9TPjc\u0153}-AstraDB-eUCSS{\u0153fieldName\u0153:\u0153embedding\u0153,\u0153id\u0153:\u0153AstraDB-eUCSS\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Embeddings\u0153}",
- "selected": false
- },
- {
- "source": "AstraDBSearch-41nRz",
- "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153AstraDBSearch\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153}",
- "target": "TextOutput-BDknO",
- "targetHandle": "{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153TextOutput-BDknO\u0153,\u0153inputTypes\u0153:[\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
- "data": {
- "targetHandle": {
- "fieldName": "input_value",
- "id": "TextOutput-BDknO",
- "inputTypes": [
- "Record",
- "Text"
- ],
- "type": "str"
- },
- "sourceHandle": {
- "baseClasses": [
- "Record"
- ],
- "dataType": "AstraDBSearch",
- "id": "AstraDBSearch-41nRz"
- }
- },
- "style": {
- "stroke": "#555"
- },
- "className": "stroke-gray-900 stroke-connection",
- "id": "reactflow__edge-AstraDBSearch-41nRz{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153AstraDBSearch\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153}-TextOutput-BDknO{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153TextOutput-BDknO\u0153,\u0153inputTypes\u0153:[\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}"
- }
- ],
- "viewport": {
- "x": -259.6782520315529,
- "y": 90.3428735006047,
- "zoom": 0.2687057134854984
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [
+ "token",
+ "api_endpoint",
+ "collection_name",
+ "input_value",
+ "embedding"
+ ],
+ "beta": false
+ },
+ "id": "AstraDBSearch-41nRz"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 713,
+ "dragging": false,
+ "positionAbsolute": {
+ "x": 1723.976434815103,
+ "y": 277.03317407245913
}
- },
- "description": "Visit https://pre-release.langflow.org/tutorials/rag-with-astradb for a detailed guide of this project.\nThis project give you both Ingestion and RAG in a single file. You'll need to visit https://astra.datastax.com/ to create an Astra DB instance, your Token and grab an API Endpoint.\nRunning this project requires you to add a file in the Files component, then define a Collection Name and click on the Play icon on the Astra DB component. \n\nAfter the ingestion ends you are ready to click on the Run button at the lower left corner and start asking questions about your data.",
- "name": "Vector Store RAG",
- "last_tested_version": "1.0.0a0",
- "is_component": false
+ },
+ {
+ "id": "AstraDB-eUCSS",
+ "type": "genericNode",
+ "position": {
+ "x": 3372.04958055989,
+ "y": 1611.0742035495277
+ },
+ "data": {
+ "type": "AstraDB",
+ "node": {
+ "template": {
+ "embedding": {
+ "type": "Embeddings",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "embedding",
+ "display_name": "Embedding",
+ "advanced": false,
+ "dynamic": false,
+ "info": "Embedding to use",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "inputs": {
+ "type": "Record",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "inputs",
+ "display_name": "Inputs",
+ "advanced": false,
+ "dynamic": false,
+ "info": "Optional list of records to be processed and stored in the vector store.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "api_endpoint": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "api_endpoint",
+ "display_name": "API Endpoint",
+ "advanced": false,
+ "dynamic": false,
+ "info": "API endpoint URL for the Astra DB service.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "ASTRA_DB_API_ENDPOINT"
+ },
+ "batch_size": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "batch_size",
+ "display_name": "Batch Size",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional number of records to process in a single batch.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "bulk_delete_concurrency": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "bulk_delete_concurrency",
+ "display_name": "Bulk Delete Concurrency",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional concurrency level for bulk delete operations.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "bulk_insert_batch_concurrency": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "bulk_insert_batch_concurrency",
+ "display_name": "Bulk Insert Batch Concurrency",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional concurrency level for bulk insert operations.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "bulk_insert_overwrite_concurrency": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "bulk_insert_overwrite_concurrency",
+ "display_name": "Bulk Insert Overwrite Concurrency",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional concurrency level for bulk insert operations that overwrite existing records.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from typing import List, Optional\n\nfrom langchain_astradb import AstraDBVectorStore\nfrom langchain_astradb.utils.astradb import SetupMode\n\nfrom langflow.custom import CustomComponent\nfrom langflow.field_typing import Embeddings, VectorStore\nfrom langflow.schema import Record\n\n\nclass AstraDBVectorStoreComponent(CustomComponent):\n display_name = \"Astra DB\"\n description = \"Builds or loads an Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"inputs\", \"embedding\"]\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Inputs\",\n \"info\": \"Optional list of records to be processed and stored in the vector store.\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": \"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n \"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of records to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": True,\n },\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing records.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n \"info\": \"Configuration mode for setting up the vector store, with options like \u201cSync\u201d, \u201cAsync\u201d, or \u201cOff\u201d.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n \"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n token: str,\n api_endpoint: str,\n collection_name: str,\n inputs: Optional[List[Record]] = None,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n bulk_insert_overwrite_concurrency: Optional[int] = None,\n bulk_delete_concurrency: Optional[int] = None,\n setup_mode: str = \"Async\",\n pre_delete_collection: bool = False,\n metadata_indexing_include: Optional[List[str]] = None,\n metadata_indexing_exclude: Optional[List[str]] = None,\n collection_indexing_policy: Optional[dict] = None,\n ) -> VectorStore:\n try:\n setup_mode_value = SetupMode[setup_mode.upper()]\n except KeyError:\n raise ValueError(f\"Invalid setup mode: {setup_mode}\")\n if inputs:\n documents = [_input.to_lc_document() for _input in inputs]\n\n vector_store = AstraDBVectorStore.from_documents(\n documents=documents,\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n else:\n vector_store = AstraDBVectorStore(\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n\n return vector_store\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "collection_indexing_policy": {
+ "type": "dict",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "collection_indexing_policy",
+ "display_name": "Collection Indexing Policy",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional dictionary defining the indexing policy for the collection.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "collection_name": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "collection_name",
+ "display_name": "Collection Name",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The name of the collection within Astra DB where the vectors will be stored.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "langflow"
+ },
+ "metadata_indexing_exclude": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "metadata_indexing_exclude",
+ "display_name": "Metadata Indexing Exclude",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional list of metadata fields to exclude from the indexing.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "metadata_indexing_include": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "metadata_indexing_include",
+ "display_name": "Metadata Indexing Include",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional list of metadata fields to include in the indexing.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "metric": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "metric",
+ "display_name": "Metric",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional distance metric for vector comparisons in the vector store.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "namespace": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "namespace",
+ "display_name": "Namespace",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Optional namespace within Astra DB to use for the collection.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "pre_delete_collection": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "pre_delete_collection",
+ "display_name": "Pre Delete Collection",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Boolean flag to determine whether to delete the collection before creating a new one.",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "setup_mode": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "Async",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": ["Sync", "Async", "Off"],
+ "name": "setup_mode",
+ "display_name": "Setup Mode",
+ "advanced": true,
+ "dynamic": false,
+ "info": "Configuration mode for setting up the vector store, with options like \u201cSync\u201d, \u201cAsync\u201d, or \u201cOff\u201d.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "token": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "token",
+ "display_name": "Token",
+ "advanced": false,
+ "dynamic": false,
+ "info": "Authentication token for accessing Astra DB.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "ASTRA_DB_APPLICATION_TOKEN"
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "Builds or loads an Astra DB Vector Store.",
+ "icon": "AstraDB",
+ "base_classes": ["VectorStore"],
+ "display_name": "Astra DB",
+ "documentation": "",
+ "custom_fields": {
+ "embedding": null,
+ "token": null,
+ "api_endpoint": null,
+ "collection_name": null,
+ "inputs": null,
+ "namespace": null,
+ "metric": null,
+ "batch_size": null,
+ "bulk_insert_batch_concurrency": null,
+ "bulk_insert_overwrite_concurrency": null,
+ "bulk_delete_concurrency": null,
+ "setup_mode": null,
+ "pre_delete_collection": null,
+ "metadata_indexing_include": null,
+ "metadata_indexing_exclude": null,
+ "collection_indexing_policy": null
+ },
+ "output_types": ["VectorStore"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [
+ "token",
+ "api_endpoint",
+ "collection_name",
+ "inputs",
+ "embedding"
+ ],
+ "beta": false
+ },
+ "id": "AstraDB-eUCSS"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 573,
+ "positionAbsolute": {
+ "x": 3372.04958055989,
+ "y": 1611.0742035495277
+ },
+ "dragging": false
+ },
+ {
+ "id": "OpenAIEmbeddings-9TPjc",
+ "type": "genericNode",
+ "position": {
+ "x": 2814.0402191223047,
+ "y": 1955.9268168273086
+ },
+ "data": {
+ "type": "OpenAIEmbeddings",
+ "node": {
+ "template": {
+ "allowed_special": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": [],
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "allowed_special",
+ "display_name": "Allowed Special",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "chunk_size": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 1000,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "chunk_size",
+ "display_name": "Chunk Size",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "client": {
+ "type": "Any",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "client",
+ "display_name": "Client",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "from typing import Any, Dict, List, Optional\n\nfrom langchain_openai.embeddings.base import OpenAIEmbeddings\n\nfrom langflow.field_typing import Embeddings, NestedDict\nfrom langflow.interface.custom.custom_component import CustomComponent\n\n\nclass OpenAIEmbeddingsComponent(CustomComponent):\n display_name = \"OpenAI Embeddings\"\n description = \"Generate embeddings using OpenAI models.\"\n\n def build_config(self):\n return {\n \"allowed_special\": {\n \"display_name\": \"Allowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"default_headers\": {\n \"display_name\": \"Default Headers\",\n \"advanced\": True,\n \"field_type\": \"dict\",\n },\n \"default_query\": {\n \"display_name\": \"Default Query\",\n \"advanced\": True,\n \"field_type\": \"NestedDict\",\n },\n \"disallowed_special\": {\n \"display_name\": \"Disallowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"chunk_size\": {\"display_name\": \"Chunk Size\", \"advanced\": True},\n \"client\": {\"display_name\": \"Client\", \"advanced\": True},\n \"deployment\": {\"display_name\": \"Deployment\", \"advanced\": True},\n \"embedding_ctx_length\": {\n \"display_name\": \"Embedding Context Length\",\n \"advanced\": True,\n },\n \"max_retries\": {\"display_name\": \"Max Retries\", \"advanced\": True},\n \"model\": {\n \"display_name\": \"Model\",\n \"advanced\": False,\n \"options\": [\n \"text-embedding-3-small\",\n \"text-embedding-3-large\",\n \"text-embedding-ada-002\",\n ],\n },\n \"model_kwargs\": {\"display_name\": \"Model Kwargs\", \"advanced\": True},\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"password\": True,\n \"advanced\": True,\n },\n \"openai_api_key\": {\"display_name\": \"OpenAI API Key\", \"password\": True},\n \"openai_api_type\": {\n \"display_name\": \"OpenAI API Type\",\n \"advanced\": True,\n \"password\": True,\n },\n \"openai_api_version\": {\n \"display_name\": \"OpenAI API Version\",\n \"advanced\": True,\n },\n \"openai_organization\": {\n \"display_name\": \"OpenAI Organization\",\n \"advanced\": True,\n },\n \"openai_proxy\": {\"display_name\": \"OpenAI Proxy\", \"advanced\": True},\n \"request_timeout\": {\"display_name\": \"Request Timeout\", \"advanced\": True},\n \"show_progress_bar\": {\n \"display_name\": \"Show Progress Bar\",\n \"advanced\": True,\n },\n \"skip_empty\": {\"display_name\": \"Skip Empty\", \"advanced\": True},\n \"tiktoken_model_name\": {\n \"display_name\": \"TikToken Model Name\",\n \"advanced\": True,\n },\n \"tiktoken_enable\": {\"display_name\": \"TikToken Enable\", \"advanced\": True},\n }\n\n def build(\n self,\n openai_api_key: str,\n default_headers: Optional[Dict[str, str]] = None,\n default_query: Optional[NestedDict] = {},\n allowed_special: List[str] = [],\n disallowed_special: List[str] = [\"all\"],\n chunk_size: int = 1000,\n client: Optional[Any] = None,\n deployment: str = \"text-embedding-ada-002\",\n embedding_ctx_length: int = 8191,\n max_retries: int = 6,\n model: str = \"text-embedding-ada-002\",\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n openai_api_type: Optional[str] = None,\n openai_api_version: Optional[str] = None,\n openai_organization: Optional[str] = None,\n openai_proxy: Optional[str] = None,\n request_timeout: Optional[float] = None,\n show_progress_bar: bool = False,\n skip_empty: bool = False,\n tiktoken_enable: bool = True,\n tiktoken_model_name: Optional[str] = None,\n ) -> Embeddings:\n # This is to avoid errors with Vector Stores (e.g Chroma)\n if disallowed_special == [\"all\"]:\n disallowed_special = \"all\"  # type: ignore\n\n return OpenAIEmbeddings(\n tiktoken_enabled=tiktoken_enable,\n default_headers=default_headers,\n default_query=default_query,\n allowed_special=set(allowed_special),\n disallowed_special=\"all\",\n chunk_size=chunk_size,\n client=client,\n deployment=deployment,\n embedding_ctx_length=embedding_ctx_length,\n max_retries=max_retries,\n model=model,\n model_kwargs=model_kwargs,\n base_url=openai_api_base,\n api_key=openai_api_key,\n openai_api_type=openai_api_type,\n api_version=openai_api_version,\n organization=openai_organization,\n openai_proxy=openai_proxy,\n timeout=request_timeout,\n show_progress_bar=show_progress_bar,\n skip_empty=skip_empty,\n tiktoken_model_name=tiktoken_model_name,\n )\n",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "default_headers": {
+ "type": "dict",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "default_headers",
+ "display_name": "Default Headers",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "default_query": {
+ "type": "NestedDict",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": {},
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "default_query",
+ "display_name": "Default Query",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "deployment": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": "text-embedding-ada-002",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "deployment",
+ "display_name": "Deployment",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "disallowed_special": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": ["all"],
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "disallowed_special",
+ "display_name": "Disallowed Special",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "embedding_ctx_length": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 8191,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "embedding_ctx_length",
+ "display_name": "Embedding Context Length",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "max_retries": {
+ "type": "int",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": 6,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "max_retries",
+ "display_name": "Max Retries",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "model": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "text-embedding-ada-002",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": [
+ "text-embedding-3-small",
+ "text-embedding-3-large",
+ "text-embedding-ada-002"
+ ],
+ "name": "model",
+ "display_name": "Model",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "model_kwargs": {
+ "type": "NestedDict",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": {},
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "model_kwargs",
+ "display_name": "Model Kwargs",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "openai_api_base": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "openai_api_base",
+ "display_name": "OpenAI API Base",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_api_key": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "openai_api_key",
+ "display_name": "OpenAI API Key",
+ "advanced": false,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": ""
+ },
+ "openai_api_type": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "openai_api_type",
+ "display_name": "OpenAI API Type",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_api_version": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "openai_api_version",
+ "display_name": "OpenAI API Version",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_organization": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "openai_organization",
+ "display_name": "OpenAI Organization",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "openai_proxy": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "openai_proxy",
+ "display_name": "OpenAI Proxy",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "request_timeout": {
+ "type": "float",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "request_timeout",
+ "display_name": "Request Timeout",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "rangeSpec": {
+ "step_type": "float",
+ "min": -1,
+ "max": 1,
+ "step": 0.1
+ },
+ "load_from_db": false,
+ "title_case": false
+ },
+ "show_progress_bar": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "show_progress_bar",
+ "display_name": "Show Progress Bar",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "skip_empty": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "skip_empty",
+ "display_name": "Skip Empty",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "tiktoken_enable": {
+ "type": "bool",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": true,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "tiktoken_enable",
+ "display_name": "TikToken Enable",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "tiktoken_model_name": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "tiktoken_model_name",
+ "display_name": "TikToken Model Name",
+ "advanced": true,
+ "dynamic": false,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "Generate embeddings using OpenAI models.",
+ "base_classes": ["Embeddings"],
+ "display_name": "OpenAI Embeddings",
+ "documentation": "",
+ "custom_fields": {
+ "openai_api_key": null,
+ "default_headers": null,
+ "default_query": null,
+ "allowed_special": null,
+ "disallowed_special": null,
+ "chunk_size": null,
+ "client": null,
+ "deployment": null,
+ "embedding_ctx_length": null,
+ "max_retries": null,
+ "model": null,
+ "model_kwargs": null,
+ "openai_api_base": null,
+ "openai_api_type": null,
+ "openai_api_version": null,
+ "openai_organization": null,
+ "openai_proxy": null,
+ "request_timeout": null,
+ "show_progress_bar": null,
+ "skip_empty": null,
+ "tiktoken_enable": null,
+ "tiktoken_model_name": null
+ },
+ "output_types": ["Embeddings"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "OpenAIEmbeddings-9TPjc"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 383,
+ "positionAbsolute": {
+ "x": 2814.0402191223047,
+ "y": 1955.9268168273086
+ },
+ "dragging": false
+ }
+ ],
+ "edges": [
+ {
+ "source": "TextOutput-BDknO",
+ "target": "Prompt-xeI6K",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153TextOutput\u0153,\u0153id\u0153:\u0153TextOutput-BDknO\u0153}",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153context\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153BaseOutputParser\u0153,\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "id": "reactflow__edge-TextOutput-BDknO{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153TextOutput\u0153,\u0153id\u0153:\u0153TextOutput-BDknO\u0153}-Prompt-xeI6K{\u0153fieldName\u0153:\u0153context\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153BaseOutputParser\u0153,\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "context",
+ "id": "Prompt-xeI6K",
+ "inputTypes": ["Document", "BaseOutputParser", "Record", "Text"],
+ "type": "str"
+ },
+ "sourceHandle": {
+ "baseClasses": ["object", "Text", "str"],
+ "dataType": "TextOutput",
+ "id": "TextOutput-BDknO"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "selected": false
+ },
+ {
+ "source": "ChatInput-yxMKE",
+ "target": "Prompt-xeI6K",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Text\u0153,\u0153str\u0153,\u0153object\u0153,\u0153Record\u0153],\u0153dataType\u0153:\u0153ChatInput\u0153,\u0153id\u0153:\u0153ChatInput-yxMKE\u0153}",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153question\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153BaseOutputParser\u0153,\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "id": "reactflow__edge-ChatInput-yxMKE{\u0153baseClasses\u0153:[\u0153Text\u0153,\u0153str\u0153,\u0153object\u0153,\u0153Record\u0153],\u0153dataType\u0153:\u0153ChatInput\u0153,\u0153id\u0153:\u0153ChatInput-yxMKE\u0153}-Prompt-xeI6K{\u0153fieldName\u0153:\u0153question\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153BaseOutputParser\u0153,\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "question",
+ "id": "Prompt-xeI6K",
+ "inputTypes": ["Document", "BaseOutputParser", "Record", "Text"],
+ "type": "str"
+ },
+ "sourceHandle": {
+ "baseClasses": ["Text", "str", "object", "Record"],
+ "dataType": "ChatInput",
+ "id": "ChatInput-yxMKE"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "selected": false
+ },
+ {
+ "source": "Prompt-xeI6K",
+ "target": "OpenAIModel-EjXlN",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153Prompt\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153}",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153OpenAIModel-EjXlN\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "id": "reactflow__edge-Prompt-xeI6K{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153Prompt\u0153,\u0153id\u0153:\u0153Prompt-xeI6K\u0153}-OpenAIModel-EjXlN{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153OpenAIModel-EjXlN\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "input_value",
+ "id": "OpenAIModel-EjXlN",
+ "inputTypes": ["Text"],
+ "type": "str"
+ },
+ "sourceHandle": {
+ "baseClasses": ["object", "Text", "str"],
+ "dataType": "Prompt",
+ "id": "Prompt-xeI6K"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "selected": false
+ },
+ {
+ "source": "OpenAIModel-EjXlN",
+ "target": "ChatOutput-Q39I8",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153OpenAIModel\u0153,\u0153id\u0153:\u0153OpenAIModel-EjXlN\u0153}",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153ChatOutput-Q39I8\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "id": "reactflow__edge-OpenAIModel-EjXlN{\u0153baseClasses\u0153:[\u0153object\u0153,\u0153Text\u0153,\u0153str\u0153],\u0153dataType\u0153:\u0153OpenAIModel\u0153,\u0153id\u0153:\u0153OpenAIModel-EjXlN\u0153}-ChatOutput-Q39I8{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153ChatOutput-Q39I8\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "input_value",
+ "id": "ChatOutput-Q39I8",
+ "inputTypes": ["Text"],
+ "type": "str"
+ },
+ "sourceHandle": {
+ "baseClasses": ["object", "Text", "str"],
+ "dataType": "OpenAIModel",
+ "id": "OpenAIModel-EjXlN"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "selected": false
+ },
+ {
+ "source": "File-t0a6a",
+ "target": "RecursiveCharacterTextSplitter-tR9QM",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153File\u0153,\u0153id\u0153:\u0153File-t0a6a\u0153}",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153inputs\u0153,\u0153id\u0153:\u0153RecursiveCharacterTextSplitter-tR9QM\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153Record\u0153],\u0153type\u0153:\u0153Document\u0153}",
+ "id": "reactflow__edge-File-t0a6a{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153File\u0153,\u0153id\u0153:\u0153File-t0a6a\u0153}-RecursiveCharacterTextSplitter-tR9QM{\u0153fieldName\u0153:\u0153inputs\u0153,\u0153id\u0153:\u0153RecursiveCharacterTextSplitter-tR9QM\u0153,\u0153inputTypes\u0153:[\u0153Document\u0153,\u0153Record\u0153],\u0153type\u0153:\u0153Document\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "inputs",
+ "id": "RecursiveCharacterTextSplitter-tR9QM",
+ "inputTypes": ["Document", "Record"],
+ "type": "Document"
+ },
+ "sourceHandle": {
+ "baseClasses": ["Record"],
+ "dataType": "File",
+ "id": "File-t0a6a"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "selected": false
+ },
+ {
+ "source": "OpenAIEmbeddings-ZlOk1",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Embeddings\u0153],\u0153dataType\u0153:\u0153OpenAIEmbeddings\u0153,\u0153id\u0153:\u0153OpenAIEmbeddings-ZlOk1\u0153}",
+ "target": "AstraDBSearch-41nRz",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153embedding\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Embeddings\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "embedding",
+ "id": "AstraDBSearch-41nRz",
+ "inputTypes": null,
+ "type": "Embeddings"
+ },
+ "sourceHandle": {
+ "baseClasses": ["Embeddings"],
+ "dataType": "OpenAIEmbeddings",
+ "id": "OpenAIEmbeddings-ZlOk1"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "id": "reactflow__edge-OpenAIEmbeddings-ZlOk1{\u0153baseClasses\u0153:[\u0153Embeddings\u0153],\u0153dataType\u0153:\u0153OpenAIEmbeddings\u0153,\u0153id\u0153:\u0153OpenAIEmbeddings-ZlOk1\u0153}-AstraDBSearch-41nRz{\u0153fieldName\u0153:\u0153embedding\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Embeddings\u0153}"
+ },
+ {
+ "source": "ChatInput-yxMKE",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Text\u0153,\u0153str\u0153,\u0153object\u0153,\u0153Record\u0153],\u0153dataType\u0153:\u0153ChatInput\u0153,\u0153id\u0153:\u0153ChatInput-yxMKE\u0153}",
+ "target": "AstraDBSearch-41nRz",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "input_value",
+ "id": "AstraDBSearch-41nRz",
+ "inputTypes": ["Text"],
+ "type": "str"
+ },
+ "sourceHandle": {
+ "baseClasses": ["Text", "str", "object", "Record"],
+ "dataType": "ChatInput",
+ "id": "ChatInput-yxMKE"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "id": "reactflow__edge-ChatInput-yxMKE{\u0153baseClasses\u0153:[\u0153Text\u0153,\u0153str\u0153,\u0153object\u0153,\u0153Record\u0153],\u0153dataType\u0153:\u0153ChatInput\u0153,\u0153id\u0153:\u0153ChatInput-yxMKE\u0153}-AstraDBSearch-41nRz{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153,\u0153inputTypes\u0153:[\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}"
+ },
+ {
+ "source": "RecursiveCharacterTextSplitter-tR9QM",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153RecursiveCharacterTextSplitter\u0153,\u0153id\u0153:\u0153RecursiveCharacterTextSplitter-tR9QM\u0153}",
+ "target": "AstraDB-eUCSS",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153inputs\u0153,\u0153id\u0153:\u0153AstraDB-eUCSS\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Record\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "inputs",
+ "id": "AstraDB-eUCSS",
+ "inputTypes": null,
+ "type": "Record"
+ },
+ "sourceHandle": {
+ "baseClasses": ["Record"],
+ "dataType": "RecursiveCharacterTextSplitter",
+ "id": "RecursiveCharacterTextSplitter-tR9QM"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "id": "reactflow__edge-RecursiveCharacterTextSplitter-tR9QM{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153RecursiveCharacterTextSplitter\u0153,\u0153id\u0153:\u0153RecursiveCharacterTextSplitter-tR9QM\u0153}-AstraDB-eUCSS{\u0153fieldName\u0153:\u0153inputs\u0153,\u0153id\u0153:\u0153AstraDB-eUCSS\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Record\u0153}",
+ "selected": false
+ },
+ {
+ "source": "OpenAIEmbeddings-9TPjc",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Embeddings\u0153],\u0153dataType\u0153:\u0153OpenAIEmbeddings\u0153,\u0153id\u0153:\u0153OpenAIEmbeddings-9TPjc\u0153}",
+ "target": "AstraDB-eUCSS",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153embedding\u0153,\u0153id\u0153:\u0153AstraDB-eUCSS\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Embeddings\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "embedding",
+ "id": "AstraDB-eUCSS",
+ "inputTypes": null,
+ "type": "Embeddings"
+ },
+ "sourceHandle": {
+ "baseClasses": ["Embeddings"],
+ "dataType": "OpenAIEmbeddings",
+ "id": "OpenAIEmbeddings-9TPjc"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "id": "reactflow__edge-OpenAIEmbeddings-9TPjc{\u0153baseClasses\u0153:[\u0153Embeddings\u0153],\u0153dataType\u0153:\u0153OpenAIEmbeddings\u0153,\u0153id\u0153:\u0153OpenAIEmbeddings-9TPjc\u0153}-AstraDB-eUCSS{\u0153fieldName\u0153:\u0153embedding\u0153,\u0153id\u0153:\u0153AstraDB-eUCSS\u0153,\u0153inputTypes\u0153:null,\u0153type\u0153:\u0153Embeddings\u0153}",
+ "selected": false
+ },
+ {
+ "source": "AstraDBSearch-41nRz",
+ "sourceHandle": "{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153AstraDBSearch\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153}",
+ "target": "TextOutput-BDknO",
+ "targetHandle": "{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153TextOutput-BDknO\u0153,\u0153inputTypes\u0153:[\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}",
+ "data": {
+ "targetHandle": {
+ "fieldName": "input_value",
+ "id": "TextOutput-BDknO",
+ "inputTypes": ["Record", "Text"],
+ "type": "str"
+ },
+ "sourceHandle": {
+ "baseClasses": ["Record"],
+ "dataType": "AstraDBSearch",
+ "id": "AstraDBSearch-41nRz"
+ }
+ },
+ "style": {
+ "stroke": "#555"
+ },
+ "className": "stroke-gray-900 stroke-connection",
+ "id": "reactflow__edge-AstraDBSearch-41nRz{\u0153baseClasses\u0153:[\u0153Record\u0153],\u0153dataType\u0153:\u0153AstraDBSearch\u0153,\u0153id\u0153:\u0153AstraDBSearch-41nRz\u0153}-TextOutput-BDknO{\u0153fieldName\u0153:\u0153input_value\u0153,\u0153id\u0153:\u0153TextOutput-BDknO\u0153,\u0153inputTypes\u0153:[\u0153Record\u0153,\u0153Text\u0153],\u0153type\u0153:\u0153str\u0153}"
+ }
+ ],
+ "viewport": {
+ "x": -259.6782520315529,
+ "y": 90.3428735006047,
+ "zoom": 0.2687057134854984
+ }
+ },
+ "description": "Visit https://pre-release.langflow.org/tutorials/rag-with-astradb for a detailed guide of this project.\nThis project give you both Ingestion and RAG in a single file. You'll need to visit https://astra.datastax.com/ to create an Astra DB instance, your Token and grab an API Endpoint.\nRunning this project requires you to add a file in the Files component, then define a Collection Name and click on the Play icon on the Astra DB component. \n\nAfter the ingestion ends you are ready to click on the Run button at the lower left corner and start asking questions about your data.",
+ "name": "Vector Store RAG",
+ "last_tested_version": "1.0.0a0",
+ "is_component": false
}
diff --git a/docs/static/img/langflow_basic_howto.gif b/docs/static/img/langflow_basic_howto.gif
new file mode 100644
index 000000000..023a294e0
Binary files /dev/null and b/docs/static/img/langflow_basic_howto.gif differ
diff --git a/docs/static/img/notion/notion_bundle.jpg b/docs/static/img/notion/notion_bundle.jpg
new file mode 100644
index 000000000..b6dc62da7
Binary files /dev/null and b/docs/static/img/notion/notion_bundle.jpg differ
diff --git a/docs/static/json_files/Notion_Components_bundle.json b/docs/static/json_files/Notion_Components_bundle.json
index 21181187c..5e632ad9c 100644
--- a/docs/static/json_files/Notion_Components_bundle.json
+++ b/docs/static/json_files/Notion_Components_bundle.json
@@ -1 +1,881 @@
-{"id":"7cd51434-9767-450f-8742-27857367f8c2","data":{"nodes":[{"id":"RecordsToText-Q69g5","type":"genericNode","position":{"x":-2671.5528488127866,"y":-963.4266471378126},"data":{"type":"RecordsToText","node":{"template":{"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"import requests\r\nfrom typing import List\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\n\r\nclass NotionUserList(CustomComponent):\r\n display_name = \"List Users [Notion]\"\r\n description = \"Retrieve users from Notion.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/list-users\"\r\n icon = \"NotionDirectoryLoader\"\r\n \r\n def build_config(self):\r\n return {\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n notion_secret: str,\r\n ) -> List[Record]:\r\n url = \"https://api.notion.com/v1/users\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n response = requests.get(url, headers=headers)\r\n response.raise_for_status()\r\n\r\n data = response.json()\r\n results = data['results']\r\n\r\n records = []\r\n for user in results:\r\n id = user['id']\r\n type = user['type']\r\n name = user.get('name', '')\r\n avatar_url = user.get('avatar_url', '')\r\n\r\n record_data = {\r\n \"id\": id,\r\n \"type\": type,\r\n \"name\": name,\r\n \"avatar_url\": avatar_url,\r\n }\r\n\r\n output = \"User:\\n\"\r\n for key, value in record_data.items():\r\n output += f\"{key.replace('_', ' ').title()}: {value}\\n\"\r\n output += \"________________________\\n\"\r\n\r\n record = Record(text=output, data=record_data)\r\n records.append(record)\r\n\r\n self.status = \"\\n\".join(record.text for record in records)\r\n return 
records","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":true,"dynamic":true,"info":"","load_from_db":false,"title_case":false},"notion_secret":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":true,"name":"notion_secret","display_name":"Notion Secret","advanced":false,"dynamic":false,"info":"The Notion integration token.","load_from_db":false,"title_case":false,"input_types":["Text"],"value":""},"_type":"CustomComponent"},"description":"Retrieve users from Notion.","icon":"NotionDirectoryLoader","base_classes":["Record"],"display_name":"List Users [Notion] ","documentation":"https://docs.langflow.org/integrations/notion/list-users","custom_fields":{"notion_secret":null},"output_types":["Record"],"field_formatters":{},"frozen":false,"field_order":[],"beta":false},"id":"RecordsToText-Q69g5","description":"Retrieve users from Notion.","display_name":"List Users [Notion] "},"selected":false,"width":384,"height":289,"dragging":false,"positionAbsolute":{"x":-2671.5528488127866,"y":-963.4266471378126}},{"id":"CustomComponent-PU0K5","type":"genericNode","position":{"x":-3077.2269116193215,"y":-960.9450220159636},"data":{"type":"CustomComponent","node":{"template":{"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"import json\r\nfrom typing import Optional\r\n\r\nimport requests\r\nfrom langflow.custom import CustomComponent\r\n\r\n\r\nclass NotionPageCreator(CustomComponent):\r\n display_name = \"Create Page [Notion]\"\r\n description = \"A component for creating Notion pages.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/page-create\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n def build_config(self):\r\n return {\r\n \"database_id\": {\r\n \"display_name\": \"Database ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion database.\",\r\n },\r\n \"notion_secret\": 
{\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n \"properties\": {\r\n \"display_name\": \"Properties\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The properties of the new page. Depending on your database setup, this can change. E.G: {'Task name': {'id': 'title', 'type': 'title', 'title': [{'type': 'text', 'text': {'content': 'Send Notion Components to LF', 'link': null}}]}}\",\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n database_id: str,\r\n notion_secret: str,\r\n properties: str = '{\"Task name\": {\"id\": \"title\", \"type\": \"title\", \"title\": [{\"type\": \"text\", \"text\": {\"content\": \"Send Notion Components to LF\", \"link\": null}}]}}',\r\n ) -> str:\r\n if not database_id or not properties:\r\n raise ValueError(\"Invalid input. Please provide 'database_id' and 'properties'.\")\r\n\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n data = {\r\n \"parent\": {\"database_id\": database_id},\r\n \"properties\": json.loads(properties),\r\n }\r\n\r\n response = requests.post(\"https://api.notion.com/v1/pages\", headers=headers, json=data)\r\n\r\n if response.status_code == 200:\r\n page_id = response.json()[\"id\"]\r\n self.status = f\"Successfully created Notion page with ID: {page_id}\\n {str(response.json())}\"\r\n return response.json()\r\n else:\r\n error_message = f\"Failed to create Notion page. 
Status code: {response.status_code}, Error: {response.text}\"\r\n self.status = error_message\r\n raise Exception(error_message)","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":true,"dynamic":true,"info":"","load_from_db":false,"title_case":false},"database_id":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"database_id","display_name":"Database ID","advanced":false,"dynamic":false,"info":"The ID of the Notion database.","load_from_db":false,"title_case":false,"input_types":["Text"]},"notion_secret":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":true,"name":"notion_secret","display_name":"Notion Secret","advanced":false,"dynamic":false,"info":"The Notion integration token.","load_from_db":false,"title_case":false,"input_types":["Text"],"value":""},"properties":{"type":"str","required":false,"placeholder":"","list":false,"show":true,"multiline":false,"value":"{\"Task name\": {\"id\": \"title\", \"type\": \"title\", \"title\": [{\"type\": \"text\", \"text\": {\"content\": \"Send Notion Components to LF\", \"link\": null}}]}}","fileTypes":[],"file_path":"","password":false,"name":"properties","display_name":"Properties","advanced":false,"dynamic":false,"info":"The properties of the new page. Depending on your database setup, this can change. 
E.G: {'Task name': {'id': 'title', 'type': 'title', 'title': [{'type': 'text', 'text': {'content': 'Send Notion Components to LF', 'link': null}}]}}","load_from_db":false,"title_case":false,"input_types":["Text"]},"_type":"CustomComponent"},"description":"A component for creating Notion pages.","icon":"NotionDirectoryLoader","base_classes":["object","str","Text"],"display_name":"Create Page [Notion] ","documentation":"https://docs.langflow.org/integrations/notion/page-create","custom_fields":{"database_id":null,"notion_secret":null,"properties":null},"output_types":["Text"],"field_formatters":{},"frozen":false,"field_order":[],"beta":false},"id":"CustomComponent-PU0K5","description":"A component for creating Notion pages.","display_name":"Create Page [Notion] "},"selected":false,"width":384,"height":477,"positionAbsolute":{"x":-3077.2269116193215,"y":-960.9450220159636},"dragging":false},{"id":"CustomComponent-YODla","type":"genericNode","position":{"x":-3485.297183150799,"y":-362.8525892356713},"data":{"type":"CustomComponent","node":{"template":{"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"import requests\r\nfrom typing import Dict\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\n\r\nclass NotionDatabaseProperties(CustomComponent):\r\n display_name = \"List Database Properties [Notion]\"\r\n description = \"Retrieve properties of a Notion database.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/list-database-properties\"\r\n icon = \"NotionDirectoryLoader\"\r\n \r\n def build_config(self):\r\n return {\r\n \"database_id\": {\r\n \"display_name\": \"Database ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion database.\",\r\n },\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def 
build(\r\n self,\r\n database_id: str,\r\n notion_secret: str,\r\n ) -> Record:\r\n url = f\"https://api.notion.com/v1/databases/{database_id}\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Notion-Version\": \"2022-06-28\", # Use the latest supported version\r\n }\r\n\r\n response = requests.get(url, headers=headers)\r\n response.raise_for_status()\r\n\r\n data = response.json()\r\n properties = data.get(\"properties\", {})\r\n\r\n record = Record(text=str(response.json()), data=properties)\r\n self.status = f\"Retrieved {len(properties)} properties from the Notion database.\\n {record.text}\"\r\n return record","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":true,"dynamic":true,"info":"","load_from_db":false,"title_case":false},"database_id":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"database_id","display_name":"Database ID","advanced":false,"dynamic":false,"info":"The ID of the Notion database.","load_from_db":true,"title_case":false,"input_types":["Text"],"value":"NOTION_NMSTX_DB_ID"},"notion_secret":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":true,"name":"notion_secret","display_name":"Notion Secret","advanced":false,"dynamic":false,"info":"The Notion integration token.","load_from_db":true,"title_case":false,"input_types":["Text"],"value":""},"_type":"CustomComponent"},"description":"Retrieve properties of a Notion database.","icon":"NotionDirectoryLoader","base_classes":["Record"],"display_name":"List Database Properties [Notion] ","documentation":"https://docs.langflow.org/integrations/notion/list-database-properties","custom_fields":{"database_id":null,"notion_secret":null},"output_types":["Record"],"field_formatters":{},"frozen":false,"field_order":[],"beta":false},"id":"CustomComponent-YODla","description":"Retrieve 
properties of a Notion database.","display_name":"List Database Properties [Notion] "},"selected":true,"width":384,"height":383,"dragging":false,"positionAbsolute":{"x":-3485.297183150799,"y":-362.8525892356713}},{"id":"CustomComponent-wHlSz","type":"genericNode","position":{"x":-2668.7714642455403,"y":-657.2376228212606},"data":{"type":"CustomComponent","node":{"template":{"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"import json\r\nimport requests\r\nfrom typing import Dict, Any\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\n\r\nclass NotionPageUpdate(CustomComponent):\r\n display_name = \"Update Page Property [Notion]\"\r\n description = \"Update the properties of a Notion page.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/page-update\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n def build_config(self):\r\n return {\r\n \"page_id\": {\r\n \"display_name\": \"Page ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion page to update.\",\r\n },\r\n \"properties\": {\r\n \"display_name\": \"Properties\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The properties to update on the page (as a JSON string).\",\r\n \"multiline\": True,\r\n },\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n page_id: str,\r\n properties: str,\r\n notion_secret: str,\r\n ) -> Record:\r\n url = f\"https://api.notion.com/v1/pages/{page_id}\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\", # Use the latest supported version\r\n }\r\n\r\n try:\r\n parsed_properties = json.loads(properties)\r\n except json.JSONDecodeError as e:\r\n raise ValueError(\"Invalid JSON format for 
properties\") from e\r\n\r\n data = {\r\n \"properties\": parsed_properties\r\n }\r\n\r\n response = requests.patch(url, headers=headers, json=data)\r\n response.raise_for_status()\r\n\r\n updated_page = response.json()\r\n\r\n output = \"Updated page properties:\\n\"\r\n for prop_name, prop_value in updated_page[\"properties\"].items():\r\n output += f\"{prop_name}: {prop_value}\\n\"\r\n\r\n self.status = output\r\n return Record(data=updated_page)","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":true,"dynamic":true,"info":"","load_from_db":false,"title_case":false},"notion_secret":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":true,"name":"notion_secret","display_name":"Notion Secret","advanced":false,"dynamic":false,"info":"The Notion integration token.","load_from_db":true,"title_case":false,"input_types":["Text"],"value":""},"page_id":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"page_id","display_name":"Page ID","advanced":false,"dynamic":false,"info":"The ID of the Notion page to update.","load_from_db":false,"title_case":false,"input_types":["Text"]},"properties":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"fileTypes":[],"file_path":"","password":false,"name":"properties","display_name":"Properties","advanced":false,"dynamic":false,"info":"The properties to update on the page (as a JSON string).","load_from_db":false,"title_case":false,"input_types":["Text"],"value":"{ \"title\": [ { \"text\": { \"content\": \"Test Page\" } } ] }"},"_type":"CustomComponent"},"description":"Update the properties of a Notion page.","icon":"NotionDirectoryLoader","base_classes":["Record"],"display_name":"Update Page Property 
[Notion]","documentation":"https://docs.langflow.org/integrations/notion/page-update","custom_fields":{"page_id":null,"properties":null,"notion_secret":null},"output_types":["Record"],"field_formatters":{},"frozen":false,"field_order":[],"beta":false},"id":"CustomComponent-wHlSz","description":"Update the properties of a Notion page.","display_name":"Update Page Property [Notion]"},"selected":false,"width":384,"height":477,"dragging":false,"positionAbsolute":{"x":-2668.7714642455403,"y":-657.2376228212606}},{"id":"CustomComponent-oelYw","type":"genericNode","position":{"x":-2253.1007124701327,"y":-448.47240118604134},"data":{"type":"CustomComponent","node":{"template":{"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"import requests\r\nfrom typing import Dict, Any\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\n\r\nclass NotionPageContent(CustomComponent):\r\n display_name = \"Page Content Viewer [Notion]\"\r\n description = \"Retrieve the content of a Notion page as plain text.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/page-content-viewer\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n def build_config(self):\r\n return {\r\n \"page_id\": {\r\n \"display_name\": \"Page ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion page to retrieve.\",\r\n },\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n page_id: str,\r\n notion_secret: str,\r\n ) -> Record:\r\n blocks_url = f\"https://api.notion.com/v1/blocks/{page_id}/children?page_size=100\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Notion-Version\": \"2022-06-28\", # Use the latest supported version\r\n }\r\n\r\n # Retrieve the child blocks\r\n blocks_response = 
requests.get(blocks_url, headers=headers)\r\n blocks_response.raise_for_status()\r\n blocks_data = blocks_response.json()\r\n\r\n # Parse the blocks and extract the content as plain text\r\n content = self.parse_blocks(blocks_data[\"results\"])\r\n\r\n self.status = content\r\n return Record(data={\"content\": content}, text=content)\r\n\r\n def parse_blocks(self, blocks: list) -> str:\r\n content = \"\"\r\n for block in blocks:\r\n block_type = block[\"type\"]\r\n if block_type in [\"paragraph\", \"heading_1\", \"heading_2\", \"heading_3\", \"quote\"]:\r\n content += self.parse_rich_text(block[block_type][\"rich_text\"]) + \"\\n\\n\"\r\n elif block_type in [\"bulleted_list_item\", \"numbered_list_item\"]:\r\n content += self.parse_rich_text(block[block_type][\"rich_text\"]) + \"\\n\"\r\n elif block_type == \"to_do\":\r\n content += self.parse_rich_text(block[\"to_do\"][\"rich_text\"]) + \"\\n\"\r\n elif block_type == \"code\":\r\n content += self.parse_rich_text(block[\"code\"][\"rich_text\"]) + \"\\n\\n\"\r\n elif block_type == \"image\":\r\n content += f\"[Image: {block['image']['external']['url']}]\\n\\n\"\r\n elif block_type == \"divider\":\r\n content += \"---\\n\\n\"\r\n return content.strip()\r\n\r\n def parse_rich_text(self, rich_text: list) -> str:\r\n text = \"\"\r\n for segment in rich_text:\r\n text += segment[\"plain_text\"]\r\n return text","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":true,"dynamic":true,"info":"","load_from_db":false,"title_case":false},"notion_secret":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":true,"name":"notion_secret","display_name":"Notion Secret","advanced":false,"dynamic":false,"info":"The Notion integration 
token.","load_from_db":true,"title_case":false,"input_types":["Text"],"value":""},"page_id":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"page_id","display_name":"Page ID","advanced":false,"dynamic":false,"info":"The ID of the Notion page to retrieve.","load_from_db":false,"title_case":false,"input_types":["Text"]},"_type":"CustomComponent"},"description":"Retrieve the content of a Notion page as plain text.","icon":"NotionDirectoryLoader","base_classes":["Record"],"display_name":"Page Content Viewer [Notion] ","documentation":"https://docs.langflow.org/integrations/notion/page-content-viewer","custom_fields":{"page_id":null,"notion_secret":null},"output_types":["Record"],"field_formatters":{},"frozen":false,"field_order":[],"beta":false},"id":"CustomComponent-oelYw","description":"Retrieve the content of a Notion page as plain text.","display_name":"Page Content Viewer [Notion] "},"selected":false,"width":384,"height":383,"positionAbsolute":{"x":-2253.1007124701327,"y":-448.47240118604134},"dragging":false},{"id":"CustomComponent-Pn52w","type":"genericNode","position":{"x":-3070.9222948695096,"y":-472.4537855763852},"data":{"type":"CustomComponent","node":{"template":{"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"import requests\r\nimport json\r\nfrom typing import Dict, Any, List\r\nfrom langflow.custom import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\nclass NotionListPages(CustomComponent):\r\n display_name = \"List Pages [Notion]\"\r\n description = (\r\n \"Query a Notion database with filtering and sorting. \"\r\n \"The input should be a JSON string containing the 'filter' and 'sorts' objects. 
\"\r\n \"Example input:\\n\"\r\n '{\"filter\": {\"property\": \"Status\", \"select\": {\"equals\": \"Done\"}}, \"sorts\": [{\"timestamp\": \"created_time\", \"direction\": \"descending\"}]}'\r\n )\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/list-pages\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n field_order = [\r\n \"notion_secret\",\r\n \"database_id\",\r\n \"query_payload\",\r\n ]\r\n\r\n def build_config(self):\r\n return {\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n \"database_id\": {\r\n \"display_name\": \"Database ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion database to query.\",\r\n },\r\n \"query_payload\": {\r\n \"display_name\": \"Database query\",\r\n \"field_type\": \"str\",\r\n \"info\": \"A JSON string containing the filters that will be used for querying the database. EG: {'filter': {'property': 'Status', 'status': {'equals': 'In progress'}}}\",\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n notion_secret: str,\r\n database_id: str,\r\n query_payload: str = \"{}\",\r\n ) -> List[Record]:\r\n try:\r\n query_data = json.loads(query_payload)\r\n filter_obj = query_data.get(\"filter\")\r\n sorts = query_data.get(\"sorts\", [])\r\n\r\n url = f\"https://api.notion.com/v1/databases/{database_id}/query\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n data = {\r\n \"sorts\": sorts,\r\n }\r\n\r\n if filter_obj:\r\n data[\"filter\"] = filter_obj\r\n\r\n response = requests.post(url, headers=headers, json=data)\r\n response.raise_for_status()\r\n\r\n results = response.json()\r\n records = []\r\n combined_text = f\"Pages found: {len(results['results'])}\\n\\n\"\r\n for page in results['results']:\r\n page_data = {\r\n 'id': page['id'],\r\n 'url': 
page['url'],\r\n 'created_time': page['created_time'],\r\n 'last_edited_time': page['last_edited_time'],\r\n 'properties': page['properties'],\r\n }\r\n\r\n text = (\r\n f\"id: {page['id']}\\n\"\r\n f\"url: {page['url']}\\n\"\r\n f\"created_time: {page['created_time']}\\n\"\r\n f\"last_edited_time: {page['last_edited_time']}\\n\"\r\n f\"properties: {json.dumps(page['properties'], indent=2)}\\n\\n\"\r\n )\r\n\r\n combined_text += text\r\n records.append(Record(text=text, data=page_data))\r\n \r\n self.status = combined_text.strip()\r\n return records\r\n\r\n except Exception as e:\r\n self.status = f\"An error occurred: {str(e)}\"\r\n return [Record(text=self.status, data=[])]","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":true,"dynamic":true,"info":"","load_from_db":false,"title_case":false},"database_id":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"database_id","display_name":"Database ID","advanced":false,"dynamic":false,"info":"The ID of the Notion database to query.","load_from_db":true,"title_case":false,"input_types":["Text"],"value":"NOTION_NMSTX_DB_ID"},"notion_secret":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":true,"name":"notion_secret","display_name":"Notion Secret","advanced":false,"dynamic":false,"info":"The Notion integration token.","load_from_db":true,"title_case":false,"input_types":["Text"],"value":""},"query_payload":{"type":"str","required":false,"placeholder":"","list":false,"show":true,"multiline":false,"value":{},"fileTypes":[],"file_path":"","password":false,"name":"query_payload","display_name":"Database query","advanced":false,"dynamic":false,"info":"A JSON string containing the filters that will be used for querying the database. 
EG: {'filter': {'property': 'Status', 'status': {'equals': 'In progress'}}}","load_from_db":false,"title_case":false,"input_types":["Text"]},"_type":"CustomComponent"},"description":"Query a Notion database with filtering and sorting. The input should be a JSON string containing the 'filter' and 'sorts' objects. Example input:\n{\"filter\": {\"property\": \"Status\", \"select\": {\"equals\": \"Done\"}}, \"sorts\": [{\"timestamp\": \"created_time\", \"direction\": \"descending\"}]}","icon":"NotionDirectoryLoader","base_classes":["Record"],"display_name":"List Pages [Notion] ","documentation":"https://docs.langflow.org/integrations/notion/list-pages","custom_fields":{"notion_secret":null,"database_id":null,"query_payload":null},"output_types":["Record"],"field_formatters":{},"frozen":false,"field_order":["notion_secret","database_id","query_payload"],"beta":false},"id":"CustomComponent-Pn52w","description":"Query a Notion database with filtering and sorting. The input should be a JSON string containing the 'filter' and 'sorts' objects. 
Example input:\n{\"filter\": {\"property\": \"Status\", \"select\": {\"equals\": \"Done\"}}, \"sorts\": [{\"timestamp\": \"created_time\", \"direction\": \"descending\"}]}","display_name":"List Pages [Notion] "},"selected":false,"width":384,"height":517,"positionAbsolute":{"x":-3070.9222948695096,"y":-472.4537855763852},"dragging":false},{"id":"CustomComponent-I8Dec","type":"genericNode","position":{"x":-2256.686402636563,"y":-963.4541117792749},"data":{"type":"CustomComponent","node":{"template":{"block_id":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"block_id","display_name":"Page/Block ID","advanced":false,"dynamic":false,"info":"The ID of the page/block to add the content.","load_from_db":false,"title_case":false,"input_types":["Text"]},"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"import json\r\nfrom typing import List, Dict, Any\r\nfrom markdown import markdown\r\nfrom bs4 import BeautifulSoup\r\nimport requests\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\nclass AddContentToPage(CustomComponent):\r\n display_name = \"Add Content to Page [Notion]\"\r\n description = \"Convert markdown text to Notion blocks and append them to a Notion page.\"\r\n documentation: str = \"https://developers.notion.com/reference/patch-block-children\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n def build_config(self):\r\n return {\r\n \"markdown_text\": {\r\n \"display_name\": \"Markdown Text\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The markdown text to convert to Notion blocks.\",\r\n \"multiline\": True,\r\n },\r\n \"block_id\": {\r\n \"display_name\": \"Page/Block ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the page/block to add the content.\",\r\n },\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n 
\"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def build(self, markdown_text: str, block_id: str, notion_secret: str) -> Record:\r\n html_text = markdown(markdown_text)\r\n soup = BeautifulSoup(html_text, 'html.parser')\r\n blocks = self.process_node(soup)\r\n\r\n url = f\"https://api.notion.com/v1/blocks/{block_id}/children\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n data = {\r\n \"children\": blocks,\r\n }\r\n\r\n response = requests.patch(url, headers=headers, json=data)\r\n self.status = str(response.json())\r\n response.raise_for_status()\r\n\r\n result = response.json()\r\n self.status = f\"Appended {len(blocks)} blocks to page with ID: {block_id}\"\r\n return Record(data=result, text=json.dumps(result))\r\n\r\n def process_node(self, node):\r\n blocks = []\r\n if isinstance(node, str):\r\n text = node.strip()\r\n if text:\r\n if text.startswith('#'):\r\n heading_level = text.count('#', 0, 6)\r\n heading_text = text[heading_level:].strip()\r\n if heading_level == 1:\r\n blocks.append(self.create_block('heading_1', heading_text))\r\n elif heading_level == 2:\r\n blocks.append(self.create_block('heading_2', heading_text))\r\n elif heading_level == 3:\r\n blocks.append(self.create_block('heading_3', heading_text))\r\n else:\r\n blocks.append(self.create_block('paragraph', text))\r\n elif node.name == 'h1':\r\n blocks.append(self.create_block('heading_1', node.get_text(strip=True)))\r\n elif node.name == 'h2':\r\n blocks.append(self.create_block('heading_2', node.get_text(strip=True)))\r\n elif node.name == 'h3':\r\n blocks.append(self.create_block('heading_3', node.get_text(strip=True)))\r\n elif node.name == 'p':\r\n code_node = node.find('code')\r\n if code_node:\r\n code_text = code_node.get_text()\r\n language, code = self.extract_language_and_code(code_text)\r\n 
blocks.append(self.create_block('code', code, language=language))\r\n elif self.is_table(str(node)):\r\n blocks.extend(self.process_table(node))\r\n else:\r\n blocks.append(self.create_block('paragraph', node.get_text(strip=True)))\r\n elif node.name == 'ul':\r\n blocks.extend(self.process_list(node, 'bulleted_list_item'))\r\n elif node.name == 'ol':\r\n blocks.extend(self.process_list(node, 'numbered_list_item'))\r\n elif node.name == 'blockquote':\r\n blocks.append(self.create_block('quote', node.get_text(strip=True)))\r\n elif node.name == 'hr':\r\n blocks.append(self.create_block('divider', ''))\r\n elif node.name == 'img':\r\n blocks.append(self.create_block('image', '', image_url=node.get('src')))\r\n elif node.name == 'a':\r\n blocks.append(self.create_block('bookmark', node.get_text(strip=True), link_url=node.get('href')))\r\n elif node.name == 'table':\r\n blocks.extend(self.process_table(node))\r\n\r\n for child in node.children:\r\n if isinstance(child, str):\r\n continue\r\n blocks.extend(self.process_node(child))\r\n\r\n return blocks\r\n\r\n def extract_language_and_code(self, code_text):\r\n lines = code_text.split('\\n')\r\n language = lines[0].strip()\r\n code = '\\n'.join(lines[1:]).strip()\r\n return language, code\r\n\r\n def is_code_block(self, text):\r\n return text.startswith('```')\r\n\r\n def extract_code_block(self, text):\r\n lines = text.split('\\n')\r\n language = lines[0].strip('`').strip()\r\n code = '\\n'.join(lines[1:]).strip('`').strip()\r\n return language, code\r\n \r\n def is_table(self, text):\r\n rows = text.split('\\n')\r\n if len(rows) < 2:\r\n return False\r\n\r\n has_separator = False\r\n for i, row in enumerate(rows):\r\n if '|' in row:\r\n cells = [cell.strip() for cell in row.split('|')]\r\n cells = [cell for cell in cells if cell] # Remove empty cells\r\n if i == 1 and all(set(cell) <= set('-|') for cell in cells):\r\n has_separator = True\r\n elif not cells:\r\n return False\r\n\r\n return has_separator and len(rows) 
>= 3\r\n\r\n def process_list(self, node, list_type):\r\n blocks = []\r\n for item in node.find_all('li'):\r\n item_text = item.get_text(strip=True)\r\n checked = item_text.startswith('[x]')\r\n is_checklist = item_text.startswith('[ ]') or checked\r\n\r\n if is_checklist:\r\n item_text = item_text.replace('[x]', '').replace('[ ]', '').strip()\r\n blocks.append(self.create_block('to_do', item_text, checked=checked))\r\n else:\r\n blocks.append(self.create_block(list_type, item_text))\r\n return blocks\r\n\r\n def process_table(self, node):\r\n blocks = []\r\n header_row = node.find('thead').find('tr') if node.find('thead') else None\r\n body_rows = node.find('tbody').find_all('tr') if node.find('tbody') else []\r\n\r\n if header_row or body_rows:\r\n table_width = max(len(header_row.find_all(['th', 'td'])) if header_row else 0,\r\n max(len(row.find_all(['th', 'td'])) for row in body_rows))\r\n\r\n table_block = self.create_block('table', '', table_width=table_width, has_column_header=bool(header_row))\r\n blocks.append(table_block)\r\n\r\n if header_row:\r\n header_cells = [cell.get_text(strip=True) for cell in header_row.find_all(['th', 'td'])]\r\n header_row_block = self.create_block('table_row', header_cells)\r\n blocks.append(header_row_block)\r\n\r\n for row in body_rows:\r\n cells = [cell.get_text(strip=True) for cell in row.find_all(['th', 'td'])]\r\n row_block = self.create_block('table_row', cells)\r\n blocks.append(row_block)\r\n\r\n return blocks\r\n \r\n def create_block(self, block_type: str, content: str, **kwargs) -> Dict[str, Any]:\r\n block = {\r\n \"object\": \"block\",\r\n \"type\": block_type,\r\n block_type: {},\r\n }\r\n\r\n if block_type in [\"paragraph\", \"heading_1\", \"heading_2\", \"heading_3\", \"bulleted_list_item\", \"numbered_list_item\", \"quote\"]:\r\n block[block_type][\"rich_text\"] = [\r\n {\r\n \"type\": \"text\",\r\n \"text\": {\r\n \"content\": content,\r\n },\r\n }\r\n ]\r\n elif block_type == 'to_do':\r\n 
block[block_type][\"rich_text\"] = [\r\n {\r\n \"type\": \"text\",\r\n \"text\": {\r\n \"content\": content,\r\n },\r\n }\r\n ]\r\n block[block_type]['checked'] = kwargs.get('checked', False)\r\n elif block_type == 'code':\r\n block[block_type]['rich_text'] = [\r\n {\r\n \"type\": \"text\",\r\n \"text\": {\r\n \"content\": content,\r\n },\r\n }\r\n ]\r\n block[block_type]['language'] = kwargs.get('language', 'plain text')\r\n elif block_type == 'image':\r\n block[block_type] = {\r\n \"type\": \"external\",\r\n \"external\": {\r\n \"url\": kwargs.get('image_url', '')\r\n }\r\n }\r\n elif block_type == 'divider':\r\n pass\r\n elif block_type == 'bookmark':\r\n block[block_type]['url'] = kwargs.get('link_url', '')\r\n elif block_type == 'table':\r\n block[block_type]['table_width'] = kwargs.get('table_width', 0)\r\n block[block_type]['has_column_header'] = kwargs.get('has_column_header', False)\r\n block[block_type]['has_row_header'] = kwargs.get('has_row_header', False)\r\n elif block_type == 'table_row':\r\n block[block_type]['cells'] = [[{'type': 'text', 'text': {'content': cell}} for cell in content]]\r\n\r\n return block","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":true,"dynamic":true,"info":"","load_from_db":false,"title_case":false},"markdown_text":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"fileTypes":[],"file_path":"","password":false,"name":"markdown_text","display_name":"Markdown Text","advanced":false,"dynamic":false,"info":"The markdown text to convert to Notion blocks.","load_from_db":false,"title_case":false,"input_types":["Text"],"value":"# Heading 1\n\n## Heading 2\n\n### Heading 3\n\nThis is a regular paragraph.\n\nHere's another paragraph with an image:\n\n\n## Checklist\n- [x] Completed task\n- [ ] Incomplete task\n- [x] Another completed task\n\n## Numbered List\n1. First item\n2. Second item\n3. 
Third item\n\n## Bulleted List\n- Item 1\n- Item 2\n- Item 3\n\n## Code Block\n```python\ndef hello_world():\n print(\"Hello, World!\")\n```\n\n## Quote\n> This is a blockquote.\n> It can span multiple lines.\n\n## Horizontal Rule\n---\n\n\n## Link\n[Notion API Documentation](https://developers.notion.com)\n\n"},"notion_secret":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":true,"name":"notion_secret","display_name":"Notion Secret","advanced":false,"dynamic":false,"info":"The Notion integration token.","load_from_db":true,"title_case":false,"input_types":["Text"],"value":""},"_type":"CustomComponent"},"description":"Convert markdown text to Notion blocks and append them to a Notion page.","icon":"NotionDirectoryLoader","base_classes":["Record"],"display_name":"Add Content to Page [Notion] ","documentation":"https://developers.notion.com/reference/patch-block-children","custom_fields":{"markdown_text":null,"block_id":null,"notion_secret":null},"output_types":["Record"],"field_formatters":{},"frozen":false,"field_order":[],"beta":false,"official":false},"id":"CustomComponent-I8Dec"},"selected":false,"width":384,"height":497,"positionAbsolute":{"x":-2256.686402636563,"y":-963.4541117792749},"dragging":false},{"id":"CustomComponent-ZcsA9","type":"genericNode","position":{"x":-3488.029350341937,"y":-965.3756250644985},"data":{"type":"CustomComponent","node":{"template":{"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"import requests\r\nfrom typing import Dict, Any, List\r\nfrom langflow.custom import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\nclass NotionSearch(CustomComponent):\r\n display_name = \"Search Notion\"\r\n description = (\r\n \"Searches all pages and databases that have been shared with an integration.\"\r\n )\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/search\"\r\n icon 
= \"NotionDirectoryLoader\"\r\n\r\n field_order = [\r\n \"notion_secret\",\r\n \"query\",\r\n \"filter_value\",\r\n \"sort_direction\",\r\n ]\r\n\r\n def build_config(self):\r\n return {\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n \"query\": {\r\n \"display_name\": \"Search Query\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The text that the API compares page and database titles against.\",\r\n },\r\n \"filter_value\": {\r\n \"display_name\": \"Filter Type\",\r\n \"field_type\": \"str\",\r\n \"info\": \"Limits the results to either only pages or only databases.\",\r\n \"options\": [\"page\", \"database\"],\r\n \"default_value\": \"page\",\r\n },\r\n \"sort_direction\": {\r\n \"display_name\": \"Sort Direction\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The direction to sort the results.\",\r\n \"options\": [\"ascending\", \"descending\"],\r\n \"default_value\": \"descending\",\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n notion_secret: str,\r\n query: str = \"\",\r\n filter_value: str = \"page\",\r\n sort_direction: str = \"descending\",\r\n ) -> List[Record]:\r\n try:\r\n url = \"https://api.notion.com/v1/search\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n data = {\r\n \"query\": query,\r\n \"filter\": {\r\n \"value\": filter_value,\r\n \"property\": \"object\"\r\n },\r\n \"sort\":{\r\n \"direction\": sort_direction,\r\n \"timestamp\": \"last_edited_time\"\r\n }\r\n }\r\n\r\n response = requests.post(url, headers=headers, json=data)\r\n response.raise_for_status()\r\n\r\n results = response.json()\r\n records = []\r\n combined_text = f\"Results found: {len(results['results'])}\\n\\n\"\r\n for result in results['results']:\r\n result_data = {\r\n 'id': result['id'],\r\n 'type': result['object'],\r\n 
'last_edited_time': result['last_edited_time'],\r\n }\r\n \r\n if result['object'] == 'page':\r\n result_data['title_or_url'] = result['url']\r\n text = f\"id: {result['id']}\\ntitle_or_url: {result['url']}\\n\"\r\n elif result['object'] == 'database':\r\n if 'title' in result and isinstance(result['title'], list) and len(result['title']) > 0:\r\n result_data['title_or_url'] = result['title'][0]['plain_text']\r\n text = f\"id: {result['id']}\\ntitle_or_url: {result['title'][0]['plain_text']}\\n\"\r\n else:\r\n result_data['title_or_url'] = \"N/A\"\r\n text = f\"id: {result['id']}\\ntitle_or_url: N/A\\n\"\r\n\r\n text += f\"type: {result['object']}\\nlast_edited_time: {result['last_edited_time']}\\n\\n\"\r\n combined_text += text\r\n records.append(Record(text=text, data=result_data))\r\n \r\n self.status = combined_text\r\n return records\r\n\r\n except Exception as e:\r\n self.status = f\"An error occurred: {str(e)}\"\r\n return [Record(text=self.status, data=[])]","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":true,"dynamic":true,"info":"","load_from_db":false,"title_case":false},"filter_value":{"type":"str","required":false,"placeholder":"","list":true,"show":true,"multiline":false,"value":"database","fileTypes":[],"file_path":"","password":false,"options":["page","database"],"name":"filter_value","display_name":"Filter Type","advanced":false,"dynamic":false,"info":"Limits the results to either only pages or only databases.","load_from_db":false,"title_case":false,"input_types":["Text"]},"notion_secret":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":true,"name":"notion_secret","display_name":"Notion Secret","advanced":false,"dynamic":false,"info":"The Notion integration 
token.","load_from_db":true,"title_case":false,"input_types":["Text"],"value":""},"query":{"type":"str","required":false,"placeholder":"","list":false,"show":true,"multiline":false,"value":"","fileTypes":[],"file_path":"","password":false,"name":"query","display_name":"Search Query","advanced":false,"dynamic":false,"info":"The text that the API compares page and database titles against.","load_from_db":false,"title_case":false,"input_types":["Text"]},"sort_direction":{"type":"str","required":false,"placeholder":"","list":true,"show":true,"multiline":false,"value":"descending","fileTypes":[],"file_path":"","password":false,"options":["ascending","descending"],"name":"sort_direction","display_name":"Sort Direction","advanced":false,"dynamic":false,"info":"The direction to sort the results.","load_from_db":false,"title_case":false,"input_types":["Text"]},"_type":"CustomComponent"},"description":"Searches all pages and databases that have been shared with an integration.","icon":"NotionDirectoryLoader","base_classes":["Record"],"display_name":"Search [Notion]","documentation":"https://docs.langflow.org/integrations/notion/search","custom_fields":{"notion_secret":null,"query":null,"filter_value":null,"sort_direction":null},"output_types":["Record"],"field_formatters":{},"frozen":false,"field_order":["notion_secret","query","filter_value","sort_direction"],"beta":false},"id":"CustomComponent-ZcsA9","description":"Searches all pages and databases that have been shared with an integration.","display_name":"Search [Notion]"},"selected":false,"width":384,"height":591,"positionAbsolute":{"x":-3488.029350341937,"y":-965.3756250644985},"dragging":false}],"edges":[],"viewport":{"x":2623.378922967084,"y":696.8541079344027,"zoom":0.5981384177708997}},"description":"A Bundle containing Notion components for Page and Database manipulation. 
You can list pages, users, and databases, update properties, create new pages, and add content to Notion Pages.","name":"Notion - Components","last_tested_version":"1.0.0a36","is_component":false}
\ No newline at end of file
+{
+ "id": "7cd51434-9767-450f-8742-27857367f8c2",
+ "data": {
+ "nodes": [
+ {
+ "id": "RecordsToText-Q69g5",
+ "type": "genericNode",
+ "position": { "x": -2671.5528488127866, "y": -963.4266471378126 },
+ "data": {
+ "type": "RecordsToText",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "import requests\r\nfrom typing import List\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\n\r\nclass NotionUserList(CustomComponent):\r\n display_name = \"List Users [Notion]\"\r\n description = \"Retrieve users from Notion.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/list-users\"\r\n icon = \"NotionDirectoryLoader\"\r\n \r\n def build_config(self):\r\n return {\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n notion_secret: str,\r\n ) -> List[Record]:\r\n url = \"https://api.notion.com/v1/users\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n response = requests.get(url, headers=headers)\r\n response.raise_for_status()\r\n\r\n data = response.json()\r\n results = data['results']\r\n\r\n records = []\r\n for user in results:\r\n id = user['id']\r\n type = user['type']\r\n name = user.get('name', '')\r\n avatar_url = user.get('avatar_url', '')\r\n\r\n record_data = {\r\n \"id\": id,\r\n \"type\": type,\r\n \"name\": name,\r\n \"avatar_url\": avatar_url,\r\n }\r\n\r\n output = \"User:\\n\"\r\n for key, value in record_data.items():\r\n output += f\"{key.replace('_', ' ').title()}: {value}\\n\"\r\n output += \"________________________\\n\"\r\n\r\n record = Record(text=output, data=record_data)\r\n records.append(record)\r\n\r\n self.status = \"\\n\".join(record.text for record in records)\r\n return records",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "notion_secret": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "notion_secret",
+ "display_name": "Notion Secret",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The Notion integration token.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": ""
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "Retrieve users from Notion.",
+ "icon": "NotionDirectoryLoader",
+ "base_classes": ["Record"],
+ "display_name": "List Users [Notion] ",
+ "documentation": "https://docs.langflow.org/integrations/notion/list-users",
+ "custom_fields": { "notion_secret": null },
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "RecordsToText-Q69g5",
+ "description": "Retrieve users from Notion.",
+ "display_name": "List Users [Notion] "
+ },
+ "selected": false,
+ "width": 384,
+ "height": 289,
+ "dragging": false,
+ "positionAbsolute": {
+ "x": -2671.5528488127866,
+ "y": -963.4266471378126
+ }
+ },
+ {
+ "id": "CustomComponent-PU0K5",
+ "type": "genericNode",
+ "position": { "x": -3077.2269116193215, "y": -960.9450220159636 },
+ "data": {
+ "type": "CustomComponent",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "import json\r\nfrom typing import Optional\r\n\r\nimport requests\r\nfrom langflow.custom import CustomComponent\r\n\r\n\r\nclass NotionPageCreator(CustomComponent):\r\n display_name = \"Create Page [Notion]\"\r\n description = \"A component for creating Notion pages.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/page-create\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n def build_config(self):\r\n return {\r\n \"database_id\": {\r\n \"display_name\": \"Database ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion database.\",\r\n },\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n \"properties\": {\r\n \"display_name\": \"Properties\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The properties of the new page. Depending on your database setup, this can change. E.G: {'Task name': {'id': 'title', 'type': 'title', 'title': [{'type': 'text', 'text': {'content': 'Send Notion Components to LF', 'link': null}}]}}\",\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n database_id: str,\r\n notion_secret: str,\r\n properties: str = '{\"Task name\": {\"id\": \"title\", \"type\": \"title\", \"title\": [{\"type\": \"text\", \"text\": {\"content\": \"Send Notion Components to LF\", \"link\": null}}]}}',\r\n ) -> str:\r\n if not database_id or not properties:\r\n raise ValueError(\"Invalid input. 
Please provide 'database_id' and 'properties'.\")\r\n\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n data = {\r\n \"parent\": {\"database_id\": database_id},\r\n \"properties\": json.loads(properties),\r\n }\r\n\r\n response = requests.post(\"https://api.notion.com/v1/pages\", headers=headers, json=data)\r\n\r\n if response.status_code == 200:\r\n page_id = response.json()[\"id\"]\r\n self.status = f\"Successfully created Notion page with ID: {page_id}\\n {str(response.json())}\"\r\n return response.json()\r\n else:\r\n error_message = f\"Failed to create Notion page. Status code: {response.status_code}, Error: {response.text}\"\r\n self.status = error_message\r\n raise Exception(error_message)",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "database_id": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "database_id",
+ "display_name": "Database ID",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The ID of the Notion database.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "notion_secret": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "notion_secret",
+ "display_name": "Notion Secret",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The Notion integration token.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": ""
+ },
+ "properties": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": "{\"Task name\": {\"id\": \"title\", \"type\": \"title\", \"title\": [{\"type\": \"text\", \"text\": {\"content\": \"Send Notion Components to LF\", \"link\": null}}]}}",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "properties",
+ "display_name": "Properties",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The properties of the new page. Depending on your database setup, this can change. E.G: {'Task name': {'id': 'title', 'type': 'title', 'title': [{'type': 'text', 'text': {'content': 'Send Notion Components to LF', 'link': null}}]}}",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "A component for creating Notion pages.",
+ "icon": "NotionDirectoryLoader",
+ "base_classes": ["object", "str", "Text"],
+ "display_name": "Create Page [Notion] ",
+ "documentation": "https://docs.langflow.org/integrations/notion/page-create",
+ "custom_fields": {
+ "database_id": null,
+ "notion_secret": null,
+ "properties": null
+ },
+ "output_types": ["Text"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "CustomComponent-PU0K5",
+ "description": "A component for creating Notion pages.",
+ "display_name": "Create Page [Notion] "
+ },
+ "selected": false,
+ "width": 384,
+ "height": 477,
+ "positionAbsolute": {
+ "x": -3077.2269116193215,
+ "y": -960.9450220159636
+ },
+ "dragging": false
+ },
+ {
+ "id": "CustomComponent-YODla",
+ "type": "genericNode",
+ "position": { "x": -3485.297183150799, "y": -362.8525892356713 },
+ "data": {
+ "type": "CustomComponent",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "import requests\r\nfrom typing import Dict\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\n\r\nclass NotionDatabaseProperties(CustomComponent):\r\n display_name = \"List Database Properties [Notion]\"\r\n description = \"Retrieve properties of a Notion database.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/list-database-properties\"\r\n icon = \"NotionDirectoryLoader\"\r\n \r\n def build_config(self):\r\n return {\r\n \"database_id\": {\r\n \"display_name\": \"Database ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion database.\",\r\n },\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n database_id: str,\r\n notion_secret: str,\r\n ) -> Record:\r\n url = f\"https://api.notion.com/v1/databases/{database_id}\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Notion-Version\": \"2022-06-28\", # Use the latest supported version\r\n }\r\n\r\n response = requests.get(url, headers=headers)\r\n response.raise_for_status()\r\n\r\n data = response.json()\r\n properties = data.get(\"properties\", {})\r\n\r\n record = Record(text=str(response.json()), data=properties)\r\n self.status = f\"Retrieved {len(properties)} properties from the Notion database.\\n {record.text}\"\r\n return record",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "database_id": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "database_id",
+ "display_name": "Database ID",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The ID of the Notion database.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "NOTION_NMSTX_DB_ID"
+ },
+ "notion_secret": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "notion_secret",
+ "display_name": "Notion Secret",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The Notion integration token.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": ""
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "Retrieve properties of a Notion database.",
+ "icon": "NotionDirectoryLoader",
+ "base_classes": ["Record"],
+ "display_name": "List Database Properties [Notion] ",
+ "documentation": "https://docs.langflow.org/integrations/notion/list-database-properties",
+ "custom_fields": { "database_id": null, "notion_secret": null },
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "CustomComponent-YODla",
+ "description": "Retrieve properties of a Notion database.",
+ "display_name": "List Database Properties [Notion] "
+ },
+ "selected": true,
+ "width": 384,
+ "height": 383,
+ "dragging": false,
+ "positionAbsolute": { "x": -3485.297183150799, "y": -362.8525892356713 }
+ },
+ {
+ "id": "CustomComponent-wHlSz",
+ "type": "genericNode",
+ "position": { "x": -2668.7714642455403, "y": -657.2376228212606 },
+ "data": {
+ "type": "CustomComponent",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "import json\r\nimport requests\r\nfrom typing import Dict, Any\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\n\r\nclass NotionPageUpdate(CustomComponent):\r\n display_name = \"Update Page Property [Notion]\"\r\n description = \"Update the properties of a Notion page.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/page-update\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n def build_config(self):\r\n return {\r\n \"page_id\": {\r\n \"display_name\": \"Page ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion page to update.\",\r\n },\r\n \"properties\": {\r\n \"display_name\": \"Properties\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The properties to update on the page (as a JSON string).\",\r\n \"multiline\": True,\r\n },\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n page_id: str,\r\n properties: str,\r\n notion_secret: str,\r\n ) -> Record:\r\n url = f\"https://api.notion.com/v1/pages/{page_id}\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\", # Use the latest supported version\r\n }\r\n\r\n try:\r\n parsed_properties = json.loads(properties)\r\n except json.JSONDecodeError as e:\r\n raise ValueError(\"Invalid JSON format for properties\") from e\r\n\r\n data = {\r\n \"properties\": parsed_properties\r\n }\r\n\r\n response = requests.patch(url, headers=headers, json=data)\r\n response.raise_for_status()\r\n\r\n updated_page = response.json()\r\n\r\n output = \"Updated page properties:\\n\"\r\n for prop_name, prop_value in updated_page[\"properties\"].items():\r\n output += f\"{prop_name}: {prop_value}\\n\"\r\n\r\n self.status = output\r\n return Record(data=updated_page)",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "notion_secret": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "notion_secret",
+ "display_name": "Notion Secret",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The Notion integration token.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": ""
+ },
+ "page_id": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "page_id",
+ "display_name": "Page ID",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The ID of the Notion page to update.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "properties": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "properties",
+ "display_name": "Properties",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The properties to update on the page (as a JSON string).",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "{ \"title\": [ { \"text\": { \"content\": \"Test Page\" } } ] }"
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "Update the properties of a Notion page.",
+ "icon": "NotionDirectoryLoader",
+ "base_classes": ["Record"],
+ "display_name": "Update Page Property [Notion]",
+ "documentation": "https://docs.langflow.org/integrations/notion/page-update",
+ "custom_fields": {
+ "page_id": null,
+ "properties": null,
+ "notion_secret": null
+ },
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "CustomComponent-wHlSz",
+ "description": "Update the properties of a Notion page.",
+ "display_name": "Update Page Property [Notion]"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 477,
+ "dragging": false,
+ "positionAbsolute": {
+ "x": -2668.7714642455403,
+ "y": -657.2376228212606
+ }
+ },
+ {
+ "id": "CustomComponent-oelYw",
+ "type": "genericNode",
+ "position": { "x": -2253.1007124701327, "y": -448.47240118604134 },
+ "data": {
+ "type": "CustomComponent",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "import requests\r\nfrom typing import Dict, Any\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\n\r\nclass NotionPageContent(CustomComponent):\r\n display_name = \"Page Content Viewer [Notion]\"\r\n description = \"Retrieve the content of a Notion page as plain text.\"\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/page-content-viewer\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n def build_config(self):\r\n return {\r\n \"page_id\": {\r\n \"display_name\": \"Page ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion page to retrieve.\",\r\n },\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n page_id: str,\r\n notion_secret: str,\r\n ) -> Record:\r\n blocks_url = f\"https://api.notion.com/v1/blocks/{page_id}/children?page_size=100\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Notion-Version\": \"2022-06-28\", # Use the latest supported version\r\n }\r\n\r\n # Retrieve the child blocks\r\n blocks_response = requests.get(blocks_url, headers=headers)\r\n blocks_response.raise_for_status()\r\n blocks_data = blocks_response.json()\r\n\r\n # Parse the blocks and extract the content as plain text\r\n content = self.parse_blocks(blocks_data[\"results\"])\r\n\r\n self.status = content\r\n return Record(data={\"content\": content}, text=content)\r\n\r\n def parse_blocks(self, blocks: list) -> str:\r\n content = \"\"\r\n for block in blocks:\r\n block_type = block[\"type\"]\r\n if block_type in [\"paragraph\", \"heading_1\", \"heading_2\", \"heading_3\", \"quote\"]:\r\n content += self.parse_rich_text(block[block_type][\"rich_text\"]) + \"\\n\\n\"\r\n elif block_type in [\"bulleted_list_item\", \"numbered_list_item\"]:\r\n content += self.parse_rich_text(block[block_type][\"rich_text\"]) + \"\\n\"\r\n elif block_type == \"to_do\":\r\n content += self.parse_rich_text(block[\"to_do\"][\"rich_text\"]) + \"\\n\"\r\n elif block_type == \"code\":\r\n content += self.parse_rich_text(block[\"code\"][\"rich_text\"]) + \"\\n\\n\"\r\n elif block_type == \"image\":\r\n content += f\"[Image: {block['image']['external']['url']}]\\n\\n\"\r\n elif block_type == \"divider\":\r\n content += \"---\\n\\n\"\r\n return content.strip()\r\n\r\n def parse_rich_text(self, rich_text: list) -> str:\r\n text = \"\"\r\n for segment in rich_text:\r\n text += segment[\"plain_text\"]\r\n return text",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "notion_secret": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "notion_secret",
+ "display_name": "Notion Secret",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The Notion integration token.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": ""
+ },
+ "page_id": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "page_id",
+ "display_name": "Page ID",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The ID of the Notion page to retrieve.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "Retrieve the content of a Notion page as plain text.",
+ "icon": "NotionDirectoryLoader",
+ "base_classes": ["Record"],
+ "display_name": "Page Content Viewer [Notion] ",
+ "documentation": "https://docs.langflow.org/integrations/notion/page-content-viewer",
+ "custom_fields": { "page_id": null, "notion_secret": null },
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false
+ },
+ "id": "CustomComponent-oelYw",
+ "description": "Retrieve the content of a Notion page as plain text.",
+ "display_name": "Page Content Viewer [Notion] "
+ },
+ "selected": false,
+ "width": 384,
+ "height": 383,
+ "positionAbsolute": {
+ "x": -2253.1007124701327,
+ "y": -448.47240118604134
+ },
+ "dragging": false
+ },
+ {
+ "id": "CustomComponent-Pn52w",
+ "type": "genericNode",
+ "position": { "x": -3070.9222948695096, "y": -472.4537855763852 },
+ "data": {
+ "type": "CustomComponent",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "import requests\r\nimport json\r\nfrom typing import Dict, Any, List\r\nfrom langflow.custom import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\nclass NotionListPages(CustomComponent):\r\n display_name = \"List Pages [Notion]\"\r\n description = (\r\n \"Query a Notion database with filtering and sorting. \"\r\n \"The input should be a JSON string containing the 'filter' and 'sorts' objects. \"\r\n \"Example input:\\n\"\r\n '{\"filter\": {\"property\": \"Status\", \"select\": {\"equals\": \"Done\"}}, \"sorts\": [{\"timestamp\": \"created_time\", \"direction\": \"descending\"}]}'\r\n )\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/list-pages\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n field_order = [\r\n \"notion_secret\",\r\n \"database_id\",\r\n \"query_payload\",\r\n ]\r\n\r\n def build_config(self):\r\n return {\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n \"database_id\": {\r\n \"display_name\": \"Database ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the Notion database to query.\",\r\n },\r\n \"query_payload\": {\r\n \"display_name\": \"Database query\",\r\n \"field_type\": \"str\",\r\n \"info\": \"A JSON string containing the filters that will be used for querying the database. EG: {'filter': {'property': 'Status', 'status': {'equals': 'In progress'}}}\",\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n notion_secret: str,\r\n database_id: str,\r\n query_payload: str = \"{}\",\r\n ) -> List[Record]:\r\n try:\r\n query_data = json.loads(query_payload)\r\n filter_obj = query_data.get(\"filter\")\r\n sorts = query_data.get(\"sorts\", [])\r\n\r\n url = f\"https://api.notion.com/v1/databases/{database_id}/query\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n data = {\r\n \"sorts\": sorts,\r\n }\r\n\r\n if filter_obj:\r\n data[\"filter\"] = filter_obj\r\n\r\n response = requests.post(url, headers=headers, json=data)\r\n response.raise_for_status()\r\n\r\n results = response.json()\r\n records = []\r\n combined_text = f\"Pages found: {len(results['results'])}\\n\\n\"\r\n for page in results['results']:\r\n page_data = {\r\n 'id': page['id'],\r\n 'url': page['url'],\r\n 'created_time': page['created_time'],\r\n 'last_edited_time': page['last_edited_time'],\r\n 'properties': page['properties'],\r\n }\r\n\r\n text = (\r\n f\"id: {page['id']}\\n\"\r\n f\"url: {page['url']}\\n\"\r\n f\"created_time: {page['created_time']}\\n\"\r\n f\"last_edited_time: {page['last_edited_time']}\\n\"\r\n f\"properties: {json.dumps(page['properties'], indent=2)}\\n\\n\"\r\n )\r\n\r\n combined_text += text\r\n records.append(Record(text=text, data=page_data))\r\n \r\n self.status = combined_text.strip()\r\n return records\r\n\r\n except Exception as e:\r\n self.status = f\"An error occurred: {str(e)}\"\r\n return [Record(text=self.status, data=[])]",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "database_id": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "database_id",
+ "display_name": "Database ID",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The ID of the Notion database to query.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "NOTION_NMSTX_DB_ID"
+ },
+ "notion_secret": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "notion_secret",
+ "display_name": "Notion Secret",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The Notion integration token.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": ""
+ },
+ "query_payload": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": "{}",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "query_payload",
+ "display_name": "Database query",
+ "advanced": false,
+ "dynamic": false,
+ "info": "A JSON string containing the filters that will be used for querying the database. EG: {'filter': {'property': 'Status', 'status': {'equals': 'In progress'}}}",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "Query a Notion database with filtering and sorting. The input should be a JSON string containing the 'filter' and 'sorts' objects. Example input:\n{\"filter\": {\"property\": \"Status\", \"select\": {\"equals\": \"Done\"}}, \"sorts\": [{\"timestamp\": \"created_time\", \"direction\": \"descending\"}]}",
+ "icon": "NotionDirectoryLoader",
+ "base_classes": ["Record"],
+ "display_name": "List Pages [Notion] ",
+ "documentation": "https://docs.langflow.org/integrations/notion/list-pages",
+ "custom_fields": {
+ "notion_secret": null,
+ "database_id": null,
+ "query_payload": null
+ },
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": ["notion_secret", "database_id", "query_payload"],
+ "beta": false
+ },
+ "id": "CustomComponent-Pn52w",
+ "description": "Query a Notion database with filtering and sorting. The input should be a JSON string containing the 'filter' and 'sorts' objects. Example input:\n{\"filter\": {\"property\": \"Status\", \"select\": {\"equals\": \"Done\"}}, \"sorts\": [{\"timestamp\": \"created_time\", \"direction\": \"descending\"}]}",
+ "display_name": "List Pages [Notion] "
+ },
+ "selected": false,
+ "width": 384,
+ "height": 517,
+ "positionAbsolute": {
+ "x": -3070.9222948695096,
+ "y": -472.4537855763852
+ },
+ "dragging": false
+ },
+ {
+ "id": "CustomComponent-I8Dec",
+ "type": "genericNode",
+ "position": { "x": -2256.686402636563, "y": -963.4541117792749 },
+ "data": {
+ "type": "CustomComponent",
+ "node": {
+ "template": {
+ "block_id": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "block_id",
+ "display_name": "Page/Block ID",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The ID of the page/block to add the content.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "import json\r\nfrom typing import List, Dict, Any\r\nfrom markdown import markdown\r\nfrom bs4 import BeautifulSoup\r\nimport requests\r\n\r\nfrom langflow import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\nclass AddContentToPage(CustomComponent):\r\n display_name = \"Add Content to Page [Notion]\"\r\n description = \"Convert markdown text to Notion blocks and append them to a Notion page.\"\r\n documentation: str = \"https://developers.notion.com/reference/patch-block-children\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n def build_config(self):\r\n return {\r\n \"markdown_text\": {\r\n \"display_name\": \"Markdown Text\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The markdown text to convert to Notion blocks.\",\r\n \"multiline\": True,\r\n },\r\n \"block_id\": {\r\n \"display_name\": \"Page/Block ID\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The ID of the page/block to add the content.\",\r\n },\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n }\r\n\r\n def build(self, markdown_text: str, block_id: str, notion_secret: str) -> Record:\r\n html_text = markdown(markdown_text)\r\n soup = BeautifulSoup(html_text, 'html.parser')\r\n blocks = self.process_node(soup)\r\n\r\n url = f\"https://api.notion.com/v1/blocks/{block_id}/children\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n data = {\r\n \"children\": blocks,\r\n }\r\n\r\n response = requests.patch(url, headers=headers, json=data)\r\n self.status = str(response.json())\r\n response.raise_for_status()\r\n\r\n result = response.json()\r\n self.status = f\"Appended {len(blocks)} blocks to page with ID: {block_id}\"\r\n return Record(data=result, text=json.dumps(result))\r\n\r\n def process_node(self, node):\r\n blocks = []\r\n if isinstance(node, str):\r\n text = node.strip()\r\n if text:\r\n if text.startswith('#'):\r\n heading_level = text.count('#', 0, 6)\r\n heading_text = text[heading_level:].strip()\r\n if heading_level == 1:\r\n blocks.append(self.create_block('heading_1', heading_text))\r\n elif heading_level == 2:\r\n blocks.append(self.create_block('heading_2', heading_text))\r\n elif heading_level == 3:\r\n blocks.append(self.create_block('heading_3', heading_text))\r\n else:\r\n blocks.append(self.create_block('paragraph', text))\r\n elif node.name == 'h1':\r\n blocks.append(self.create_block('heading_1', node.get_text(strip=True)))\r\n elif node.name == 'h2':\r\n blocks.append(self.create_block('heading_2', node.get_text(strip=True)))\r\n elif node.name == 'h3':\r\n blocks.append(self.create_block('heading_3', node.get_text(strip=True)))\r\n elif node.name == 'p':\r\n code_node = node.find('code')\r\n if code_node:\r\n code_text = code_node.get_text()\r\n language, code = self.extract_language_and_code(code_text)\r\n blocks.append(self.create_block('code', code, language=language))\r\n elif self.is_table(str(node)):\r\n blocks.extend(self.process_table(node))\r\n else:\r\n blocks.append(self.create_block('paragraph', node.get_text(strip=True)))\r\n elif node.name == 'ul':\r\n blocks.extend(self.process_list(node, 'bulleted_list_item'))\r\n elif node.name == 'ol':\r\n blocks.extend(self.process_list(node, 'numbered_list_item'))\r\n elif node.name == 'blockquote':\r\n blocks.append(self.create_block('quote', node.get_text(strip=True)))\r\n elif node.name == 'hr':\r\n blocks.append(self.create_block('divider', ''))\r\n elif node.name == 'img':\r\n blocks.append(self.create_block('image', '', image_url=node.get('src')))\r\n elif node.name == 'a':\r\n blocks.append(self.create_block('bookmark', node.get_text(strip=True), link_url=node.get('href')))\r\n elif node.name == 'table':\r\n blocks.extend(self.process_table(node))\r\n\r\n for child in node.children:\r\n if isinstance(child, str):\r\n continue\r\n blocks.extend(self.process_node(child))\r\n\r\n return blocks\r\n\r\n def extract_language_and_code(self, code_text):\r\n lines = code_text.split('\\n')\r\n language = lines[0].strip()\r\n code = '\\n'.join(lines[1:]).strip()\r\n return language, code\r\n\r\n def is_code_block(self, text):\r\n return text.startswith('```')\r\n\r\n def extract_code_block(self, text):\r\n lines = text.split('\\n')\r\n language = lines[0].strip('`').strip()\r\n code = '\\n'.join(lines[1:]).strip('`').strip()\r\n return language, code\r\n \r\n def is_table(self, text):\r\n rows = text.split('\\n')\r\n if len(rows) < 2:\r\n return False\r\n\r\n has_separator = False\r\n for i, row in enumerate(rows):\r\n if '|' in row:\r\n cells = [cell.strip() for cell in row.split('|')]\r\n cells = [cell for cell in cells if cell] # Remove empty cells\r\n if i == 1 and all(set(cell) <= set('-|') for cell in cells):\r\n has_separator = True\r\n elif not cells:\r\n return False\r\n\r\n return has_separator and len(rows) >= 3\r\n\r\n def process_list(self, node, list_type):\r\n blocks = []\r\n for item in node.find_all('li'):\r\n item_text = item.get_text(strip=True)\r\n checked = item_text.startswith('[x]')\r\n is_checklist = item_text.startswith('[ ]') or checked\r\n\r\n if is_checklist:\r\n item_text = item_text.replace('[x]', '').replace('[ ]', '').strip()\r\n blocks.append(self.create_block('to_do', item_text, checked=checked))\r\n else:\r\n blocks.append(self.create_block(list_type, item_text))\r\n return blocks\r\n\r\n def process_table(self, node):\r\n blocks = []\r\n header_row = node.find('thead').find('tr') if node.find('thead') else None\r\n body_rows = node.find('tbody').find_all('tr') if node.find('tbody') else []\r\n\r\n if header_row or body_rows:\r\n table_width = max(len(header_row.find_all(['th', 'td'])) if header_row else 0,\r\n max(len(row.find_all(['th', 'td'])) for row in body_rows))\r\n\r\n table_block = self.create_block('table', '', table_width=table_width, has_column_header=bool(header_row))\r\n blocks.append(table_block)\r\n\r\n if header_row:\r\n header_cells = [cell.get_text(strip=True) for cell in header_row.find_all(['th', 'td'])]\r\n header_row_block = self.create_block('table_row', header_cells)\r\n blocks.append(header_row_block)\r\n\r\n for row in body_rows:\r\n cells = [cell.get_text(strip=True) for cell in row.find_all(['th', 'td'])]\r\n row_block = self.create_block('table_row', cells)\r\n blocks.append(row_block)\r\n\r\n return blocks\r\n \r\n def create_block(self, block_type: str, content: str, **kwargs) -> Dict[str, Any]:\r\n block = {\r\n \"object\": \"block\",\r\n \"type\": block_type,\r\n block_type: {},\r\n }\r\n\r\n if block_type in [\"paragraph\", \"heading_1\", \"heading_2\", \"heading_3\", \"bulleted_list_item\", \"numbered_list_item\", \"quote\"]:\r\n block[block_type][\"rich_text\"] = [\r\n {\r\n \"type\": \"text\",\r\n \"text\": {\r\n \"content\": content,\r\n },\r\n }\r\n ]\r\n elif block_type == 'to_do':\r\n block[block_type][\"rich_text\"] = [\r\n {\r\n \"type\": \"text\",\r\n \"text\": {\r\n \"content\": content,\r\n },\r\n }\r\n ]\r\n block[block_type]['checked'] = kwargs.get('checked', False)\r\n elif block_type == 'code':\r\n block[block_type]['rich_text'] = [\r\n {\r\n \"type\": \"text\",\r\n \"text\": {\r\n \"content\": content,\r\n },\r\n }\r\n ]\r\n block[block_type]['language'] = kwargs.get('language', 'plain text')\r\n elif block_type == 'image':\r\n block[block_type] = {\r\n \"type\": \"external\",\r\n \"external\": {\r\n \"url\": kwargs.get('image_url', '')\r\n }\r\n }\r\n elif block_type == 'divider':\r\n pass\r\n elif block_type == 'bookmark':\r\n block[block_type]['url'] = kwargs.get('link_url', '')\r\n elif block_type == 'table':\r\n block[block_type]['table_width'] = kwargs.get('table_width', 0)\r\n block[block_type]['has_column_header'] = kwargs.get('has_column_header', False)\r\n block[block_type]['has_row_header'] = kwargs.get('has_row_header', False)\r\n elif block_type == 'table_row':\r\n block[block_type]['cells'] = [[{'type': 'text', 'text': {'content': cell}} for cell in content]]\r\n\r\n return block",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "markdown_text": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "markdown_text",
+ "display_name": "Markdown Text",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The markdown text to convert to Notion blocks.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": "# Heading 1\n\n## Heading 2\n\n### Heading 3\n\nThis is a regular paragraph.\n\nHere's another paragraph with an image:\n\n\n## Checklist\n- [x] Completed task\n- [ ] Incomplete task\n- [x] Another completed task\n\n## Numbered List\n1. First item\n2. Second item\n3. Third item\n\n## Bulleted List\n- Item 1\n- Item 2\n- Item 3\n\n## Code Block\n```python\ndef hello_world():\n print(\"Hello, World!\")\n```\n\n## Quote\n> This is a blockquote.\n> It can span multiple lines.\n\n## Horizontal Rule\n---\n\n\n## Link\n[Notion API Documentation](https://developers.notion.com)\n\n"
+ },
+ "notion_secret": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "notion_secret",
+ "display_name": "Notion Secret",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The Notion integration token.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": ""
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "Convert markdown text to Notion blocks and append them to a Notion page.",
+ "icon": "NotionDirectoryLoader",
+ "base_classes": ["Record"],
+ "display_name": "Add Content to Page [Notion] ",
+ "documentation": "https://developers.notion.com/reference/patch-block-children",
+ "custom_fields": {
+ "markdown_text": null,
+ "block_id": null,
+ "notion_secret": null
+ },
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [],
+ "beta": false,
+ "official": false
+ },
+ "id": "CustomComponent-I8Dec"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 497,
+ "positionAbsolute": {
+ "x": -2256.686402636563,
+ "y": -963.4541117792749
+ },
+ "dragging": false
+ },
+ {
+ "id": "CustomComponent-ZcsA9",
+ "type": "genericNode",
+ "position": { "x": -3488.029350341937, "y": -965.3756250644985 },
+ "data": {
+ "type": "CustomComponent",
+ "node": {
+ "template": {
+ "code": {
+ "type": "code",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": true,
+ "value": "import requests\r\nfrom typing import Dict, Any, List\r\nfrom langflow.custom import CustomComponent\r\nfrom langflow.schema import Record\r\n\r\nclass NotionSearch(CustomComponent):\r\n display_name = \"Search Notion\"\r\n description = (\r\n \"Searches all pages and databases that have been shared with an integration.\"\r\n )\r\n documentation: str = \"https://docs.langflow.org/integrations/notion/search\"\r\n icon = \"NotionDirectoryLoader\"\r\n\r\n field_order = [\r\n \"notion_secret\",\r\n \"query\",\r\n \"filter_value\",\r\n \"sort_direction\",\r\n ]\r\n\r\n def build_config(self):\r\n return {\r\n \"notion_secret\": {\r\n \"display_name\": \"Notion Secret\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The Notion integration token.\",\r\n \"password\": True,\r\n },\r\n \"query\": {\r\n \"display_name\": \"Search Query\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The text that the API compares page and database titles against.\",\r\n },\r\n \"filter_value\": {\r\n \"display_name\": \"Filter Type\",\r\n \"field_type\": \"str\",\r\n \"info\": \"Limits the results to either only pages or only databases.\",\r\n \"options\": [\"page\", \"database\"],\r\n \"default_value\": \"page\",\r\n },\r\n \"sort_direction\": {\r\n \"display_name\": \"Sort Direction\",\r\n \"field_type\": \"str\",\r\n \"info\": \"The direction to sort the results.\",\r\n \"options\": [\"ascending\", \"descending\"],\r\n \"default_value\": \"descending\",\r\n },\r\n }\r\n\r\n def build(\r\n self,\r\n notion_secret: str,\r\n query: str = \"\",\r\n filter_value: str = \"page\",\r\n sort_direction: str = \"descending\",\r\n ) -> List[Record]:\r\n try:\r\n url = \"https://api.notion.com/v1/search\"\r\n headers = {\r\n \"Authorization\": f\"Bearer {notion_secret}\",\r\n \"Content-Type\": \"application/json\",\r\n \"Notion-Version\": \"2022-06-28\",\r\n }\r\n\r\n data = {\r\n \"query\": query,\r\n \"filter\": {\r\n \"value\": filter_value,\r\n \"property\": \"object\"\r\n },\r\n \"sort\":{\r\n \"direction\": sort_direction,\r\n \"timestamp\": \"last_edited_time\"\r\n }\r\n }\r\n\r\n response = requests.post(url, headers=headers, json=data)\r\n response.raise_for_status()\r\n\r\n results = response.json()\r\n records = []\r\n combined_text = f\"Results found: {len(results['results'])}\\n\\n\"\r\n for result in results['results']:\r\n result_data = {\r\n 'id': result['id'],\r\n 'type': result['object'],\r\n 'last_edited_time': result['last_edited_time'],\r\n }\r\n \r\n if result['object'] == 'page':\r\n result_data['title_or_url'] = result['url']\r\n text = f\"id: {result['id']}\\ntitle_or_url: {result['url']}\\n\"\r\n elif result['object'] == 'database':\r\n if 'title' in result and isinstance(result['title'], list) and len(result['title']) > 0:\r\n result_data['title_or_url'] = result['title'][0]['plain_text']\r\n text = f\"id: {result['id']}\\ntitle_or_url: {result['title'][0]['plain_text']}\\n\"\r\n else:\r\n result_data['title_or_url'] = \"N/A\"\r\n text = f\"id: {result['id']}\\ntitle_or_url: N/A\\n\"\r\n\r\n text += f\"type: {result['object']}\\nlast_edited_time: {result['last_edited_time']}\\n\\n\"\r\n combined_text += text\r\n records.append(Record(text=text, data=result_data))\r\n \r\n self.status = combined_text\r\n return records\r\n\r\n except Exception as e:\r\n self.status = f\"An error occurred: {str(e)}\"\r\n return [Record(text=self.status, data=[])]",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "code",
+ "advanced": true,
+ "dynamic": true,
+ "info": "",
+ "load_from_db": false,
+ "title_case": false
+ },
+ "filter_value": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "database",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": ["page", "database"],
+ "name": "filter_value",
+ "display_name": "Filter Type",
+ "advanced": false,
+ "dynamic": false,
+ "info": "Limits the results to either only pages or only databases.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "notion_secret": {
+ "type": "str",
+ "required": true,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "fileTypes": [],
+ "file_path": "",
+ "password": true,
+ "name": "notion_secret",
+ "display_name": "Notion Secret",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The Notion integration token.",
+ "load_from_db": true,
+ "title_case": false,
+ "input_types": ["Text"],
+ "value": ""
+ },
+ "query": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": false,
+ "show": true,
+ "multiline": false,
+ "value": "",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "name": "query",
+ "display_name": "Search Query",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The text that the API compares page and database titles against.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "sort_direction": {
+ "type": "str",
+ "required": false,
+ "placeholder": "",
+ "list": true,
+ "show": true,
+ "multiline": false,
+ "value": "descending",
+ "fileTypes": [],
+ "file_path": "",
+ "password": false,
+ "options": ["ascending", "descending"],
+ "name": "sort_direction",
+ "display_name": "Sort Direction",
+ "advanced": false,
+ "dynamic": false,
+ "info": "The direction to sort the results.",
+ "load_from_db": false,
+ "title_case": false,
+ "input_types": ["Text"]
+ },
+ "_type": "CustomComponent"
+ },
+ "description": "Searches all pages and databases that have been shared with an integration.",
+ "icon": "NotionDirectoryLoader",
+ "base_classes": ["Record"],
+ "display_name": "Search [Notion]",
+ "documentation": "https://docs.langflow.org/integrations/notion/search",
+ "custom_fields": {
+ "notion_secret": null,
+ "query": null,
+ "filter_value": null,
+ "sort_direction": null
+ },
+ "output_types": ["Record"],
+ "field_formatters": {},
+ "frozen": false,
+ "field_order": [
+ "notion_secret",
+ "query",
+ "filter_value",
+ "sort_direction"
+ ],
+ "beta": false
+ },
+ "id": "CustomComponent-ZcsA9",
+ "description": "Searches all pages and databases that have been shared with an integration.",
+ "display_name": "Search [Notion]"
+ },
+ "selected": false,
+ "width": 384,
+ "height": 591,
+ "positionAbsolute": {
+ "x": -3488.029350341937,
+ "y": -965.3756250644985
+ },
+ "dragging": false
+ }
+ ],
+ "edges": [],
+ "viewport": {
+ "x": 2623.378922967084,
+ "y": 696.8541079344027,
+ "zoom": 0.5981384177708997
+ }
+ },
+ "description": "A Bundle containing Notion components for Page and Database manipulation. You can list pages, users, and databases, update properties, create new pages, and add content to Notion pages.",
+ "name": "Notion - Components",
+ "last_tested_version": "1.0.0a36",
+ "is_component": false
+}
diff --git a/docs/static/logos/twitter.svg b/docs/static/logos/twitter.svg
index 027488d3c..437e2bfdd 100644
--- a/docs/static/logos/twitter.svg
+++ b/docs/static/logos/twitter.svg
@@ -1,3 +1,3 @@
-