* Update model kwargs and temperature values

* Update keyboard shortcuts for advanced editing

* make Message field have no handles

* Update OpenAI API Key handling in OpenAIEmbeddingsComponent

* Remove unnecessary field_type key from CustomComponent class

* Update required field behavior in CustomComponent class

* Refactor AzureOpenAIModel.py: Removed unnecessary "required" attribute from input parameters

* Update BaiduQianfanChatModel and OpenAIModel configurations

* Fix range_spec step type validation

* Update RangeSpec step_type default value to "float"
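For context, the step-type validation above amounts to checking that the declared `step_type` is one of the allowed names and matches the numeric step. A minimal, hypothetical sketch — field names are illustrative, not the exact Langflow `RangeSpec`:

```python
from dataclasses import dataclass


@dataclass
class RangeSpec:
    min: float = -1.0
    max: float = 1.0
    step: float = 0.1
    step_type: str = "float"  # default is now "float", per the commit above

    def __post_init__(self):
        # Reject unknown step types outright.
        if self.step_type not in ("int", "float"):
            raise ValueError(
                f"step_type must be 'int' or 'float', got {self.step_type!r}"
            )
        # An integer step type with a fractional step is a contradiction.
        if self.step_type == "int" and self.step != int(self.step):
            raise ValueError("step_type 'int' requires a whole-number step")
```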

* Fix Save debounce

* Update parameterUtils to use debounce instead of throttle
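The debounce-over-throttle switch above is the classic auto-save pattern: only the last edit in a burst should trigger a save, whereas a throttle fires during the burst and can persist stale intermediate state. A hypothetical Python stand-in for the TypeScript utility (names are illustrative):

```python
import threading


def debounce(wait: float):
    """Delay calls to the wrapped function until `wait` seconds pass with
    no new call; only the final call in a burst actually runs."""
    def decorator(fn):
        timer = None
        lock = threading.Lock()

        def debounced(*args, **kwargs):
            nonlocal timer
            with lock:
                if timer is not None:
                    timer.cancel()  # a newer call supersedes the pending one
                timer = threading.Timer(wait, fn, args, kwargs)
                timer.start()

        return debounced
    return decorator
```

With this wrapper, three rapid calls such as `save(1); save(2); save(3)` collapse into a single invocation with the last arguments once the burst quiets down.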

* Update input type options in schemas and graph base classes

* Refactor run_flow_with_caching endpoint to include simplified and experimental versions

* Add PythonFunctionComponent and test case for it

* Add nest_asyncio to fix event loop issue

* Refactor test_initial_setup.py to use RunOutputs instead of ResultData

* Remove unused code in test_endpoints.py

* Add asyncio loop to uvicorn command

* Refactor load_session method to handle coroutine result

* Fixed saving

* Fixed debouncing

* Add InputType and OutputType literals to schema.py
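The literal types added above constrain the accepted input/output kinds at type-check time; a runtime guard can mirror them. A hedged sketch — the actual values enumerated in `schema.py` may differ:

```python
from typing import Literal, get_args

# Hypothetical values; the real literals in schema.py may list others.
InputType = Literal["chat", "text", "any"]
OutputType = Literal["chat", "text", "any", "debug"]


def check_input_type(value: str) -> str:
    """Runtime guard mirroring the static Literal type."""
    if value not in get_args(InputType):
        raise ValueError(f"invalid input type: {value!r}")
    return value
```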

* Update input type in Graph class

* Add new schema for simplified API request

* Add delete_messages function and update test_successful_run assertions

* Add STREAM_INFO_TEXT constant to model components

* Add session_id to simplified_run_flow_with_caching endpoint

* Add field_typing import to OpenAIModel.py

* update starter projects

* Add constants for Langflow base module

* Update setup.py to include latest component versions

* Update Starter Examples

* sets starter_project fixture to Basic Prompting

* Refactor test_endpoints.py: Update test names and add new tests for different output types

* Update HuggingFace Spaces link and add image for dark mode

* Remove filepath reference

* Update Vertex params in base.py

* Add tests for different input types

* Add type annotations and improve test coverage

* Add duplicate space link to README

* Update HuggingFace Spaces badge in README

* Add Python 3.10 installation requirement to README

* Refactor flow running endpoints

* Refactor SimplifiedAPIRequest and add documentation for Tweaks
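Roughly, the simplified request pairs a single input value with optional per-run component overrides ("tweaks"). A hypothetical sketch of its shape — field names are illustrative, not the exact Langflow schema:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class SimplifiedAPIRequest:
    """Illustrative shape of the simplified run-flow request body."""
    input_value: str = ""
    input_type: str = "chat"
    output_type: str = "chat"
    # Tweaks override component parameters for this run only, keyed by
    # component id, e.g. {"OpenAIModel-abc": {"temperature": 0.2}}.
    tweaks: dict = field(default_factory=dict)
    session_id: Optional[str] = None
```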

* Refactor input_request parameter in simplified_run_flow function

* Add support for retrieving specific component output

* Add custom Uvicorn worker for Langflow application

* Add asyncio loop to LangflowApplication initialization

* Update Makefile with new variables and start command

* Fix indentation in Makefile

* Refactor run_graph function to add support for running a JSON flow

* Refactor getChatInputField function and update API code

* Update HuggingFace Spaces documentation with duplication process

* Add asyncio event loop to uvicorn command

* Add installation of backend in start target

* update some starter projects

* Fix formatting in hugging-face-spaces.mdx

* Update installation instructions for Langflow

* set examples order

* Update start command in Makefile

* Add installation and usage instructions for Langflow

* Update Langflow installation and usage instructions

* Fix langflow command in README.md

* Fix broken link to HuggingFace Spaces guide

* Add new SVG assets for blog post, chat bot, and cloud docs

* Refactor example rendering in NewFlowModal

* Add new SVG file for short bio section

* Remove unused import and add new component

* Update title in usage.mdx

* Update HuggingFace Spaces heading in usage.mdx

* Update usage instructions in getting-started/usage.mdx

* Update cache option in usage documentation

* Remove 'advanced' flag from 'n_messages' parameter in MemoryComponent.py

* Refactor code to improve performance and readability

* Update project names and flow examples

* fix document qa example

* Remove commented out code in sidebars.js

* Delete unused documentation files

* Fix bug in login functionality

* Remove global variables from components

* Fix bug in login functionality

* fix modal returning to input

* Update max-width of chat message sender name

* Update styling for chat message component

* Refactor OpenAIEmbeddingsComponent signature

* Update usage.mdx file

* Update path in Makefile

* Add new migration and what's new documentation files

* Add new chapters and migration guides

* Update version to 0.0.13 in pyproject.toml

* new locks

* Update dependencies in pyproject.toml

* general fixes

* Update dependencies in pyproject.toml and poetry.lock files

* add padding to modal

*  (undrawCards/index.tsx): update the SVG used for BasicPrompt component to undraw_short_bio_re_fmx0.svg to match the desired design
♻️ (undrawCards/index.tsx): adjust the width and height of the BasicPrompt SVG to 65% to improve the visual appearance

* Commented out components/data in sidebars.js

* Refactor component names in outputs.mdx

* Update embedded chat script URL

* Add data component and fix formatting in outputs component

* Update dependencies in poetry.lock and pyproject.toml

* Update dependencies in poetry.lock and pyproject.toml

* Refactor code to improve performance and readability

* Update dependencies in poetry.lock and pyproject.toml

* Fixed IO Modal updates

* Remove dead code at API Modal

* Fixed overflow at CodeTabsComponent tweaks page

*  (NewFlowModal/index.tsx): update the name of the example from "Blog Writter" to "Blog Writer" for better consistency and clarity

* Update dependencies versions

* Update langflow-base to version 0.0.15 and fix setup_env script

* Update dependencies in pyproject.toml

* Lock dependencies in parallel

* Add logging statement to setup_app function

* Fix Ace not having type="module" and breaking build

* Update authentication settings for access token cookie

* Update package versions in package-lock.json

* Add scripts directory to Dockerfile

* Add setup_env command to build_and_run target

* Remove unnecessary make command in setup_env

* Remove unnecessary installation step in build_and_run

* Add debug configuration for CLI

* 🔧 chore(Makefile): refactor build_langflow target to use a separate script for updating dependencies and building
 feat(update_dependencies.py): add script to update pyproject.toml dependency version based on langflow-base version in src/backend/base/pyproject.toml
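In outline, such a script backs up `pyproject.toml` (hence the `*.bak` entries added to `.gitignore` in this change) and pins the `langflow-base` dependency to the version just built from `src/backend/base`. A hypothetical sketch — the real `update_dependencies.py` may differ:

```python
import re
from pathlib import Path


def pin_langflow_base(pyproject: Path, new_version: str) -> None:
    """Back up pyproject.toml and pin the langflow-base dependency."""
    text = pyproject.read_text()
    # Keep a .bak copy so the Makefile can restore the original afterwards.
    pyproject.with_name(pyproject.name + ".bak").write_text(text)
    updated = re.sub(
        r'langflow-base\s*=\s*"[^"]*"',
        f'langflow-base = "{new_version}"',
        text,
    )
    pyproject.write_text(updated)
```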

* Add number_of_results parameter to AstraDBSearchComponent

* Update HuggingFace Spaces links

* Remove duplicate imports in hugging-face-spaces.mdx

* Add number_of_results parameter to vector search components

* Fixed supabase not commited

* Revert "Fixed supabase not commited"

This reverts commit afb10a6262.

* Update duplicate-space.png image

* Delete unused files and components

* Add/update script to update dependencies

* Add .bak files to .gitignore

* Update version numbers and remove unnecessary dependencies

* Update langflow-base dependency path

* Add Text import to VertexAiModel.py

* Update langflow-base version to 0.0.16 and update dependencies

* Delete start projects and commit session in delete_start_projects function

* Refactor backend startup script to handle autologin option

* Update poetry installation script to include pipx update check

* Update pipx installation script for different operating systems

* Update Makefile to improve setup process

* Add error handling on streaming and fix streaming bug on error
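The fix above amounts to catching exceptions raised mid-stream and emitting a final error event, rather than letting the connection die silently. A hypothetical sketch of the pattern (event names are illustrative):

```python
import json


async def stream_with_errors(gen):
    """Wrap a token stream so a failure surfaces as an error event
    instead of abruptly terminating the response."""
    try:
        async for chunk in gen:
            yield json.dumps({"event": "token", "data": chunk})
    except Exception as exc:  # surface any generation failure to the client
        yield json.dumps({"event": "error", "data": str(exc)})
```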

* Added description to Blog Writer

* Sort base classes alphabetically

* Update duplicate-space.png image

* update position on langflow prompt chaining

* Add Langflow CLI and first steps documentation

* Add exception handling for missing 'content' field in search_with_vector_store method

* Remove unused import and update type hinting

* fix bug on edges after creating group component

* Refactor APIRequest class and update model imports

* Remove unused imports and fix formatting issues

* Refactor reactflowUtils and styleUtils

* Add CLI documentation to getting-started/cli.mdx

* Add CLI usage instructions

* Add ZoomableImage component to first-steps.mdx

* Update CLI and first steps documentation

* Remove duplicate import and add new imports for ThemedImage and useBaseUrl

* Update Langflow CLI documentation link

* Remove first-steps.mdx and update index.mdx and sidebars.js

* Update Docusaurus dependencies

* Add AstraDB RAG Flow guide

* Remove unused imports

* Remove unnecessary import statement

* Refactor guide for better readability

* Add data component documentation

* Update component headings and add prompt template

* Fix logging level and version display

* Add datetime import and buffer for alembic log

* Update flow names in NewFlowModal and documentation

* Add starter projects to sidebars.js

* Fix error handling in DirectoryReader class

* Handle exception when loading components in setup.py

* Update version numbers in pyproject.toml files

* Update build_langflow_base and build_langflow_backup in Makefile

* Added docs

* Update dependencies and build process

* Add Admonition component for API Key documentation

* Update API endpoint in async-api.mdx

* Remove async-api guidelines

* Fix UnicodeDecodeError in DirectoryReader

* Update dependency version and fix encoding issues

* Add conditional build and publish for base and main projects

* Update version to 1.0.0a2 in pyproject.toml

* Remove duplicate imports and unnecessary code in custom-component.mdx

* Fix poetry lock command in Makefile

* Update package versions in pyproject.toml

* Remove unused components and update imports

* 📦 chore(pre-release-base.yml): add pre-release workflow for base project
📦 chore(pre-release-langflow.yml): add pre-release workflow for langflow project

* Add ChatLiteLLMModelComponent to models package

* Add frontend installation and build steps

* Add Dockerfile for building and pushing base image

* Add emoji package and nest-asyncio dependency

* 📝 (components.mdx): update margin style of ZoomableImage to improve spacing
📝 (features.mdx): update margin style of ZoomableImage to improve spacing
📝 (login.mdx): update margin style of ZoomableImage to improve spacing

* Fix module import error in validate.py

* Fix error message in directory_reader.py

* Update version import and handle ImportError

* Add cryptography and langchain-openai dependencies

* Update poetry installation and remove poetry-monorepo-dependency-plugin

* Update workflow and Dockerfile for Langflow base pre-release

* Update display names and descriptions for AstraDB components

* Update installation instructions for Langflow

* Update Astra DB links and remove unnecessary imports

* Rename AstraDB

* Add new components and images

* Update HuggingFace Spaces URLs

* Update Langflow documentation and add new starter projects

* Update flow name to "Basic Prompting (Hello, world!)" in relevant files

* Update Basic Prompting flow name to "Ahoy World!"

* Remove HuggingFace Spaces documentation

* Add new files and update sidebars.js

* Remove async-tasks.mdx and update sidebars.js

* Update starter project URLs

* Enable migration of global variables

* Update OpenAIEmbeddings deployment and model

* 📝 (inputs.mdx): add margin to image style to improve spacing and center alignment

📝 (rag-with-astradb.mdx): add margin to image styles to improve spacing and readability

* Update welcome message in index.mdx

* Add global variable feature to Langflow documentation

* Reorganized sidebar categories

* Update migration documentation

* Refactor SplitTextComponent class to accept inputs of type Record and Text

* Adjust embeddings docs

*  (cardComponent/index.tsx): add a minimum height to the card component to ensure consistent layout and prevent content from overlapping when the card is empty or has minimal content

* Update flow name from "Ahoy World!" to "Hello, world!"

* Update documentation for embeddings, models, and vector stores

* Update CreateRecordComponent and parameterUtils.ts

* Add documentation for Text and Record types

* Remove commented lines in sidebars.js

* Add run_flow_from_json function to load.py

* Update Langflow package to run flow from JSON file

* Fix type annotations and import errors

* Refactor tests and fix test data

---------

Co-authored-by: Rodrigo Nader <rodrigosilvanader@gmail.com>
Co-authored-by: anovazzi1 <otavio2204@gmail.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
Co-authored-by: carlosrcoelho <carlosrodrigo.coelho@gmail.com>
Co-authored-by: cristhianzl <cristhian.lousa@gmail.com>
Co-authored-by: Matheus <jacquesmats@gmail.com>
Gabriel Luiz Freitas Almeida 2024-04-04 02:46:44 -03:00 committed by GitHub
commit 05cd6e4fd7
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
853 changed files with 59936 additions and 15456 deletions


@@ -1,6 +1,6 @@
.venv/
**/aws
# node_modules
node_modules
**/node_modules/
dist/
**/build/


@@ -56,6 +56,13 @@ LANGFLOW_REMOVE_API_KEYS=
# LANGFLOW_REDIS_CACHE_EXPIRE (default: 3600)
LANGFLOW_CACHE_TYPE=
# Set AUTO_LOGIN to false if you want to disable auto login
# and use the login form to login. LANGFLOW_SUPERUSER and LANGFLOW_SUPERUSER_PASSWORD
# must be set if AUTO_LOGIN is set to false
# Values: true, false
LANGFLOW_AUTO_LOGIN=
# Superuser username
# Example: LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER=


@@ -14,7 +14,7 @@ on:
- "src/backend/**"
env:
POETRY_VERSION: "1.7.0"
POETRY_VERSION: "1.8.2"
jobs:
lint:
@@ -22,7 +22,6 @@ jobs:
strategy:
matrix:
python-version:
- "3.9"
- "3.10"
- "3.11"
steps:
@@ -32,12 +31,15 @@ jobs:
pipx install poetry==$POETRY_VERSION
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
id: setup-python
with:
python-version: ${{ matrix.python-version }}
cache: poetry
- name: Install dependencies
- name: Install Python dependencies
run: |
poetry env use ${{ matrix.python-version }}
poetry install
if: ${{ steps.setup-python.outputs.cache-hit != 'true' }}
- name: Analysing the code with our lint
run: |
make lint


@@ -1,4 +1,4 @@
name: pre-release
name: Langflow Base Pre-release
on:
pull_request:
@@ -11,7 +11,7 @@ on:
workflow_dispatch:
env:
POETRY_VERSION: "1.5.1"
POETRY_VERSION: "1.8.2"
jobs:
if_release:
@@ -27,7 +27,7 @@ jobs:
python-version: "3.10"
cache: "poetry"
- name: Build project for distribution
run: make build
run: make build base=true
- name: Check Version
id: check-version
run: |
@@ -46,7 +46,7 @@ jobs:
env:
POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_API_TOKEN }}
run: |
poetry publish
poetry publish base=true
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
@@ -61,5 +61,6 @@ jobs:
with:
context: .
push: true
file: ./build_and_push.Dockerfile
tags: logspace/langflow:${{ steps.check-version.outputs.version }}
file: ./build_and_push_base.Dockerfile
tags: |
logspace/langflow:base-${{ steps.check-version.outputs.version }}


@@ -0,0 +1,70 @@
name: Langflow Pre-release
on:
pull_request:
types:
- closed
branches:
- dev
paths:
- "pyproject.toml"
workflow_dispatch:
workflow_run:
workflows: ["pre-release-base"]
types: [completed]
branches: [dev]
env:
POETRY_VERSION: "1.8.2"
jobs:
if_release:
if: ${{ (github.event.pull_request.merged == true) && contains(github.event.pull_request.labels.*.name, 'pre-release') }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install poetry
run: pipx install poetry==$POETRY_VERSION
- name: Set up Python 3.10
uses: actions/setup-python@v5
with:
python-version: "3.10"
cache: "poetry"
- name: Build project for distribution
run: make build main=true
- name: Check Version
id: check-version
run: |
echo version=$(poetry version --short) >> $GITHUB_OUTPUT
- name: Create Release
uses: ncipollo/release-action@v1
with:
artifacts: "dist/*"
token: ${{ secrets.GITHUB_TOKEN }}
draft: false
generateReleaseNotes: true
prerelease: true
tag: v${{ steps.check-version.outputs.version }}
commit: dev
- name: Publish to PyPI
env:
POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_API_TOKEN }}
run: |
poetry publish main=true
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
file: ./build_and_push.Dockerfile
tags: |
logspace/langflow:${{ steps.check-version.outputs.version }}


@@ -15,7 +15,7 @@ on:
- "src/backend/**"
env:
POETRY_VERSION: "1.5.0"
POETRY_VERSION: "1.8.2"
jobs:
build:
@@ -33,11 +33,15 @@ jobs:
run: pipx install poetry==$POETRY_VERSION
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
id: setup-python
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
- name: Install dependencies
run: poetry install
- name: Install Python dependencies
run: |
poetry env use ${{ matrix.python-version }}
poetry install
if: ${{ steps.setup-python.outputs.cache-hit != 'true' }}
- name: Run unit tests
run: |
make tests


@@ -10,7 +10,7 @@ on:
- "pyproject.toml"
env:
POETRY_VERSION: "1.5.1"
POETRY_VERSION: "1.8.2"
jobs:
if_release:

.github/workflows/typescript_test.yml (new file)

@@ -0,0 +1,149 @@
name: Run Frontend Tests
on:
pull_request:
paths:
- "src/frontend/**"
env:
POETRY_VERSION: "1.8.2"
NODE_VERSION: "21"
PYTHON_VERSION: "3.10"
# Define the directory where Playwright browsers will be installed.
# Adjust if your project uses a different path.
PLAYWRIGHT_BROWSERS_PATH: "ms-playwright"
jobs:
setup-and-test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
shardIndex: [1, 2, 3, 4]
shardTotal: [4]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v3
id: setup-node
with:
node-version: ${{ env.NODE_VERSION }}
cache: "npm"
- name: Install Node.js dependencies
run: |
cd src/frontend
npm ci
if: ${{ steps.setup-node.outputs.cache-hit != 'true' }}
# Attempt to restore the correct Playwright browser binaries based on the
# currently installed version of Playwright (The browser binary versions
# may change with Playwright versions).
# Note: Playwright's cache directory is hard coded because that's what it
# says to do in the docs. There doesn't appear to be a command that prints
# it out for us.
# - uses: actions/cache@v4
# id: playwright-cache
# with:
# path: ${{ env.PLAYWRIGHT_BROWSERS_PATH }}
# key: "${{ runner.os }}-playwright-${{ hashFiles('src/frontend/package-lock.json') }}"
# # As a fallback, if the Playwright version has changed, try use the
# # most recently cached version. There's a good chance that at least one
# # of the browser binary versions haven't been updated, so Playwright can
# # skip installing that in the next step.
# # Note: When falling back to an old cache, `cache-hit` (used below)
# # will be `false`. This allows us to restore the potentially out of
# # date cache, but still let Playwright decide if it needs to download
# # new binaries or not.
# restore-keys: |
# ${{ runner.os }}-playwright-
- name: Cache playwright binaries
uses: actions/cache@v4
id: playwright-cache
with:
path: |
~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('src/frontend/package-lock.json') }}
- name: Install Frontend dependencies
run: |
cd src/frontend
npm ci
- name: Install Playwright's browser binaries
run: |
cd src/frontend
npx playwright install --with-deps
if: steps.playwright-cache.outputs.cache-hit != 'true'
- name: Install Playwright's dependencies
run: |
cd src/frontend
npx playwright install-deps
if: steps.playwright-cache.outputs.cache-hit != 'true'
# If the Playwright browser binaries weren't able to be restored, we tell
# playwright to install everything for us.
# - name: Install Playwright's dependencies
# if: steps.playwright-cache.outputs.cache-hit != 'true'
# run: npx playwright install --with-deps
- name: Install Poetry
run: pipx install "poetry==${{ env.POETRY_VERSION }}"
- name: Set up Python
uses: actions/setup-python@v5
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
cache: "poetry"
- name: Install Python dependencies
run: |
poetry env use ${{ env.PYTHON_VERSION }}
poetry install
if: ${{ steps.setup-python.outputs.cache-hit != 'true' }}
- name: Run Playwright Tests
run: |
cd src/frontend
npx playwright test --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }}
- name: Upload blob report to GitHub Actions Artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: blob-report-${{ matrix.shardIndex }}
path: src/frontend/blob-report
retention-days: 1
merge-reports:
needs: setup-and-test
runs-on: ubuntu-latest
if: always()
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: ${{ env.NODE_VERSION }}
- name: Download blob reports from GitHub Actions Artifacts
uses: actions/download-artifact@v4
with:
path: all-blob-reports
pattern: blob-report-*
merge-multiple: true
- name: Merge into HTML Report
run: |
npx playwright merge-reports --reporter html ./all-blob-reports
- name: Upload HTML report
uses: actions/upload-artifact@v4
with:
name: html-report--attempt-${{ github.run_attempt }}
path: playwright-report
retention-days: 14

.gitignore

@@ -258,5 +258,10 @@ langflow.db
/tmp/*
src/backend/langflow/frontend/
src/backend/base/langflow/frontend/
.docker
scratchpad*
chroma*/*
stuff/*
src/frontend/playwright-report/index.html
*.bak

.vscode/launch.json

@@ -13,10 +13,32 @@
"7860",
"--reload",
"--log-level",
"debug"
"debug",
"--loop",
"asyncio"
],
"jinja": true,
"justMyCode": true,
"justMyCode": false,
"env": {
"LANGFLOW_LOG_LEVEL": "debug"
},
"envFile": "${workspaceFolder}/.env"
},
{
"name": "Debug CLI",
"type": "python",
"request": "launch",
"module": "langflow",
"args": [
"run",
"--path",
"${workspaceFolder}/src/backend/langflow/frontend"
],
"jinja": true,
"justMyCode": false,
"env": {
"LANGFLOW_LOG_LEVEL": "debug"
},
"envFile": "${workspaceFolder}/.env"
},
{

Makefile

@@ -1,12 +1,31 @@
.PHONY: all init format lint build build_frontend install_frontend run_frontend run_backend dev help tests coverage
all: help
log_level ?= debug
host ?= 0.0.0.0
port ?= 7860
env ?= .env
open_browser ?= true
path = src/backend/base/langflow/frontend
setup_poetry:
pipx install poetry
add:
@echo 'Adding dependencies'
ifdef devel
cd src/backend/base && poetry add --group dev $(devel)
endif
ifdef main
poetry add $(main)
endif
ifdef base
cd src/backend/base && poetry add $(base)
endif
init:
@echo 'Installing pre-commit hooks'
git config core.hooksPath .githooks
@echo 'Making pre-commit hook executable'
chmod +x .githooks/pre-commit
@echo 'Installing backend dependencies'
make install_backend
@echo 'Installing frontend dependencies'
@@ -32,12 +51,15 @@ format:
lint:
make install_backend
poetry run mypy src/backend/langflow
poetry run mypy --namespace-packages -p "langflow"
poetry run ruff . --fix
install_frontend:
cd src/frontend && npm install
install_frontendci:
cd src/frontend && npm ci
install_frontendc:
cd src/frontend && rm -rf node_modules package-lock.json && npm install
@@ -47,22 +69,57 @@ run_frontend:
tests_frontend:
ifeq ($(UI), true)
cd src/frontend && ./run-tests.sh --ui
cd src/frontend && npx playwright test --ui --project=chromium
else
cd src/frontend && ./run-tests.sh
cd src/frontend && npx playwright test --project=chromium
endif
run_cli:
poetry run langflow run --path src/frontend/build
@echo 'Running the CLI'
@make install_frontend > /dev/null
@echo 'Install backend dependencies'
@make install_backend > /dev/null
@echo 'Building the frontend'
@make build_frontend > /dev/null
ifdef env
@make start env=$(env) host=$(host) port=$(port) log_level=$(log_level)
else
@make start host=$(host) port=$(port) log_level=$(log_level)
endif
run_cli_debug:
poetry run langflow run --path src/frontend/build --log-level debug
@echo 'Running the CLI in debug mode'
@make install_frontend > /dev/null
@echo 'Building the frontend'
@make build_frontend > /dev/null
@echo 'Install backend dependencies'
@make install_backend > /dev/null
ifdef env
@make start env=$(env) host=$(host) port=$(port) log_level=debug
else
@make start host=$(host) port=$(port) log_level=debug
endif
start:
@echo 'Running the CLI'
ifeq ($(open_browser),false)
@make install_backend && poetry run langflow run --path $(path) --log-level $(log_level) --host $(host) --port $(port) --env-file $(env) --no-open-browser
else
@make install_backend && poetry run langflow run --path $(path) --log-level $(log_level) --host $(host) --port $(port) --env-file $(env)
endif
setup_devcontainer:
make init
make build_frontend
poetry run langflow --path src/frontend/build
setup_env:
@sh ./scripts/setup/update_poetry.sh 1.8.2
@sh ./scripts/setup/setup_env.sh
frontend:
make install_frontend
make run_frontend
@@ -72,38 +129,67 @@ frontendc:
make run_frontend
install_backend:
poetry install --extras deploy
@echo 'Setting up the environment'
@make setup_env
@echo 'Installing backend dependencies'
@poetry install --extras deploy
backend:
make install_backend
@-kill -9 `lsof -t -i:7860`
ifeq ($(login),1)
@echo "Running backend without autologin";
poetry run langflow run --backend-only --port 7860 --host 0.0.0.0 --no-open-browser --env-file .env
ifdef login
@echo "Running backend autologin is $(login)";
LANGFLOW_AUTO_LOGIN=$(login) poetry run uvicorn --factory langflow.main:create_app --host 0.0.0.0 --port 7860 --reload --env-file .env --loop asyncio
else
@echo "Running backend with autologin";
LANGFLOW_AUTO_LOGIN=True poetry run langflow run --backend-only --port 7860 --host 0.0.0.0 --no-open-browser --env-file .env
@echo "Running backend respecting the .env file";
poetry run uvicorn --factory langflow.main:create_app --host 0.0.0.0 --port 7860 --reload --env-file .env --loop asyncio
endif
build_and_run:
echo 'Removing dist folder'
@echo 'Removing dist folder'
@make setup_env
rm -rf dist
make build && poetry run pip install dist/*.tar.gz && poetry run langflow run
rm -rf src/backend/base/dist
make build
poetry run pip install dist/*.tar.gz
poetry run langflow run
build_and_install:
echo 'Removing dist folder'
@echo 'Removing dist folder'
rm -rf dist
make build && poetry run pip install dist/*.tar.gz
rm -rf src/backend/base/dist
make build && poetry run pip install dist/*.whl && pip install src/backend/base/dist/*.whl --force-reinstall
build_frontend:
cd src/frontend && CI='' npm run build
cp -r src/frontend/build src/backend/langflow/frontend
cp -r src/frontend/build src/backend/base/langflow/frontend
build:
make install_frontend
@echo 'Building the project'
@make setup_env
ifdef base
make install_frontendci
make build_frontend
poetry build --format sdist
rm -rf src/backend/langflow/frontend
make build_langflow_base
endif
ifdef main
make build_langflow
endif
build_langflow_base:
cd src/backend/base && poetry build
rm -rf src/backend/base/langflow/frontend
build_langflow_backup:
poetry lock && poetry build
build_langflow:
cd ./scripts && poetry run python update_dependencies.py
poetry lock
poetry build
mv pyproject.toml.bak pyproject.toml
mv poetry.lock.bak poetry.lock
dev:
make install_frontend
@@ -115,10 +201,36 @@ else
docker compose $(if $(debug),-f docker-compose.debug.yml) up
endif
publish:
make build
lock_base:
cd src/backend/base && poetry lock
lock_langflow:
poetry lock
lock:
# Run both in parallel
@echo 'Locking dependencies'
cd src/backend/base && poetry lock
poetry lock
publish_base:
make build_langflow_base
cd src/backend/base && poetry publish
publish_langflow:
make build_langflow
poetry publish
publish:
@echo 'Publishing the project'
ifdef base
-make publish_base
endif
ifdef main
-make publish_langflow
endif
help:
@echo '----'
@echo 'format - run code formatters'


@@ -3,6 +3,7 @@
# ⛓️ Langflow
### Discover a simpler & smarter way to build around Foundation Models
# [![Langflow](https://github.com/logspace-ai/langflow/blob/dev/docs/static/img/new_langflow_demo.gif)](https://www.langflow.org)
# 📦 Installation
@@ -38,11 +39,9 @@ Once you're done, you can export your flow as a JSON file.
Load the flow with:
```python
from langflow import load_flow_from_json
from langflow.load import run_flow_from_json
flow = load_flow_from_json("path/to/flow.json")
# Now you can use it
flow("Hey, have you heard of Langflow?")
results = run_flow_from_json("path/to/flow.json", input_value="Hello, World!")
```
# 🖥️ Command Line Interface (CLI)


@@ -23,7 +23,7 @@ ENV PYTHONUNBUFFERED=1 \
\
# poetry
# https://python-poetry.org/docs/configuration/#using-environment-variables
POETRY_VERSION=1.7.1 \
POETRY_VERSION=1.8.2 \
# make poetry install to this location
POETRY_HOME="/opt/poetry" \
# make poetry create the virtual environment in the project's root


@@ -23,7 +23,7 @@ ENV PYTHONUNBUFFERED=1 \
\
# poetry
# https://python-poetry.org/docs/configuration/#using-environment-variables
POETRY_VERSION=1.7.1 \
POETRY_VERSION=1.8.2 \
# make poetry install to this location
POETRY_HOME="/opt/poetry" \
# make poetry create the virtual environment in the project's root
@@ -62,10 +62,14 @@ RUN apt-get update \
WORKDIR /app
COPY pyproject.toml poetry.lock ./
COPY src ./src
COPY scripts ./scripts
COPY Makefile ./
COPY README.md ./
RUN curl -sSL https://install.python-poetry.org | python3 - && make build
RUN --mount=type=cache,target=/root/.cache \
curl -sSL https://install.python-poetry.org | python3 -
RUN python -m pip install requests && cd ./scripts && python update_dependencies.py
RUN $POETRY_HOME/bin/poetry lock
RUN $POETRY_HOME/bin/poetry build
# Final stage for the application
FROM python-base as final


@@ -0,0 +1,91 @@
# syntax=docker/dockerfile:1
# Keep this syntax directive! It's used to enable Docker BuildKit
# Based on https://github.com/python-poetry/poetry/discussions/1879?sort=top#discussioncomment-216865
# but I try to keep it updated (see history)
################################
# PYTHON-BASE
# Sets up all our shared environment variables
################################
FROM python:3.10-slim as python-base
# python
ENV PYTHONUNBUFFERED=1 \
# prevents python creating .pyc files
PYTHONDONTWRITEBYTECODE=1 \
\
# pip
PIP_DISABLE_PIP_VERSION_CHECK=on \
PIP_DEFAULT_TIMEOUT=100 \
\
# poetry
# https://python-poetry.org/docs/configuration/#using-environment-variables
POETRY_VERSION=1.8.2 \
# make poetry install to this location
POETRY_HOME="/opt/poetry" \
# make poetry create the virtual environment in the project's root
# it gets named `.venv`
POETRY_VIRTUALENVS_IN_PROJECT=true \
# do not ask any interactive question
POETRY_NO_INTERACTION=1 \
\
# paths
# this is where our requirements + virtual environment will live
PYSETUP_PATH="/opt/pysetup" \
VENV_PATH="/opt/pysetup/.venv"
# prepend poetry and venv to path
ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"
################################
# BUILDER-BASE
# Used to build deps + create our virtual environment
################################
FROM python-base as builder-base
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
# deps for installing poetry
curl \
# deps for building python deps
build-essential \
# npm
npm \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN --mount=type=cache,target=/root/.cache \
curl -sSL https://install.python-poetry.org | python3 -
# Now we need to copy the entire project into the image
COPY pyproject.toml poetry.lock ./
COPY src/frontend/package.json /tmp/package.json
RUN cd /tmp && npm install
WORKDIR /app
COPY src/frontend ./src/frontend
RUN rm -rf src/frontend/node_modules
RUN cp -a /tmp/node_modules /app/src/frontend
COPY scripts ./scripts
COPY Makefile ./
COPY README.md ./
RUN cd src/frontend && npm run build
COPY src/backend ./src/backend
RUN cp -r src/frontend/build src/backend/base/langflow/frontend
RUN rm -rf src/backend/base/dist
RUN cd src/backend/base && $POETRY_HOME/bin/poetry build --format sdist
# Final stage for the application
FROM python-base as final
# Copy virtual environment and built .tar.gz from builder base
COPY --from=builder-base /app/src/backend/base/dist/*.tar.gz ./
# Install the package from the .tar.gz
RUN pip install *.tar.gz
WORKDIR /app
CMD ["python", "-m", "langflow", "run", "--host", "0.0.0.0", "--port", "7860"]


@@ -23,7 +23,7 @@ ENV PYTHONUNBUFFERED=1 \
\
# poetry
# https://python-poetry.org/docs/configuration/#using-environment-variables
POETRY_VERSION=1.5.1 \
POETRY_VERSION=1.8.2 \
# make poetry install to this location
POETRY_HOME="/opt/poetry" \
# make poetry create the virtual environment in the project's root


@@ -70,7 +70,6 @@ The CustomComponent class serves as the foundation for creating custom component
| Key | Description |
| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| _`field_type: str`_ | The type of the field (can be any of the types supported by the _`build`_ method). |
| _`is_list: bool`_ | If the field can be a list of values, meaning that the user can manually add more inputs to the same field. |
| _`options: List[str]`_ | When defined, the field becomes a dropdown menu where a list of strings defines the options to be displayed. If the _`value`_ attribute is set to one of the options, that option becomes default. For this parameter to work, _`field_type`_ should invariably be _`str`_. |
| _`multiline: bool`_ | Defines if a string field opens a text editor. Useful for longer texts. |
@@ -78,20 +77,20 @@ The CustomComponent class serves as the foundation for creating custom component
| _`display_name: str`_ | Defines the name of the field. |
| _`advanced: bool`_ | Hide the field in the canvas view (displayed component settings only). Useful when a field is for advanced users. |
| _`password: bool`_ | To mask the input text. Useful to hide sensitive text (e.g. API keys). |
| _`required: bool`_ | Makes the field required. |
| _`required: bool`_ | This is determined automatically but can be used to override the default behavior. |
| _`info: str`_ | Adds a tooltip to the field. |
| _`file_types: List[str]`_ | This is a requirement if the _`field_type`_ is _file_. Defines which file types will be accepted. For example, _json_, _yaml_ or _yml_. |
| _`range_spec: langflow.field_typing.RangeSpec`_ | This is a requirement if the _`field_type`_ is _`float`_. Defines the range of values accepted and the step size. If none is defined, the default is _`[-1, 1, 0.1]`_. |
| _`title_case: bool`_ | Formats the name of the field when _`display_name`_ is not defined. Set it to False to keep the name as you set it in the _`build`_ method. |
| _`refresh_button: bool`_ | If set to True a button will appear to the right of the field, and when clicked, it will call the _`update_build_config`_ method which takes in the _`build_config`_, the name of the field (_`field_name`_) and the latest value of the field (_`field_value`_). This is useful when you want to update the _`build_config`_ based on the value of the field. |
| _`real_time_refresh: bool`_ | If set to True, the _`update_build_config`_ method will be called every time the field value changes. |
| _`field_type: str`_ | You should never define this key. It is automatically set based on the type hint of the _`build`_ method. |
<Admonition type="info" label="Tip">
Keys _`options`_ and _`value`_ can receive a method or function that returns a list of strings or a string, respectively. This is useful when you want to dynamically generate the options or the default value of a field. A refresh button will appear next to the field in the component, allowing the user to update the options or the default value.
</Admonition>
<Admonition type="info" label="Tip">
By using the _`update_build_config`_ method, you can update the _`build_config`_ in whatever way you want based on the value of the field or not.
</Admonition>
- The CustomComponent class also provides helpful methods for specific tasks (e.g., to load and use other flows from the Langflow platform):


@@ -0,0 +1,87 @@
import Admonition from '@theme/Admonition';
# Data
### API Request
This component makes HTTP requests to the specified URLs.
**Params**
- **URLs:** URLs to make requests to.
- **Method:** The HTTP method to use.
- **Headers:** The headers to send with the request.
- **Body:** The body to send with the request (for POST, PATCH, PUT).
- **Timeout:** The timeout to use for the request.
<Admonition type="tip" title="Tip">
<p>
Use this component to make HTTP requests to external APIs or services and retrieve data.
</p>
<p>
Ensure that you provide valid URLs and configure the method, headers, body, and timeout appropriately.
</p>
</Admonition>
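Conceptually, the params above map onto the keyword arguments of a plain HTTP client. Here is a minimal, hypothetical sketch of how they might be assembled (not the component's actual code):

```python
import json
from typing import Optional

def build_request(url: str, method: str = "GET",
                  headers: Optional[dict] = None,
                  body: Optional[dict] = None,
                  timeout: float = 5.0) -> dict:
    """Assemble the keyword arguments a client like `requests` would receive."""
    kwargs = {"url": url, "timeout": timeout, "headers": headers or {}}
    # Only methods that carry a payload get a body.
    if method.upper() in {"POST", "PATCH", "PUT"} and body is not None:
        kwargs["data"] = json.dumps(body)
    return kwargs
```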
---
### Directory
This component recursively loads files from a directory.
**Params**
- **Path:** The path to the directory.
- **Types:** File types to load. Leave empty to load all types.
- **Depth:** Depth to search for files.
- **Max Concurrency:** The maximum number of concurrent file loading operations.
- **Load Hidden:** If true, hidden files will be loaded.
- **Recursive:** If true, the search will be recursive.
- **Silent Errors:** If true, errors will not raise an exception.
- **Use Multithreading:** If true, use multithreading for loading files.
<Admonition type="tip" title="Tip">
<p>
Use this component to load files from a directory, such as text files, JSON files, etc.
</p>
<p>
Ensure that you provide the correct path to the directory and configure other parameters as needed.
</p>
</Admonition>
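The traversal described above can be sketched in a few lines. This is a hypothetical helper illustrating the Path, Types, Depth, and Load Hidden params; the component's real implementation may differ:

```python
from pathlib import Path
from typing import List, Optional

def list_files(path: str, types: Optional[List[str]] = None,
               depth: int = 2, load_hidden: bool = False) -> List[Path]:
    """Collect files under `path`, filtering by suffix, depth, and hidden status."""
    root = Path(path)
    results = []
    for p in root.rglob("*"):
        if not p.is_file():
            continue
        # Depth 0 means files directly inside `path`.
        if len(p.relative_to(root).parts) - 1 > depth:
            continue
        if not load_hidden and p.name.startswith("."):
            continue
        if types and p.suffix.lstrip(".") not in types:
            continue
        results.append(p)
    return sorted(results)
```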
---
### File
This component loads a generic file.
**Params**
- **Path:** The path to the file.
- **Silent Errors:** If true, errors will not raise an exception.
<Admonition type="tip" title="Tip">
<p>
Use this component to load a generic file, such as a text file, JSON file, etc.
</p>
<p>
Ensure that you provide the correct path to the file and configure other parameters as needed.
</p>
</Admonition>
---
### URL
This component fetches content from one or more URLs.
**Params**
- **URLs:** The URLs from which content will be fetched.
<Admonition type="tip" title="Tip">
<p>
Ensure that you provide valid URLs and configure other parameters as needed.
</p>
</Admonition>


@@ -2,19 +2,7 @@ import Admonition from "@theme/Admonition";
# Embeddings
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may
contain some rough edges. Share your feedback or report issues to help us
improve! 🛠️📝
</p>
</Admonition>
Embeddings are vector representations of text that capture the semantic meaning of the text. They are created using text embedding models and allow us to think about the text in a vector space, enabling us to perform tasks like semantic search, where we look for pieces of text that are most similar in the vector space.
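For intuition, semantic search over embeddings reduces to comparing vectors, most commonly by cosine similarity. A minimal sketch (not Langflow code):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```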
---
### BedrockEmbeddings
### Amazon Bedrock Embeddings
Used to load [Amazon Bedrock's](https://aws.amazon.com/bedrock/) embedding models.
@@ -30,7 +18,7 @@ Used to load [Amazon Bedrock's](https://aws.amazon.com/bedrock/) embedding mo
---
### CohereEmbeddings
### Cohere Embeddings
Used to load [Cohere's](https://cohere.com/) embedding models.
@@ -44,57 +32,93 @@ Used to load [Cohere's](https://cohere.com/) embedding models.
---
### HuggingFaceEmbeddings
### Azure OpenAI Embeddings
Generate embeddings using Azure OpenAI models.
**Params**
- **Azure Endpoint:** Your Azure endpoint, including the resource. Example: `https://example-resource.azure.openai.com/`
- **Deployment Name:** The name of the deployment.
- **API Version:** The API version to use. (Options: 2022-12-01, 2023-03-15-preview, 2023-05-15, 2023-06-01-preview, 2023-07-01-preview, 2023-08-01-preview)
- **API Key:** The API key to access the Azure OpenAI service.
---
### Hugging Face API Embeddings
Generate embeddings using Hugging Face Inference API models.
**Params**
- **API Key:** API key for accessing the Hugging Face Inference API. (Type: str)
- **API URL:** URL of the Hugging Face Inference API. (Default: http://localhost:8080)
- **Model Name:** Name of the model to use. (Default: BAAI/bge-large-en-v1.5)
- **Cache Folder:** Folder path to cache Hugging Face models. (Advanced)
- **Encode Kwargs:** Additional arguments for the encoding process. (Type: dict, Advanced)
- **Model Kwargs:** Additional arguments for the model. (Type: dict, Advanced)
- **Multi Process:** Whether to use multiple processes. (Default: False, Advanced)
---
### Hugging Face Embeddings
Used to load [Hugging Face's](https://huggingface.co) embedding models.
**Params**
- **cache_folder:** Used to specify the folder where the embeddings will be cached. When embeddings are computed for a text, they can be stored in the cache folder so that they can be reused later without the need to recompute them. This can improve the performance of the application by avoiding redundant computations.
- **encode_kwargs:** Used to pass additional keyword arguments to the encoding method of the underlying HuggingFace model. These keyword arguments can be used to customize the encoding process, such as specifying the maximum length of the input sequence or enabling truncation or padding.
- **model_kwargs:** Used to customize the behavior of the model, such as specifying the model architecture, the tokenizer, or any other model-specific configuration options. By using `model_kwargs`, the user can configure the HuggingFace model according to specific needs and preferences.
- **model_name:** Used to specify the name or identifier of the HuggingFace model that will be used for generating embeddings. It allows users to choose a specific pre-trained model from the Hugging Face model hub — defaults to `sentence-transformers/all-mpnet-base-v2`.
- **Cache Folder:** Folder path to cache HuggingFace models.
- **Encode Kwargs:** Additional arguments for the encoding process. (Type: dict)
- **Model Kwargs:** Additional arguments for the model. (Type: dict)
- **Model Name:** Name of the HuggingFace model to use. (Default: sentence-transformers/all-mpnet-base-v2)
- **Multi Process:** Whether to use multiple processes. (Default: False)
---
### OpenAIEmbeddings
### Ollama Embeddings
Generate embeddings using Ollama models.
**Params**
- **Ollama Model:** Name of the Ollama model to use. (Default: llama2)
- **Ollama Base URL:** Base URL of the Ollama API. (Default: http://localhost:11434)
- **Model Temperature:** Temperature parameter for the model. (Type: float)
---
### OpenAI Embeddings
Used to load [OpenAI's](https://openai.com/) embedding models.
**Params**
- **chunk_size:** Determines the maximum size of each chunk of text that is processed for embedding. If any of the incoming text chunks exceeds `chunk_size` characters, it will be split into multiple chunks of size `chunk_size` or less before being embedded — defaults to `1000`.
- **deployment:** Used to specify the deployment name or identifier of the text embedding model. It allows the user to choose a specific deployment of the model to use for embedding. When the deployment is provided, this can be useful when the user has multiple deployments of the same model with different configurations or versions — defaults to `text-embedding-ada-002`.
- **embedding_ctx_length:** This parameter determines the maximum context length for the text embedding model. It specifies the number of tokens that the model considers when generating embeddings for a piece of text — defaults to `8191` (this means that the model will consider up to 8191 tokens when generating embeddings).
- **max_retries:** Determines the maximum number of times to retry a request if the model provider returns an error from their API — defaults to `6`.
- **model:** Defines which pre-trained text embedding model to use — defaults to `text-embedding-ada-002`.
- **openai_api_base:** Refers to the base URL for the Azure OpenAI resource. It is used to configure the API to connect to the Azure OpenAI service. The base URL can be found in the Azure portal under the user Azure OpenAI resource.
- **openai_api_key:** Is used to authenticate and authorize access to the OpenAI service.
- **openai_api_type:** Is used to specify the type of OpenAI API being used, either the regular OpenAI API or the Azure OpenAI API. This parameter allows the `OpenAIEmbeddings` class to connect to the appropriate API service.
- **openai_api_version:** Is used to specify the version of the OpenAI API being used. This parameter allows the `OpenAIEmbeddings` class to connect to the appropriate version of the OpenAI API service.
- **openai_organization:** Is used to specify the organization associated with the OpenAI API key. If not provided, the default organization associated with the API key will be used.
- **openai_proxy:** Proxy enables better budgeting and cost management for making OpenAI API calls, including more transparency into pricing.
- **request_timeout:** Used to specify the maximum amount of time, in milliseconds, to wait for a response from the OpenAI API when generating embeddings for a given text.
- **tiktoken_model_name:** Used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name.
- **OpenAI API Key:** The API key to use for accessing the OpenAI API. (Type: str)
- **Default Headers:** Default headers for the HTTP requests. (Type: Dict[str, str], Optional)
- **Default Query:** Default query parameters for the HTTP requests. (Type: NestedDict, Optional)
- **Allowed Special:** Special tokens allowed for processing. (Type: List[str], Default: [])
- **Disallowed Special:** Special tokens disallowed for processing. (Type: List[str], Default: ["all"])
- **Chunk Size:** Chunk size for processing. (Type: int, Default: 1000)
- **Client:** HTTP client for making requests. (Type: Any, Optional)
- **Deployment:** Deployment name for the model. (Type: str, Default: "text-embedding-3-small")
- **Embedding Context Length:** Length of embedding context. (Type: int, Default: 8191)
- **Max Retries:** Maximum number of retries for failed requests. (Type: int, Default: 6)
- **Model:** Name of the model to use. (Type: str, Default: "text-embedding-3-small")
- **Model Kwargs:** Additional keyword arguments for the model. (Type: NestedDict, Optional)
- **OpenAI API Base:** Base URL of the OpenAI API. (Type: str, Optional)
- **OpenAI API Type:** Type of the OpenAI API. (Type: str, Optional)
- **OpenAI API Version:** Version of the OpenAI API. (Type: str, Optional)
- **OpenAI Organization:** Organization associated with the API key. (Type: str, Optional)
- **OpenAI Proxy:** Proxy server for the requests. (Type: str, Optional)
- **Request Timeout:** Timeout for the HTTP requests. (Type: float, Optional)
- **Show Progress Bar:** Whether to show a progress bar for processing. (Type: bool, Default: False)
- **Skip Empty:** Whether to skip empty inputs. (Type: bool, Default: False)
- **TikToken Enable:** Whether to enable TikToken. (Type: bool, Default: True)
- **TikToken Model Name:** Name of the TikToken model. (Type: str, Optional)
---
### VertexAIEmbeddings
### VertexAI Embeddings
Wrapper around [Google Vertex AI](https://cloud.google.com/vertex-ai) [Embeddings API](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings).
@@ -113,11 +137,3 @@ Vertex AI is a cloud computing platform offered by Google Cloud Platform (GCP).
- **top_p:** Tokens are selected from most probable to least until the sum of their probabilities equals the _`top_p`_ value. Defaults to `0.95`.
- **tuned_model_name:** The name of a tuned model. If provided, model_name is ignored.
- **verbose:** This parameter controls the level of detail in the output of the chain. When set to True, it will print out some internal states of the chain while it is being run, which can help debug and understand the chain's behavior. Defaults to `False`.
### OllamaEmbeddings
Used to load [Ollama's](https://ollama.ai/) embedding models. Wrapper around LangChain's [Ollama API](https://python.langchain.com/docs/integrations/text_embedding/ollama).
- **model:** The name of the Ollama model to use. Defaults to `llama2`.
- **base_url:** The base URL for the Ollama API. Defaults to `http://localhost:11434`.
- **temperature:** Tunes the degree of randomness in text generations. Should be a non-negative value. Defaults to `0`.


@@ -0,0 +1,250 @@
import Admonition from '@theme/Admonition';
# Experimental
Experimental components are currently in a beta phase. This means they have undergone initial development and testing but have not yet reached a stable or fully supported status. Users are encouraged to explore these components, provide feedback, and report any issues encountered during their usage.
### Clear Message History Component
This component is designed to clear the message history associated with a specific session ID.
**Beta:** This component is currently in beta.
**Parameters**
- **Session ID:**
- **Display Name:** Session ID
- **Info:** The session ID to clear the message history.
**Usage**
To use this component, provide the session ID for which you want to clear the message history.
---
### Extract Key From Record
This component extracts specified keys from a record.
**Parameters**
- **Record:**
- **Display Name:** Record
- **Info:** The record from which to extract the keys.
- **Keys:**
- **Display Name:** Keys
- **Info:** The keys to extract from the record.
- **Silent Errors:**
- **Display Name:** Silent Errors
- **Info:** If True, errors will not be raised.
- **Advanced:** True
**Usage**
To use this component, provide the record from which you want to extract keys, specify the keys to extract, and optionally set whether to raise errors for missing keys.
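The extraction logic can be sketched as follows (a hypothetical helper, assuming the record's data behaves like a dict):

```python
def extract_keys(record: dict, keys, silent_errors: bool = False) -> dict:
    """Return only the requested keys; missing keys raise unless silent_errors."""
    out = {}
    for key in keys:
        if key in record:
            out[key] = record[key]
        elif not silent_errors:
            raise KeyError(key)
    return out
```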
---
### Flow as Tool
This component constructs a Tool from a function that runs the loaded Flow.
**Parameters**
- **Flow Name:**
- **Display Name:** Flow Name
- **Info:** The name of the flow to run.
- **Options:** List of available flow names.
- **Real-time Refresh:** True
- **Refresh Button:** True
- **Name:**
- **Display Name:** Name
- **Description:** The name of the tool.
- **Description:**
- **Display Name:** Description
- **Description:** The description of the tool.
- **Return Direct:**
- **Display Name:** Return Direct
- **Description:** Return the result directly from the Tool.
- **Advanced:** True
**Usage**
To use this component, select the desired flow from the available options, provide a name and description for the tool, and specify whether to return the result directly from the tool.
---
### Listen
This component listens for a notification.
**Parameters**
- **Name:**
- **Display Name:** Name
- **Info:** The name of the notification to listen for.
**Usage**
To use this component, specify the name of the notification to listen for.
---
### List Flows
This component lists all available flows.
**Usage**
To use this component, simply call it without any parameters.
---
### Merge Records
This component merges a list of records into a single record.
**Parameters**
- **Records:**
- **Display Name:** Records
**Usage**
To use this component, provide a list of records to merge.
---
### Notify
This component generates a notification to the Get Notified component.
**Parameters**
- **Name:**
- **Display Name:** Name
- **Info:** The name of the notification.
- **Record:**
- **Display Name:** Record
- **Info:** The record to store.
- **Append:**
- **Display Name:** Append
- **Info:** If True, the record will be appended to the notification.
**Usage**
To use this component, specify the name of the notification, provide an optional record to store, and indicate whether to append the record to the notification.
---
### Run Flow
This component runs a flow.
**Parameters**
- **Input Value:**
- **Display Name:** Input Value
- **Multiline:** True
- **Flow Name:**
- **Display Name:** Flow Name
- **Info:** The name of the flow to run.
- **Options:** List of available flow names.
- **Refresh Button:** True
- **Tweaks:**
- **Display Name:** Tweaks
- **Info:** Tweaks to apply to the flow.
**Usage**
To use this component, provide the input value, specify the flow name to run, and optionally provide tweaks to apply to the flow.
---
### Runnable Executor
This component executes a runnable.
**Parameters**
- **Input Key:**
- **Display Name:** Input Key
- **Info:** The key to use for the input.
- **Inputs:**
- **Display Name:** Inputs
- **Info:** The inputs to pass to the runnable.
- **Runnable:**
- **Display Name:** Runnable
- **Info:** The runnable to execute.
- **Output Key:**
- **Display Name:** Output Key
- **Info:** The key to use for the output.
**Usage**
To use this component, specify the input key, provide the inputs to pass to the runnable, select the runnable to execute, and optionally specify the output key.
---
### SQL Executor
This component executes an SQL query.
**Parameters**
- **Database URL:**
- **Display Name:** Database URL
- **Info:** The URL of the database.
- **Include Columns:**
- **Display Name:** Include Columns
- **Info:** Include columns in the result.
- **Passthrough:**
- **Display Name:** Passthrough
- **Info:** If an error occurs, return the query instead of raising an exception.
- **Add Error:**
- **Display Name:** Add Error
- **Info:** Add the error to the result.
**Usage**
To use this component, provide the SQL query, specify the database URL, and optionally configure include columns, passthrough, and add error settings.
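A minimal sketch of the passthrough and add-error behavior, using SQLite purely for illustration (the component itself accepts any database URL):

```python
import sqlite3

def execute_sql(query: str, database_url: str = ":memory:",
                passthrough: bool = False, add_error: bool = False):
    """Run `query`; on failure, either raise or return the query text itself."""
    try:
        with sqlite3.connect(database_url) as conn:
            return conn.execute(query).fetchall()
    except sqlite3.Error as exc:
        if not passthrough:
            raise
        result = query
        if add_error:
            result += f"\nError: {exc}"
        return result
```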
---
### SubFlow
This component dynamically generates a component from a flow. The output is a list of records with keys 'result' and 'message'.
**Parameters**
- **Input Value:**
- **Display Name:** Input Value
- **Multiline:** True
- **Flow Name:**
- **Display Name:** Flow Name
- **Info:** The name of the flow to run.
- **Options:** List of available flow names.
- **Real Time Refresh:** True
- **Refresh Button:** True
- **Tweaks:**
- **Display Name:** Tweaks
- **Info:** Tweaks to apply to the flow.
**Usage**
To use this component, specify the flow name and provide any necessary tweaks to apply to the flow.


@@ -0,0 +1,127 @@
import Admonition from '@theme/Admonition';
# Helpers
### Chat Memory
This component retrieves stored chat messages given a specific Session ID.
**Params**
- **Sender Type:** Choose the sender type from options like "Machine", "User", or "Machine and User".
- **Sender Name:** (Optional) The name of the sender.
- **Number of Messages:** Number of messages to retrieve.
- **Session ID:** The Session ID of the chat history.
- **Order:** Choose the order of the messages, either "Ascending" or "Descending".
- **Record Template:** (Optional) Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.
---
### Combine Text
This component concatenates two text sources into a single text chunk using a specified delimiter.
**Params**
- **First Text:** The first text input to concatenate.
- **Second Text:** The second text input to concatenate.
- **Delimiter:** A string used to separate the two text inputs. Defaults to a whitespace.
---
### Create Record
This component dynamically creates a Record with a specified number of fields.
**Params**
- **Number of Fields:** Number of fields to be added to the record.
- **Text Key:** Key to be used as text.
---
### Custom Component
Use this component as a template to create your own custom component.
**Params**
- **Parameter:** Describe the purpose of this parameter.
<Admonition type="info" title="Info">
<p>
Customize the <code>build_config</code> and <code>build</code> methods according to your requirements.
</p>
</Admonition>
Learn more about [Custom Component](http://docs.langflow.org/components/custom).
---
### Documents to Records
Convert LangChain Documents into Records.
**Parameters**
- **Documents:** Documents to be converted into Records.
---
### ID Generator
Generates a unique ID.
**Parameters**
- **Value:** Unique ID generated.
---
### Message History
Retrieves stored chat messages given a specific Session ID.
**Parameters**
- **Sender Type:** Options for the sender type.
- **Sender Name:** Sender name.
- **Number of Messages:** Number of messages to retrieve.
- **Session ID:** Session ID of the chat history.
- **Order:** Order of the messages.
---
### Records to Text
Convert Records into plain text following a specified template.
**Parameters**
- **Records:** The records to convert to text.
- **Template:** The template to use for formatting the records. It can contain the keys `{text}`, `{data}` or any other key in the Record.
---
### Split Text
Split text into chunks of a specified length.
**Parameters**
- **Texts:** Texts to split.
- **Separators:** The characters to split on. Defaults to [" "].
- **Max Chunk Size:** The maximum length (in number of characters) of each chunk.
- **Chunk Overlap:** The amount of character overlap between chunks.
- **Recursive:** Whether to split recursively.
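A non-recursive character-level sketch of chunking with overlap (assumed sliding-window semantics; the component's actual splitter may differ):

```python
def split_text(text: str, max_chunk_size: int, chunk_overlap: int = 0):
    """Slide a window of `max_chunk_size` chars, stepping back `chunk_overlap` each time."""
    if chunk_overlap >= max_chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chunk_size])
        start += max_chunk_size - chunk_overlap
    return chunks
```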
---
### Update Record
Update Record with text-based key/value pairs, similar to updating a Python dictionary.
**Parameters**
- **Record:** The record to update.
- **New Data:** The new data to update the record with.


@@ -0,0 +1,164 @@
import Admonition from "@theme/Admonition";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# Inputs
### Chat Input
This component is designed to get user input from the chat.
**Params**
- **Sender Type:** specifies the sender type. Defaults to _`"User"`_. Options are _`"Machine"`_ and _`"User"`_.
- **Sender Name:** specifies the name of the sender. Defaults to _`"User"`_.
- **Message:** specifies the message text. It is a multiline text input.
- **Session ID:** specifies the session ID of the chat history. If provided, the message will be saved in the Message History.
<Admonition type="note" title="Note">
<p>
If _`As Record`_ is _`true`_ and the _`Message`_ is a _`Record`_, the data
of the _`Record`_ will be updated with the _`Sender`_, _`Sender Name`_, and
_`Session ID`_.
</p>
</Admonition>
When you get it from the sidebar, it will look like the image below; some fields are hidden in the advanced section.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/chat-input.png",
dark: "img/chat-input.png",
}}
style={{ width: "50%", margin: "20px auto" }}
/>
If you expose all its fields, it will look like the image below.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/chat-input-expanded.png",
dark: "img/chat-input-expanded.png",
}}
style={{ width: "40%", margin: "20px auto" }}
/>
One key capability of the Chat Input component is how it transforms the Interaction Panel into a chat window. This feature is particularly useful for scenarios where user input is required to initiate or influence the flow.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/interaction-panel-with-chat-input.png",
dark: "img/interaction-panel-with-chat-input.png",
}}
style={{ width: "50%", margin: "20px auto" }}
/>
---
### Prompt
Create a prompt template with dynamic variables. This is a very useful component for structuring prompts and passing dynamic data to a language model.
**Parameters**
- **Template:** the template for the prompt. This field allows you to create other fields dynamically by using curly brackets `{}`. For example, if you have a template like this: _`"Hello {name}, how are you?"`_, a new field called _`name`_ will be created.
<Admonition type="note" title="Note">
<p>
Prompt variables can be created with any chosen name inside curly brackets,
e.g. `{variable_name}`
</p>
</Admonition>
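The dynamic field creation described above can be sketched as a simple placeholder scan (a hypothetical helper, not Langflow's actual parser):

```python
import re

def extract_variables(template: str):
    """Find `{name}`-style placeholders; each one becomes a dynamically created field."""
    return re.findall(r"\{(\w+)\}", template)
```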
Here is how it looks when you get it from the sidebar.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/prompt.png",
dark: "img/prompt.png",
}}
style={{ width: "50%", margin: "20px auto" }}
/>
And here is how it looks when you add a Template with the value _`Hello {name}, how are you?`_.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/prompt-with-template.png",
dark: "img/prompt-with-template.png",
}}
style={{ width: "50%", margin: "20px auto" }}
/>
---
### Text Input
This component is designed for simple text input, allowing users to pass textual data to subsequent components in the workflow. It's particularly useful for scenarios where a brief user input is required to initiate or influence the flow.
**Params**
- **Value:** Specifies the text input value. This is where the user can input the text data that will be passed to the next component in the sequence. If no value is provided, it defaults to an empty string.
- **Record Template:** Specifies how a Record should be converted into Text.
<Admonition type="note" title="Note">
<p>
The `TextInput` component serves as a straightforward means for setting Text
input values in the chat window. It ensures that textual data can be
seamlessly passed to subsequent components in the flow.
</p>
</Admonition>
It should look like this when dropped directly from the sidebar.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/text-input.png",
dark: "img/text-input.png",
}}
style={{ width: "50%", margin: "20px auto" }}
/>
And when you expose all its fields, it will look like the image below.
The **Record Template** field is used to specify how a Record should be converted into Text. This is particularly useful when you want to extract specific information from a Record and pass it as text to the next component in the sequence.
For example, if you have a Record with the following structure:
```json
{
"name": "John Doe",
"age": 30,
"email": "johndoe@email.com"
}
```
You can use a template like this: _`"Name: {name}, Age: {age}"`_ to convert the Record into a text string like this: _`"Name: John Doe, Age: 30"`_. If you pass more than one Record, the resulting texts are concatenated with a newline separator.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/text-input-expanded.png",
dark: "img/text-input-expanded.png",
}}
style={{ width: "50%", margin: "20px auto" }}
/>
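The Record-to-Text conversion described above can be sketched in plain Python. This is illustrative only, not Langflow's implementation, and the helper name is made up for the example:

```python
def record_to_text(template: str, records: list[dict]) -> str:
    """Render each record through the template and join the results with newlines."""
    return "\n".join(template.format(**record) for record in records)

records = [
    {"name": "John Doe", "age": 30, "email": "johndoe@email.com"},
    {"name": "Jane Roe", "age": 25, "email": "janeroe@email.com"},
]
print(record_to_text("Name: {name}, Age: {age}", records))
# Name: John Doe, Age: 30
# Name: Jane Roe, Age: 25
```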
The Text Input component lets you add an input field to the Interaction Panel. This is useful because it allows you to define parameters while running and testing your flow.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/interaction-panel-text-input.png",
dark: "img/interaction-panel-text-input.png",
}}
style={{ width: "50%", margin: "20px auto" }}
/>

---
### MessageHistory
This component is designed to retrieve stored messages based on various filters such as sender type, sender name, session ID, and a specific file path where messages are stored. It allows for a flexible retrieval of chat history, providing insights into past interactions.
**Params**
- **Sender Type:** (Optional) Specifies the type of the sender. Options are _`"Machine"`_, _`"User"`_, or _`"Machine and User"`_. Filters the messages by the type of the sender.
- **Sender Name:** (Optional) Specifies the name of the sender. Filters the messages by the name of the sender.
- **Session ID:** (Optional) Specifies the session ID of the chat history. Filters the messages belonging to a specific session.
- **Number of Messages:** Specifies the number of messages to retrieve. Defaults to _`5`_. Determines how many recent messages from the chat history to fetch.
<Admonition type="note" title="Note">
<p>
The component retrieves messages based on the provided criteria, including the specific file path for stored messages. If no specific criteria are provided, it will return the most recent messages up to the specified limit. This component can be used to review past interactions and analyze the flow of conversations.
</p>
</Admonition>
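The retrieval semantics can be sketched as follows. This is an illustrative sketch only, not Langflow's storage code, and the message dictionary keys are assumptions made for the example:

```python
def filter_messages(messages, sender_type=None, sender_name=None,
                    session_id=None, limit=5):
    """Apply the optional filters, then keep the `limit` most recent messages."""
    selected = [
        m for m in messages
        if (sender_type is None or m["sender_type"] == sender_type)
        and (sender_name is None or m["sender_name"] == sender_name)
        and (session_id is None or m["session_id"] == session_id)
    ]
    return selected[-limit:]

history = [
    {"sender_type": "User", "sender_name": "Alice", "session_id": "s1", "text": "Hi"},
    {"sender_type": "Machine", "sender_name": "AI", "session_id": "s1", "text": "Hello!"},
    {"sender_type": "User", "sender_name": "Alice", "session_id": "s2", "text": "New chat"},
]
print(filter_messages(history, session_id="s1"))
```

When no filters are given, the function simply returns the most recent messages up to the limit, mirroring the behavior described in the note above.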
### ConversationBufferMemory
The `ConversationBufferMemory` component is a type of memory system that plainly stores the last few inputs and outputs of a conversation.
### ConversationBufferWindowMemory
`ConversationBufferWindowMemory` is a variation of the `ConversationBufferMemory` that maintains a list of the recent interactions in a conversation. It only keeps the last K interactions in memory, which can be useful for maintaining a sliding window of the most recent interactions without letting the buffer get too large.
**Params**
### ConversationSummaryMemory
The `ConversationSummaryMemory` is a memory component that creates a summary of the conversation over time. It condenses information from the conversation and stores the current summary in memory. It is particularly useful for longer conversations where keeping the entire message history in the prompt would take up too many tokens.
**Params**

import Admonition from '@theme/Admonition';
# Models
### Amazon Bedrock
This component facilitates the generation of text using large language models (LLMs) from Amazon Bedrock.
**Params**
- **Input Value:** Specifies the input text for text generation.
- **System Message (Optional):** A system message to pass to the model.
- **Model ID (Optional):** Specifies the model ID to be used for text generation. Defaults to _`"anthropic.claude-instant-v1"`_. Available options include:
- _`"ai21.j2-grande-instruct"`_
- _`"ai21.j2-jumbo-instruct"`_
- _`"ai21.j2-mid"`_
- _`"ai21.j2-mid-v1"`_
- _`"ai21.j2-ultra"`_
- _`"ai21.j2-ultra-v1"`_
- _`"anthropic.claude-instant-v1"`_
- _`"anthropic.claude-v1"`_
- _`"anthropic.claude-v2"`_
- _`"cohere.command-text-v14"`_
- **Credentials Profile Name (Optional):** Specifies the name of the credentials profile.
- **Region Name (Optional):** Specifies the region name.
- **Model Kwargs (Optional):** Additional keyword arguments for the model.
- **Endpoint URL (Optional):** Specifies the endpoint URL.
- **Cache (Optional):** Specifies whether to cache the response.
- **Stream (Optional):** Specifies whether to stream the response from the model. Defaults to _`False`_.
<Admonition type="note" title="Note">
<p>
Ensure that necessary credentials are provided to connect to the Amazon Bedrock API. If connection fails, a ValueError will be raised.
</p>
</Admonition>
---
### Anthropic
This component allows the generation of text using Anthropic chat and completion large language models.
**Params**
- **Model Name:** Specifies the name of the Anthropic model to be used for text generation. Available options include:
- _`"claude-2.1"`_
- _`"claude-2.0"`_
- _`"claude-instant-1.2"`_
- _`"claude-instant-1"`_
- **Anthropic API Key:** Your Anthropic API key.
- **Max Tokens (Optional):** Specifies the maximum number of tokens to generate. Defaults to _`256`_.
- **Temperature (Optional):** Specifies the sampling temperature. Defaults to _`0.7`_.
- **API Endpoint (Optional):** Specifies the endpoint of the Anthropic API. Defaults to _`"https://api.anthropic.com"`_ if not specified.
- **Input Value:** Specifies the input text for text generation.
- **Stream (Optional):** Specifies whether to stream the response from the model. Defaults to _`False`_.
- **System Message (Optional):** A system message to pass to the model.
For detailed documentation and integration guides, please refer to the [Anthropic Component Documentation](https://python.langchain.com/docs/integrations/chat/anthropic).
---
### Azure OpenAI
This component allows the generation of text using large language models (LLMs) from Azure OpenAI.
**Params**
- **Model Name:** Specifies the name of the Azure OpenAI model to be used for text generation. Available options include:
- _`"gpt-35-turbo"`_
- _`"gpt-35-turbo-16k"`_
- _`"gpt-35-turbo-instruct"`_
- _`"gpt-4"`_
- _`"gpt-4-32k"`_
- _`"gpt-4-vision"`_
- **Azure Endpoint:** Your Azure endpoint, including the resource. Example: `https://example-resource.azure.openai.com/`.
- **Deployment Name:** Specifies the name of the deployment.
- **API Version:** Specifies the version of the Azure OpenAI API to be used. Available options include:
- _`"2023-03-15-preview"`_
- _`"2023-05-15"`_
- _`"2023-06-01-preview"`_
- _`"2023-07-01-preview"`_
- _`"2023-08-01-preview"`_
- _`"2023-09-01-preview"`_
- _`"2023-12-01-preview"`_
- **API Key:** Your Azure OpenAI API key.
- **Temperature (Optional):** Specifies the sampling temperature. Defaults to _`0.7`_.
- **Max Tokens (Optional):** Specifies the maximum number of tokens to generate. Defaults to _`1000`_.
- **Input Value:** Specifies the input text for text generation.
- **Stream (Optional):** Specifies whether to stream the response from the model. Defaults to _`False`_.
- **System Message (Optional):** A system message to pass to the model.
For detailed documentation and integration guides, please refer to the [Azure OpenAI Component Documentation](https://python.langchain.com/docs/integrations/llms/azure_openai).
---
### Cohere
This component enables text generation using Cohere large language models.
**Params**
- **Cohere API Key:** Your Cohere API key.
- **Max Tokens (Optional):** Specifies the maximum number of tokens to generate. Defaults to _`256`_.
- **Temperature (Optional):** Specifies the sampling temperature. Defaults to _`0.75`_.
- **Input Value:** Specifies the input text for text generation.
- **Stream (Optional):** Specifies whether to stream the response from the model. Defaults to _`False`_.
- **System Message (Optional):** A system message to pass to the model.
---
### Google Generative AI
This component enables text generation using Google Generative AI.
**Params**
- **Google API Key:** Your Google API key to use for the Google Generative AI.
- **Model:** The name of the model to use. Supported examples are _`"gemini-pro"`_ and _`"gemini-pro-vision"`_.
- **Max Output Tokens (Optional):** The maximum number of tokens to generate.
- **Temperature:** Run inference with this temperature. Must be in the closed interval [0.0, 1.0].
- **Top K (Optional):** Decode using top-k sampling: consider the set of top_k most probable tokens. Must be positive.
- **Top P (Optional):** The maximum cumulative probability of tokens to consider when sampling.
- **N (Optional):** Number of chat completions to generate for each prompt. Note that the API may not return the full n completions if duplicates are generated.
- **Input Value:** The input to the model.
- **Stream (Optional):** Specifies whether to stream the response from the model. Defaults to _`False`_.
- **System Message (Optional):** A system message to pass to the model.
---
### Hugging Face API
This component facilitates text generation using LLM models from the Hugging Face Inference API.
**Params**
- **Endpoint URL:** The URL of the Hugging Face Inference API endpoint. Should be provided along with necessary authentication credentials.
- **Task:** Specifies the task for text generation. Options include _`"text2text-generation"`_, _`"text-generation"`_, and _`"summarization"`_.
- **API Token:** The API token required for authentication with the Hugging Face Hub.
- **Model Keyword Arguments (Optional):** Additional keyword arguments for the model. Should be provided as a Python dictionary.
- **Input Value:** The input text for text generation.
- **Stream (Optional):** Specifies whether to stream the response from the model. Defaults to _`False`_.
- **System Message (Optional):** A system message to pass to the model.
---
### LiteLLM Model
Generates text using the `LiteLLM` collection of large language models.
**Parameters**
- **Model name:** The name of the model to use. For example, `gpt-3.5-turbo`. (Type: str)
- **API key:** The API key to use for accessing the provider's API. (Type: str, Optional)
- **Provider:** The provider of the API key. (Type: str, Choices: "OpenAI", "Azure", "Anthropic", "Replicate", "Cohere", "OpenRouter")
- **Temperature:** Controls the randomness of the text generation. (Type: float, Default: 0.7)
- **Model kwargs:** Additional keyword arguments for the model. (Type: Dict, Optional)
- **Top p:** Filter responses to keep the cumulative probability within the top p tokens. (Type: float, Optional)
- **Top k:** Filter responses to only include the top k tokens. (Type: int, Optional)
- **N:** Number of chat completions to generate for each prompt. (Type: int, Default: 1)
- **Max tokens:** The maximum number of tokens to generate for each chat completion. (Type: int, Default: 256)
- **Max retries:** Maximum number of retries for failed requests. (Type: int, Default: 6)
- **Verbose:** Whether to print verbose output. (Type: bool, Default: False)
- **Input:** The input prompt for text generation. (Type: str)
- **Stream:** Whether to stream the output. (Type: bool, Default: False)
- **System message:** System message to pass to the model. (Type: str, Optional)
---
### Ollama
Generate text using Ollama Local LLMs.
**Parameters**
- **Base URL:** Endpoint of the Ollama API. Defaults to 'http://localhost:11434' if not specified.
- **Model Name:** The model name to use. Refer to [Ollama Library](https://ollama.ai/library) for more models.
- **Temperature:** Controls the creativity of model responses. (Default: 0.8)
- **Cache:** Enable or disable caching. (Default: False)
- **Format:** Specify the format of the output (e.g., json). (Advanced)
- **Metadata:** Metadata to add to the run trace. (Advanced)
- **Mirostat:** Enable/disable Mirostat sampling for controlling perplexity. (Default: Disabled)
- **Mirostat Eta:** Learning rate for Mirostat algorithm. (Default: None) (Advanced)
- **Mirostat Tau:** Controls the balance between coherence and diversity of the output. (Default: None) (Advanced)
- **Context Window Size:** Size of the context window for generating tokens. (Default: None) (Advanced)
- **Number of GPUs:** Number of GPUs to use for computation. (Default: None) (Advanced)
- **Number of Threads:** Number of threads to use during computation. (Default: None) (Advanced)
- **Repeat Last N:** How far back the model looks to prevent repetition. (Default: None) (Advanced)
- **Repeat Penalty:** Penalty for repetitions in generated text. (Default: None) (Advanced)
- **TFS Z:** Tail free sampling value. (Default: None) (Advanced)
- **Timeout:** Timeout for the request stream. (Default: None) (Advanced)
- **Top K:** Limits token selection to top K. (Default: None) (Advanced)
- **Top P:** Works together with top-k. (Default: None) (Advanced)
- **Verbose:** Whether to print out response text.
- **Tags:** Tags to add to the run trace. (Advanced)
- **Stop Tokens:** List of tokens to signal the model to stop generating text. (Advanced)
- **System:** System to use for generating text. (Advanced)
- **Template:** Template to use for generating text. (Advanced)
- **Input:** The input text.
- **Stream:** Whether to stream the response.
- **System Message:** System message to pass to the model. (Advanced)
---
### OpenAI
This component facilitates text generation using OpenAI's models.
**Params**
- **Input Value:** The input text for text generation.
- **Max Tokens (Optional):** The maximum number of tokens to generate. Defaults to _`256`_.
- **Model Kwargs (Optional):** Additional keyword arguments for the model. Should be provided as a nested dictionary.
- **Model Name (Optional):** The name of the model to use. Defaults to _`gpt-4-1106-preview`_. Supported options include: _`gpt-4-turbo-preview`_, _`gpt-4-0125-preview`_, _`gpt-4-1106-preview`_, _`gpt-4-vision-preview`_, _`gpt-3.5-turbo-0125`_, _`gpt-3.5-turbo-1106`_.
- **OpenAI API Base (Optional):** The base URL of the OpenAI API. Defaults to _`https://api.openai.com/v1`_.
- **OpenAI API Key (Optional):** The API key for accessing the OpenAI API.
- **Temperature:** Controls the creativity of model responses. Defaults to _`0.7`_.
- **Stream (Optional):** Specifies whether to stream the response from the model. Defaults to _`False`_.
- **System Message (Optional):** System message to pass to the model.
---
### Qianfan
This component facilitates the generation of text using Baidu Qianfan chat models.
**Params**
- **Model Name:** Specifies the name of the Qianfan chat model to be used for text generation. Available options include:
- _`"ERNIE-Bot"`_
- _`"ERNIE-Bot-turbo"`_
- _`"BLOOMZ-7B"`_
- _`"Llama-2-7b-chat"`_
- _`"Llama-2-13b-chat"`_
- _`"Llama-2-70b-chat"`_
- _`"Qianfan-BLOOMZ-7B-compressed"`_
- _`"Qianfan-Chinese-Llama-2-7B"`_
- _`"ChatGLM2-6B-32K"`_
- _`"AquilaChat-7B"`_
- **Qianfan Ak:** Your Baidu Qianfan access key, obtainable from [here](https://cloud.baidu.com/product/wenxinworkshop).
- **Qianfan Sk:** Your Baidu Qianfan secret key, obtainable from [here](https://cloud.baidu.com/product/wenxinworkshop).
- **Top p (Optional):** Model parameter. Specifies the top-p value. Only supported in ERNIE-Bot and ERNIE-Bot-turbo models. Defaults to _`0.8`_.
- **Temperature (Optional):** Model parameter. Specifies the sampling temperature. Only supported in ERNIE-Bot and ERNIE-Bot-turbo models. Defaults to _`0.95`_.
- **Penalty Score (Optional):** Model parameter. Specifies the penalty score. Only supported in ERNIE-Bot and ERNIE-Bot-turbo models. Defaults to _`1.0`_.
- **Endpoint (Optional):** Endpoint of the Qianfan LLM, required if custom model is used.
- **Input Value:** Specifies the input text for text generation.
- **Stream (Optional):** Specifies whether to stream the response from the model. Defaults to _`False`_.
- **System Message (Optional):** A system message to pass to the model.
---
### Vertex AI
The `ChatVertexAI` is a component for generating text using the Vertex AI Chat large language models API.
**Params**
- **Credentials:** The JSON file containing the credentials for accessing the Vertex AI Chat API.
- **Project:** The name of the project associated with the Vertex AI Chat API.
- **Examples (Optional):** List of examples to provide context for text generation.
- **Location:** The location of the Vertex AI Chat API service. Defaults to _`us-central1`_.
- **Max Output Tokens:** The maximum number of tokens to generate. Defaults to _`128`_.
- **Model Name:** The name of the model to use. Defaults to _`chat-bison`_.
- **Temperature:** Controls the creativity of model responses. Defaults to _`0.0`_.
- **Input Value:** The input text for text generation.
- **Top K:** Limits token selection to top K. Defaults to _`40`_.
- **Top P:** Works together with top-k. Defaults to _`0.95`_.
- **Verbose:** Whether to print out response text. Defaults to _`False`_.
- **Stream (Optional):** Specifies whether to stream the response from the model. Defaults to _`False`_.
- **System Message (Optional):** System message to pass to the model.

import Admonition from '@theme/Admonition';
# Outputs
### Chat Output
This component is designed to send a message to the chat.
**Params**
- **Sender Type:** specifies the sender type. Defaults to _`"Machine"`_. Options are _`"Machine"`_ and _`"User"`_.
- **Sender Name:** specifies the name of the sender. Defaults to _`"AI"`_.
- **Session ID:** specifies the session ID of the chat history. If provided, the message will be saved in the Message History.
- **Message:** specifies the message text.
<Admonition type="note" title="Note">
<p>
If _`As Record`_ is _`true`_ and the _`Message`_ is a _`Record`_, the data of the _`Record`_ will be updated with the _`Sender`_, _`Sender Name`_, and _`Session ID`_.
</p>
</Admonition>
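The record-update behavior described in the note can be sketched as a simple dictionary merge. This is a minimal illustration, not Langflow's implementation, and the key names (`sender`, `sender_name`, `session_id`) are assumptions made for the example:

```python
def apply_chat_metadata(record_data: dict, sender: str,
                        sender_name: str, session_id: str) -> dict:
    """Return a copy of a Record's data updated with the chat metadata fields."""
    updated = dict(record_data)  # copy so the original Record data is untouched
    updated.update(sender=sender, sender_name=sender_name, session_id=session_id)
    return updated

record = {"text": "Hello there"}
print(apply_chat_metadata(record, "Machine", "AI", "session-1"))
```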
### Text Output
This component is designed to display text data to the user. It's particularly useful for scenarios where you don't want to send the text data to the chat, but still want to display it.
**Params**
- **Value:** Specifies the text data to be displayed. This is where the text data to be displayed is provided. If no value is provided, it defaults to an empty string.
<Admonition type="note" title="Note">
<p>
The `TextOutput` component serves as a straightforward means for displaying text data. It ensures that textual data can be inspected at any point in your flow without being sent to the chat.
</p>
</Admonition>

<Admonition type="info">
Once a variable is defined in the prompt template, it becomes a component
input of its own. Check out [Prompt
Customization](../guidelines/prompt-customization) to learn more.
</Admonition>
- **template:** Template used to format an individual request.

**Output**
- **List of Documents:** A list containing the Document with the JSON object.
## Unique ID Generator
Generates a unique identifier (UUID) each time it is invoked, providing a distinct and reliable identifier suitable for a variety of applications.
**Params**
- **Value:** This field displays the generated unique identifier (UUID). The UUID is generated dynamically for each instance of the component, ensuring uniqueness across different uses.
**Output**
- Returns a unique identifier (UUID) as a string. This UUID is generated using Python's `uuid` module, ensuring that each identifier is unique and can be used as a reliable reference in your application.
<Admonition type="note" title="Note">
<p>
The Unique ID Generator is crucial for scenarios requiring distinct identifiers, such as session management, transaction tracking, or any context where different instances or entities must be uniquely identified. The generated UUID is provided as a hexadecimal string, offering a high level of uniqueness and security for identification purposes.
</p>
</Admonition>
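The behavior described above maps directly onto Python's standard `uuid` module, which the component uses. A minimal sketch (the function name is made up for the example):

```python
import uuid

def generate_unique_id() -> str:
    """Return a freshly generated UUID4 as a 32-character hexadecimal string."""
    return uuid.uuid4().hex

first, second = generate_unique_id(), generate_unique_id()
print(len(first))       # 32
print(first != second)  # True
```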
For additional information and examples, please consult the [Langflow Components Custom Documentation](http://docs.langflow.org/components/custom).

import Admonition from "@theme/Admonition";
# Vector Stores
### Astra DB
The `Astra DB` is a component for initializing an Astra DB Vector Store from Records. It facilitates the creation of Astra DB-based vector indexes for efficient document storage and retrieval.
**Params**
- **Input:** The input documents or records.
- **Embedding:** The embedding model used by Astra DB.
- **Collection Name:** The name of the collection in Astra DB.
- **Token:** The token for Astra DB.
- **API Endpoint:** The API endpoint for Astra DB.
- **Namespace:** The namespace in Astra DB.
- **Metric:** The metric to use in Astra DB.
- **Batch Size:** The batch size for Astra DB.
- **Bulk Insert Batch Concurrency:** The bulk insert batch concurrency for Astra DB.
- **Bulk Insert Overwrite Concurrency:** The bulk insert overwrite concurrency for Astra DB.
- **Bulk Delete Concurrency:** The bulk delete concurrency for Astra DB.
- **Setup Mode:** The setup mode for the vector store.
- **Pre Delete Collection:** Whether to delete the collection before creating the vector store.
- **Metadata Indexing Include:** Metadata fields to include in indexing.
- **Metadata Indexing Exclude:** Metadata fields to exclude from indexing.
- **Collection Indexing Policy:** The indexing policy for the collection.
<Admonition type="note" title="Note">
<p>
Ensure that the required Astra DB token and API endpoint are properly configured.
</p>
</Admonition>
---
### Astra DB Search
The `AstraDBSearch` is a component for searching an existing Astra DB Vector Store for similar documents. It extends the functionality of the `Astra DB` component to provide efficient document retrieval based on similarity metrics.
**Params**
- **Search Type:** The type of search to perform (e.g., Similarity, MMR).
- **Input Value:** The input value to search for.
- **Embedding:** The embedding model used by Astra DB.
- **Collection Name:** The name of the collection in Astra DB.
- **Token:** The token for Astra DB.
- **API Endpoint:** The API endpoint for Astra DB.
- **Namespace:** The namespace in Astra DB.
- **Metric:** The metric to use in Astra DB.
- **Batch Size:** The batch size for Astra DB.
- **Bulk Insert Batch Concurrency:** The bulk insert batch concurrency for Astra DB.
- **Bulk Insert Overwrite Concurrency:** The bulk insert overwrite concurrency for Astra DB.
- **Bulk Delete Concurrency:** The bulk delete concurrency for Astra DB.
- **Setup Mode:** The setup mode for the vector store.
- **Pre Delete Collection:** Whether to delete the collection before creating the vector store.
- **Metadata Indexing Include:** Metadata fields to include in indexing.
- **Metadata Indexing Exclude:** Metadata fields to exclude from indexing.
- **Collection Indexing Policy:** The indexing policy for the collection.
---
### Chroma
The `Chroma` is a component designed for implementing a Vector Store using Chroma. This component allows users to utilize Chroma for efficient vector storage and retrieval within their language processing workflows.
**Params**
- **Collection Name:** The name of the collection.
- **Persist Directory:** The directory to persist the Vector Store to.
- **Server CORS Allow Origins (Optional):** The CORS allow origins for the Chroma server.
- **Server Host (Optional):** The host for the Chroma server.
- **Server Port (Optional):** The port for the Chroma server.
- **Server gRPC Port (Optional):** The gRPC port for the Chroma server.
- **Server SSL Enabled (Optional):** Whether to enable SSL for the Chroma server.
- **Input:** Input data for creating the Vector Store.
- **Embedding:** The embeddings to use for the Vector Store.
For detailed documentation and integration guides, please refer to the [Chroma Component Documentation](https://python.langchain.com/docs/integrations/vectorstores/chroma).
---
### Chroma Search
The `ChromaSearch` is a component designed for searching a Chroma collection for similar documents. This component integrates with Chroma to facilitate efficient document retrieval based on similarity metrics.
**Params**
- **Input:** The input text to search for similar documents.
- **Search Type:** The type of search to perform ("Similarity" or "MMR").
- **Collection Name:** The name of the Chroma collection.
- **Index Directory:** The directory where the Chroma index is stored.
- **Embedding:** The embedding model used to vectorize inputs (make sure to use the same as the index).
- **Server CORS Allow Origins (Optional):** The CORS allow origins for the Chroma server.
- **Server Host (Optional):** The host for the Chroma server.
- **Server Port (Optional):** The port for the Chroma server.
- **Server gRPC Port (Optional):** The gRPC port for the Chroma server.
- **Server SSL Enabled (Optional):** Whether SSL is enabled for the Chroma server.
---
### FAISS
The `FAISS` is a component designed for ingesting documents into a FAISS Vector Store. It facilitates efficient document indexing and retrieval using the FAISS library.
**Params**
- **Embedding:** The embedding model used to vectorize inputs.
- **Input:** The input documents to ingest into the FAISS Vector Store.
- **Folder Path:** The path to save the FAISS index. It will be relative to where Langflow is running.
- **Index Name:** The name of the FAISS index.
For detailed documentation and integration guides, please refer to the [FAISS Component Documentation](https://faiss.ai/index.html).
---
### FAISS Search
The `FAISSSearch` is a component for searching a FAISS Vector Store for similar documents. It enables efficient document retrieval based on similarity metrics using FAISS.
**Params**
- **Embedding:** The embedding model used by the FAISS Vector Store.
- **Folder Path:** The path from which to load the FAISS index. It will be relative to where Langflow is running.
- **Input:** The input value to search for similar documents.
- **Index Name:** The name of the FAISS index.
---
### MongoDB Atlas
The `MongoDBAtlas` is a component used to construct a MongoDB Atlas Vector Search vector store from Records. It facilitates the creation of MongoDB Atlas-based vector stores for efficient document storage and retrieval.
**Params**
- **Embedding:** The embedding model used by the MongoDB Atlas Vector Search.
- **Input:** The input documents or records.
- **Collection Name:** The name of the collection in the MongoDB Atlas database.
- **Database Name:** The name of the database in MongoDB Atlas.
- **Index Name:** The name of the index in MongoDB Atlas.
- **MongoDB Atlas Cluster URI:** The URI of the MongoDB Atlas cluster.
- **Search Kwargs:** Additional search arguments for MongoDB Atlas.
<Admonition type="note" title="Note">
<p>Ensure that pymongo is installed to use MongoDB Atlas Vector Store.</p>
</Admonition>
---
### MongoDB Atlas Search
The `MongoDBAtlasSearch` is a component for searching a MongoDB Atlas Vector Store for similar documents. It extends the functionality of the MongoDBAtlasComponent to provide efficient document retrieval based on similarity metrics.
**Params**
- **Search Type:** The type of search to perform. Options: "Similarity", "MMR".
- **Input:** The input value to search for.
- **Embedding:** The embedding model used by the MongoDB Atlas Vector Store.
- **Collection Name:** The name of the collection in the MongoDB Atlas database.
- **Database Name:** The name of the database in MongoDB Atlas.
- **Index Name:** The name of the index in MongoDB Atlas.
- **MongoDB Atlas Cluster URI:** The URI of the MongoDB Atlas cluster.
- **Search Kwargs:** Additional search arguments for MongoDB Atlas.
---
### PGVector
The `PGVector` is a component for implementing a Vector Store using PostgreSQL. It allows users to store and retrieve vectors efficiently within a PostgreSQL database.
**Params**
- **Input:** The input value to use for the Vector Store.
- **Embedding:** The embedding model used by the Vector Store.
- **PostgreSQL Server Connection String:** The URL for the PostgreSQL server.
- **Table:** The name of the table in the PostgreSQL database.
For detailed documentation and integration guides, please refer to the [PGVector Component Documentation](https://python.langchain.com/docs/integrations/vectorstores/pgvector).
<Admonition type="note" title="Note">
<p>
Ensure that the required PostgreSQL server is accessible and properly
configured.
</p>
</Admonition>
---
### PGVector Search
The `PGVectorSearch` is a component for searching a PGVector Store for similar documents. It extends the functionality of the PGVectorComponent to provide efficient document retrieval based on similarity metrics.
**Params**
- **Input:** The input value to search for.
- **Embedding:** The embedding model used by the Vector Store.
- **PostgreSQL Server Connection String:** The URL for the PostgreSQL server.
- **Table:** The name of the table in the PostgreSQL database.
- **Search Type:** The type of search to perform (e.g., "Similarity", "MMR").
---
### Pinecone
The `Pinecone` is a component used to construct a Pinecone wrapper from Records. It facilitates the creation of Pinecone-based vector indexes for efficient document storage and retrieval.
**Params**
- **Input:** The input documents or records.
- **Embedding:** The embedding model used by Pinecone.
- **Index Name:** The name of the index in Pinecone.
- **Namespace:** The namespace in Pinecone.
- **Pinecone API Key:** The API key for Pinecone.
- **Pinecone Environment:** The environment for Pinecone.
- **Search Kwargs:** Additional search keyword arguments for Pinecone.
- **Pool Threads:** The number of threads to use for Pinecone.
<Admonition type="note" title="Note">
<p>
Ensure that the required Pinecone API key and environment are properly
configured.
</p>
</Admonition>
---
### Pinecone Search
The `PineconeSearch` is a component used to search a Pinecone Vector Store for similar documents. It extends the functionality of the `PineconeComponent` to provide efficient document retrieval based on similarity metrics.
**Params**
- **Search Type:** The type of search to perform (e.g., Similarity, MMR).
- **Input Value:** The input value to search for.
- **Embedding:** The embedding model used by Pinecone.
- **Index Name:** The name of the index in Pinecone.
- **Namespace:** The namespace in Pinecone.
- **Pinecone API Key:** The API key for Pinecone.
- **Pinecone Environment:** The environment for Pinecone.
- **Search Kwargs:** Additional search keyword arguments for Pinecone.
- **Pool Threads:** The number of threads to use for Pinecone.
---
### Qdrant
The `Qdrant` is a component used to construct a Qdrant wrapper from a list of texts. It allows for efficient similarity search and retrieval operations based on the provided embeddings.
**Params**
- **Input:** The input documents or records.
- **Embedding:** The embedding model used by Qdrant.
- **API Key:** The API key for Qdrant (password field).
- **Collection Name:** The name of the collection in Qdrant.
- **Content Payload Key:** The key for the content payload in the documents (advanced).
- **Distance Function:** The distance function to use in Qdrant (advanced).
- **gRPC Port:** The gRPC port for Qdrant (advanced).
- **Host:** The host for Qdrant (advanced).
- **HTTPS:** Enable HTTPS for Qdrant (advanced).
- **Location:** The location for Qdrant (advanced).
- **Metadata Payload Key:** The key for the metadata payload in the documents (advanced).
- **Path:** The path for Qdrant (advanced).
- **Port:** The port for Qdrant (advanced).
- **Prefer gRPC:** Prefer gRPC for Qdrant (advanced).
- **Prefix:** The prefix for Qdrant (advanced).
- **Search Kwargs:** Additional search keyword arguments for Qdrant (advanced).
- **Timeout:** The timeout for Qdrant (advanced).
- **URL:** The URL for Qdrant (advanced).
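For illustration, the **Host**, **Port**, **HTTPS**, and **Prefix** params combine into the REST URL a Qdrant client would target. This is a sketch with placeholder values; `6333` is Qdrant's usual REST port:

```python
def qdrant_url(host="localhost", port=6333, https=False, prefix=None):
    # Assemble the REST URL from the Host, Port, HTTPS, and Prefix params.
    scheme = "https" if https else "http"
    base = f"{scheme}://{host}:{port}"
    return f"{base}/{prefix.strip('/')}" if prefix else base

local = qdrant_url()  # default local instance
remote = qdrant_url(host="xyz.cloud.example.io", https=True, prefix="/qdrant/")
```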
---
### Qdrant Search
The `QdrantSearch` is a component used to search a Qdrant Vector Store for similar documents. It extends the functionality of the `QdrantComponent` to provide efficient document retrieval based on similarity metrics.
**Params**
- **Search Type:** The type of search to perform (e.g., Similarity, MMR).
- **Input Value:** The input value to search for.
- **Embedding:** The embedding model used by Qdrant.
- **API Key:** The API key for Qdrant (password field).
- **Collection Name:** The name of the collection in Qdrant.
- **Content Payload Key:** The key for the content payload in the documents (advanced).
- **Distance Function:** The distance function to use in Qdrant (advanced).
- **gRPC Port:** The gRPC port for Qdrant (advanced).
- **Host:** The host for Qdrant (advanced).
- **HTTPS:** Enable HTTPS for Qdrant (advanced).
- **Location:** The location for Qdrant (advanced).
- **Metadata Payload Key:** The key for the metadata payload in the documents (advanced).
- **Path:** The path for Qdrant (advanced).
- **Port:** The port for Qdrant (advanced).
- **Prefer gRPC:** Prefer gRPC for Qdrant (advanced).
- **Prefix:** The prefix for Qdrant (advanced).
- **Search Kwargs:** Additional search keyword arguments for Qdrant (advanced).
- **Timeout:** The timeout for Qdrant (advanced).
- **URL:** The URL for Qdrant (advanced).
---
### Redis
The `Redis` is a component for implementing a Vector Store using Redis. It provides functionality to store and retrieve vectors efficiently from a Redis database.
**Params**
- **Index Name:** The name of the index in Redis (default: your_index).
- **Input:** The input data to build the Redis Vector Store (input types: Document, Record).
- **Embedding:** The embedding model used by Redis.
- **Schema:** The schema file (.yaml) to define the structure of the documents (optional).
- **Redis Server Connection String:** The connection string for the Redis server.
- **Redis Index:** The name of the Redis index (optional).
For detailed documentation, please refer to the [Redis Documentation](https://python.langchain.com/docs/integrations/vectorstores/redis).
<Admonition type="note" title="Note">
<p>
Ensure that the required Redis server connection URL and index name are
properly configured. If no documents are provided, a schema must be
provided.
</p>
</Admonition>
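As a sketch, the **Redis Server Connection String** follows the `redis://` URL scheme; the helper below assembles one (the password value is a placeholder):

```python
def redis_url(host="localhost", port=6379, password=None, db=0):
    # Connection string of the form redis://[:password@]host:port/db
    auth = f":{password}@" if password else ""
    return f"redis://{auth}{host}:{port}/{db}"

local = redis_url()
secured = redis_url(password="pw")
```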
---
### Redis Search
The `RedisSearch` is a component for searching a Redis Vector Store for similar documents.
**Params**
- **Search Type:** The type of search to perform (e.g., Similarity, MMR).
- **Input Value:** The input value to search for.
- **Index Name:** The name of the index in Redis (default: your_index).
- **Embedding:** The embedding model used by Redis.
- **Schema:** The schema file (.yaml) to define the structure of the documents (optional).
- **Redis Server Connection String:** The connection string for the Redis server.
- **Redis Index:** The name of the Redis index (optional).
---
### Supabase
The `Supabase` is a component for initializing a Supabase Vector Store from texts and embeddings.
**Params**
- **Input:** The input documents or records.
- **Embedding:** The embedding model used by Supabase.
- **Query Name:** The name of the query (optional).
- **Search Kwargs:** Additional search keyword arguments for Supabase (advanced).
- **Supabase Service Key:** The service key for Supabase.
- **Supabase URL:** The URL for the Supabase instance.
- **Table Name:** The name of the table in Supabase (advanced).
<Admonition type="note" title="Note">
<p>
Ensure that the required Supabase service key, Supabase URL, and table name
are properly configured.
</p>
</Admonition>
---
### Supabase Search
The `SupabaseSearch` is a component for searching a Supabase Vector Store for similar documents.
**Params**
- **Search Type:** The type of search to perform (e.g., Similarity, MMR).
- **Input Value:** The input value to search for.
- **Embedding:** The embedding model used by Supabase.
- **Query Name:** The name of the query (optional).
- **Search Kwargs:** Additional search keyword arguments for Supabase (advanced).
- **Supabase Service Key:** The service key for Supabase.
- **Supabase URL:** The URL for the Supabase instance.
- **Table Name:** The name of the table in Supabase (advanced).
---
### Vectara
The `Vectara` is a component for implementing a Vector Store using Vectara.
**Params**
- **Vectara Customer ID:** The customer ID for Vectara.
- **Vectara Corpus ID:** The corpus ID for Vectara.
- **Vectara API Key:** The API key for Vectara.
- **Files Url:** The URL(s) of the file(s) to be used for initializing the Vectara Vector Store (optional).
- **Input:** The input data to be upserted to the corpus (optional).
For detailed documentation and integration guides, please refer to the [Vectara Component Documentation](https://python.langchain.com/docs/integrations/vectorstores/vectara).
<Admonition type="note" title="Note">
<p>
If `inputs` are provided, they will be upserted to the corpus. If
`files_url` is provided, Vectara will process the files from the URLs.
</p>
</Admonition>
---
### Vectara Search
The `VectaraSearch` is a component for searching a Vectara Vector Store for similar documents.
**Params**
- **Search Type:** The type of search to perform (e.g., Similarity, MMR).
- **Input Value:** The input value to search for.
- **Vectara Customer ID:** The customer ID for Vectara.
- **Vectara Corpus ID:** The corpus ID for Vectara.
- **Vectara API Key:** The API key for Vectara.
- **Files Url:** The URL(s) of the file(s) to be used for initializing the Vectara Vector Store (optional).
---
### Weaviate
The `Weaviate` is a component for implementing a Vector Store using Weaviate.
**Params**
- **Weaviate URL:** The URL of the Weaviate instance (default: http://localhost:8080).
- **Search By Text:** Boolean indicating whether to search by text (default: False).
- **API Key:** The API key for authentication (optional).
- **Index Name:** The name of the index in Weaviate (optional).
- **Text Key:** The key used to extract text from documents (default: "text").
- **Input:** The input document or record.
- **Embedding:** The embedding model used by Weaviate.
- **Attributes:** Additional attributes to consider during indexing (optional).
For detailed documentation and integration guides, please refer to the [Weaviate Component Documentation](https://python.langchain.com/docs/integrations/vectorstores/weaviate).
<Admonition type="note" title="Note">
<p>
Before using the Weaviate Vector Store component, ensure that you have a
Weaviate instance running and accessible at the specified URL. Additionally,
make sure to provide the correct API key for authentication if required.
Adjust the index name, text key, and attributes according to your dataset
and indexing requirements. Finally, ensure that the provided embeddings are
compatible with Weaviate's requirements.
</p>
</Admonition>
---
### Weaviate Search
The `WeaviateSearch` component facilitates searching a Weaviate Vector Store for similar documents.
**Params**
- **Search Type:** The type of search to perform (e.g., Similarity, MMR).
- **Input Value:** The input value to search for.
- **Weaviate URL:** The URL of the Weaviate instance (default: http://localhost:8080).
- **Search By Text:** Boolean indicating whether to search by text (default: False).
- **API Key:** The API key for authentication (optional).
- **Index Name:** The name of the index in Weaviate (optional).
- **Text Key:** The key used to extract text from documents (default: "text").
- **Embedding:** The embedding model used by Weaviate.
- **Attributes:** Additional attributes to consider during indexing (optional).

View file

@ -1,20 +0,0 @@
import Admonition from '@theme/Admonition';
# Wrappers
<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation; it may contain some rough edges. Share your feedback or report issues to help us improve! 🛠️📝
</p>
</Admonition>
### TextRequestsWrapper
This component wraps the Python `requests` module, a popular tool for making web requests, and is used to fetch data from a particular website.
**Params**
- **header:** specifies the headers to be included in the HTTP request. Defaults to `{'Authorization': 'Bearer <token>'}`.
Headers are key-value pairs that provide additional information about the request or the client making the request. They can be used to send authentication credentials, specify the content type of the request, set cookies, and more. They allow the client and the server to communicate additional information beyond the basic request.
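To illustrate the same mechanics without involving LangChain, the standard library's `urllib` can build (but not send) a request carrying such a header; the URL below is a placeholder:

```python
from urllib.request import Request

# Build, but do not send, a GET request carrying the same kind of
# Authorization header that the `header` param above configures.
req = Request(
    "https://api.example.com/data",  # placeholder URL
    headers={"Authorization": "Bearer <token>"},
)
```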

View file

@ -16,6 +16,12 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
light: "img/buffer-memory.png",
dark: "img/buffer-memory.png",
}}
style={{
width: "80%",
margin: "20px auto",
display: "flex",
justifyContent: "center",
}}
/>
#### <a target="\_blank" href="json_files/Buffer_Memory.json" download>Download Flow</a>

View file

@ -22,6 +22,13 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
light: "img/basic-chat.png",
dark: "img/basic-chat.png",
}}
style={{
width: "80%",
margin: "20px auto",
display: "flex",
justifyContent: "center",
}}
/>
#### <a target="\_blank" href="json_files/Basic_Chat.json" download>Download Flow</a>

View file

@ -34,6 +34,12 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
light: "img/csv-loader.png",
dark: "img/csv-loader.png",
}}
style={{
width: "80%",
margin: "20px auto",
display: "flex",
justifyContent: "center",
}}
/>
#### <a target="\_blank" href="json_files/CSV_Loader.json" download>Download Flow</a>

View file

@ -3,9 +3,6 @@ description: Custom Components
hide_table_of_contents: true
---
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";
# FlowRunner Component
The CustomComponent class allows us to create components that interact with Langflow itself. In this example, we will make a component that runs other flows available in "My Collection".
@ -18,7 +15,7 @@ The CustomComponent class allows us to create components that interact with Lang
}}
style={{
width: "30%",
margin: "0 auto",
margin: "20px auto",
display: "flex",
justifyContent: "center",
}}
@ -35,7 +32,7 @@ We will cover how to:
<summary>Example Code</summary>
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class FlowRunner(CustomComponent):
@ -75,7 +72,7 @@ class FlowRunner(CustomComponent):
<CH.Scrollycoding rows={20} className={""}>
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
class MyComponent(CustomComponent):
@ -95,7 +92,7 @@ The typical structure of a Custom Component is composed of _`display_name`_ and
---
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
# focus
@ -118,7 +115,7 @@ Let's start by defining our component's _`display_name`_ and _`description`_.
---
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
# focus
from langchain.schema import Document
@ -140,7 +137,7 @@ Second, we will import _`Document`_ from the [_langchain.schema_](https://docs.l
---
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
# focus
from langchain.schema import Document
@ -167,7 +164,7 @@ Now, let's add the [parameters](focus://11[20:55]) and the [return type](focus:/
---
```python focus=13:14
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
@ -189,7 +186,7 @@ We can now start writing the _`build`_ method. Let's list available flows in "My
---
```python focus=15:18
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
@ -222,7 +219,7 @@ And retrieve a flow that matches the selected name (we'll make a dropdown input
---
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
@ -250,7 +247,7 @@ You can load this flow using _`get_flow`_ and set a _`tweaks`_ dictionary to cus
---
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
@ -287,7 +284,7 @@ The content of a document can be extracted using the _`page_content`_ attribute,
---
```python focus=9:16
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
@ -366,3 +363,6 @@ Done! This is what our script and custom component looks like:
/>
</div>
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";

View file

@ -1,28 +0,0 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# 📚 How to Upload Examples?
We welcome all examples that can help our community learn and explore Langflow's capabilities.
Langflow Examples is a repository on [GitHub](https://github.com/logspace-ai/langflow_examples) that contains examples of flows that people can use for inspiration and learning.
{" "}
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/community-examples.png",
dark: "img/community-examples.png",
}}
style={{ width: "100%" }}
/>
To upload examples, please follow these steps:
1. **Create a Flow:** First, create a flow using Langflow. You can use any of the available templates or create a new flow from scratch.
2. **Export the Flow:** Once you have created a flow, export it as a JSON file. Make sure to give your file a descriptive name and include a brief description of what it does.
3. **Submit a Pull Request:** Finally, submit a pull request (PR) to the examples repo. Make sure to include your JSON file in the PR.
If your example uses any third-party libraries or packages, please include them in your PR and make sure that your example follows the [**⛓️ Langflow Code Of Conduct**](https://github.com/logspace-ai/langflow/blob/dev/CODE_OF_CONDUCT.md).

View file

@ -1,46 +0,0 @@
import Admonition from "@theme/Admonition";
# MidJourney Prompt Chain
The `MidJourneyPromptChain` can be used to generate imaginative and detailed MidJourney prompts.
For example, type something like:
```bash
Dragon
```
And get a response such as:
```text
Imagine a mysterious forest, the trees are tall and ancient, their branches reaching up to the sky. Through the darkness, a dragon emerges from the shadows, its scales shimmering in the moonlight. Its wingspan is immense, and its eyes glow with a fierce intensity. It is a majestic and powerful creature, one that commands both respect and fear.
```
<Admonition type="tip">
Notice that the `ConversationSummaryMemory` stores a summary of the
conversation over time. Try using it to create better prompts as the
conversation goes on.
</Admonition>
## ⛓️ Langflow Example
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/midjourney-prompt-chain.png",
dark: "img/midjourney-prompt-chain.png",
}}
/>
#### <a target="\_blank" href="json_files/MidJourney_Prompt_Chain.json" download>Download Flow</a>
<Admonition type="note" title="LangChain Components 🦜🔗">
- [`OpenAI`](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/openai)
- [`ConversationSummaryMemory`](https://python.langchain.com/docs/modules/memory/types/summary)
</Admonition>

View file

@ -1,58 +0,0 @@
import Admonition from "@theme/Admonition";
# Multiple Vector Stores
The example below shows an agent operating with two vector stores built upon different data sources.
The `TextLoader` loads a TXT file, while the `WebBaseLoader` pulls text from webpages into a document format to be accessed downstream. The `Chroma` vector stores are created analogously to what we demonstrated in our [CSV Loader](/examples/csv-loader.mdx) example. Finally, the `VectorStoreRouterAgent` constructs an agent that routes between the vector stores.
<Admonition type="info">
Get the TXT file used
[here](https://github.com/hwchase17/chat-your-data/blob/master/state_of_the_union.txt).
</Admonition>
URL used by the `WebBaseLoader`:
```text
https://pt.wikipedia.org/wiki/Harry_Potter
```
<Admonition type="tip">
When you build the flow, request information about one of the sources. The
agent should be able to use the correct source to generate a response.
</Admonition>
<Admonition type="info">
Learn more about Multiple Vector Stores
[here](https://python.langchain.com/docs/modules/data_connection/vectorstores/).
</Admonition>
## ⛓️ Langflow Example
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/multiple-vectorstores.png",
dark: "img/multiple-vectorstores.png",
}}
/>
#### <a target="\_blank" href="json_files/Multiple_Vector_Stores.json" download>Download Flow</a>
<Admonition type="note" title="LangChain Components 🦜🔗">
- [`WebBaseLoader`](https://python.langchain.com/docs/integrations/document_loaders/web_base)
- [`TextLoader`](https://python.langchain.com/docs/modules/data_connection/document_loaders/)
- [`CharacterTextSplitter`](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/character_text_splitter)
- [`OpenAIEmbedding`](https://python.langchain.com/docs/integrations/text_embedding/openai)
- [`Chroma`](https://python.langchain.com/docs/integrations/vectorstores/chroma)
- [`VectorStoreInfo`](https://python.langchain.com/docs/modules/data_connection/vectorstores/)
- [`OpenAI`](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/openai)
- [`VectorStoreRouterToolkit`](https://js.langchain.com/docs/modules/agents/tools/how_to/agents_with_vectorstores)
- [`VectorStoreRouterAgent`](https://js.langchain.com/docs/modules/agents/tools/how_to/agents_with_vectorstores)
</Admonition>

View file

@ -43,6 +43,12 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
light: "img/python-function.png",
dark: "img/python-function.png",
}}
style={{
width: "80%",
margin: "20px auto",
display: "flex",
justifyContent: "center",
}}
/>
#### <a target="\_blank" href="json_files/Python_Function.json" download>Download Flow</a>

View file

@ -37,6 +37,12 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
light: "img/serp-api-tool.png",
dark: "img/serp-api-tool.png",
}}
style={{
width: "80%",
margin: "20px auto",
display: "flex",
justifyContent: "center",
}}
/>
#### <a target="\_blank" href="json_files/SerpAPI_Tool.json" download>Download Flow</a>

View file

@ -0,0 +1,44 @@
# 🖥️ Command Line Interface (CLI)
## Overview
Langflow's Command Line Interface (CLI) is a powerful tool that allows you to interact with the Langflow server from the command line. The CLI provides a wide range of commands to help you shape Langflow to your needs.
Running the CLI without any arguments will display a list of available commands and options.
```bash
langflow --help
# or
langflow
```
Each option is detailed below:
- `--help`: Displays all available options.
- `--host`: Defines the host to bind the server to. Can be set using the `LANGFLOW_HOST` environment variable. The default is `127.0.0.1`.
- `--workers`: Sets the number of worker processes. Can be set using the `LANGFLOW_WORKERS` environment variable. The default is `1`.
- `--timeout`: Sets the worker timeout in seconds. The default is `60`.
- `--port`: Sets the port to listen on. Can be set using the `LANGFLOW_PORT` environment variable. The default is `7860`.
- `--config`: Defines the path to the configuration file. The default is `config.yaml`.
- `--env-file`: Specifies the path to the .env file containing environment variables. The default is `.env`.
- `--log-level`: Defines the logging level. Can be set using the `LANGFLOW_LOG_LEVEL` environment variable. The default is `critical`.
- `--components-path`: Specifies the path to the directory containing custom components. Can be set using the `LANGFLOW_COMPONENTS_PATH` environment variable. The default is `langflow/components`.
- `--log-file`: Specifies the path to the log file. Can be set using the `LANGFLOW_LOG_FILE` environment variable. The default is `logs/langflow.log`.
- `--cache`: Selects the type of cache to use. Options are `InMemoryCache` and `SQLiteCache`. Can be set using the `LANGFLOW_LANGCHAIN_CACHE` environment variable. The default is `SQLiteCache`.
- `--dev/--no-dev`: Toggles the development mode. The default is `no-dev`.
- `--path`: Specifies the path to the frontend directory containing build files. This option is for development purposes only. Can be set using the `LANGFLOW_FRONTEND_PATH` environment variable.
- `--open-browser/--no-open-browser`: Toggles the option to open the browser after starting the server. Can be set using the `LANGFLOW_OPEN_BROWSER` environment variable. The default is `open-browser`.
- `--remove-api-keys/--no-remove-api-keys`: Toggles the option to remove API keys from the projects saved in the database. Can be set using the `LANGFLOW_REMOVE_API_KEYS` environment variable. The default is `no-remove-api-keys`.
- `--install-completion [bash|zsh|fish|powershell|pwsh]`: Installs completion for the specified shell.
- `--show-completion [bash|zsh|fish|powershell|pwsh]`: Shows completion for the specified shell, allowing you to copy it or customize the installation.
- `--backend-only`: Runs only the backend server, without the frontend. The default is `False`. It can also be set using the `LANGFLOW_BACKEND_ONLY` environment variable.
- `--store`: Enables the store features; use `--no-store` to deactivate them. The default is `True`. It can be configured using the `LANGFLOW_STORE` environment variable.
These parameters are important for users who need to customize the behavior of Langflow, especially in development or specialized deployment scenarios.
### Environment Variables
You can configure many of the CLI options using environment variables. These can be exported in your operating system or added to a `.env` file and loaded using the `--env-file` option.
A sample `.env` file named `.env.example` is included with the project. Copy this file to a new file named `.env` and replace the example values with your actual settings. If you're setting values in both your OS and the `.env` file, the `.env` settings will take precedence.
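For example, a minimal `.env` might look like the following; the values simply mirror the defaults documented above:

```bash
# Sample .env mirroring the defaults documented above
LANGFLOW_HOST=127.0.0.1
LANGFLOW_PORT=7860
LANGFLOW_WORKERS=1
LANGFLOW_LOG_LEVEL=critical
LANGFLOW_LANGCHAIN_CACHE=SQLiteCache
```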

View file

@ -1,38 +0,0 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
# 🎨 Creating Flows
## Compose
Creating flows with Langflow is easy. Drag sidebar components onto the canvas and connect them together to create your pipeline. Langflow provides a range of [LangChain components](https://python.langchain.com/docs/modules/) to choose from, including LLMs, prompt serializers, agents, and chains.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/langflow_canvas.png",
dark: "img/langflow_canvas.png"
}}
/>
## Fork
The easiest way to start with Langflow is by forking a **community example**. Forking an example stores a copy in your project collection, allowing you to edit and save the modified version as a new flow.
<div
style={{ marginBottom: "20px", display: "flex", justifyContent: "center" }}
>
<ReactPlayer playing controls url="/videos/langflow_fork.mp4" />
</div>
## Build
Building a flow means validating that the components have their prerequisites fulfilled and are properly instantiated. When a chat message is sent, the flow will run for the first time, executing the pipeline.
<div
style={{ marginBottom: "20px", display: "flex", justifyContent: "center" }}
>
<ReactPlayer playing controls url="/videos/langflow_build.mp4" />
</div>

View file

@ -1,20 +0,0 @@
# 🤗 HuggingFace Spaces
A fully featured version of Langflow can be accessed via HuggingFace Spaces with no installation required.
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
{" "}
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/hugging-face.png",
dark: "img/hugging-face.png",
}}
style={{ width: "100%" }}
/>
Check out Langflow on [HuggingFace Spaces](https://huggingface.co/spaces/Logspace/Langflow).

View file

@ -1,15 +0,0 @@
# 📦 How to install?
## Installation
You can install Langflow with pip:
```bash
pip install langflow
```
Next, run:
```bash
langflow
```

View file

@ -0,0 +1,195 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";
# 🌟 RAG with Astra DB
This guide will walk you through how to build a RAG (Retrieval Augmented Generation) application using **Astra DB** and **Langflow**.
[Astra DB](https://www.datastax.com/products/datastax-astra?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=astradb) is a fully managed, cloud-native database-as-a-service built on Apache Cassandra. It simplifies operations, reduces costs, and runs on the same technology that powers the largest Cassandra deployments in the world.
In this guide, we will use Astra DB as a vector store to store and retrieve the documents that will be used by the RAG application to generate responses.
<Admonition type="tip">
This guide assumes that you have Langflow up and running. If you are new to
Langflow, you can check out the [Getting Started](/) guide.
</Admonition>
TL;DR:
- [Create a free Astra DB account](https://astra.datastax.com/signup?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=create-a-free-astra-db-account)
- Duplicate our [Langflow 1.0 Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true)
- Create a new database, get a **Token** and the **API Endpoint**
- Click on the **New Project** button and look for Vector Store RAG. This will create a new project with the necessary components
- Import the project into Langflow by dropping it on the Canvas or My Collection page
- Update the **Token** and **API Endpoint** in the **Astra DB** components
- Update the OpenAI API key in the **OpenAI** components
- Run the ingestion flow (the one that uses the **Astra DB** component)
- Click on the ⚡ _Run_ button and start interacting with your RAG application
# First things first
## Create an Astra DB Database
To get started, you will need to [create an Astra DB database](https://astra.datastax.com/signup?utm_source=langflow-pre-release&utm_medium=referral&utm_campaign=langflow-announcement&utm_content=create-an-astradb-database).
Once you have created an account, you will be taken to the Astra DB dashboard. Click on the **Create Database** button.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-create-database.png",
dark: "img/astra-create-database.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
Now you will need to configure your database. Choose the **Serverless (Vector)** deployment type, and pick a Database name, provider and region.
After you have configured your database, click on the **Create Database** button.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-configure-deployment.png",
dark: "img/astra-configure-deployment.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
Once your database is initialized, the _Database Details_ section on the right of the page contains a button to copy the **API Endpoint** and another to generate a **Token**.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-generate-token.png",
dark: "img/astra-generate-token.png",
}}
style={{ width: "50%", margin: "20px auto" }}
/>
Now we are all set to start building our RAG application using Astra DB and Langflow.
## (Optional) Duplicate the Langflow 1.0 HuggingFace Space
If you haven't already, now is the time to launch Langflow. To make things easier, you can duplicate our [Langflow 1.0 Space](https://huggingface.co/spaces/Langflow/Langflow-Preview?duplicate=true) which sets up a Langflow instance just for you.
## Open the Vector Store RAG Project
To get started, click on the **New Project** button and look for the **Vector Store RAG** project. This will open a starter project with the necessary components to run a RAG application using Astra DB.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/drag-and-drop-flow.png",
dark: "img/drag-and-drop-flow.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
This project consists of two flows. The simpler one is the **Ingestion Flow**, which is responsible for ingesting the documents into the Astra DB database.
Your first step should be to understand what each flow does and how they interact with each other.
The ingestion flow consists of:
- **Files** component that uploads a text file to Langflow
- **Recursive Character Text Splitter** component that splits the text into smaller chunks
- **OpenAIEmbeddings** component that generates embeddings for the text chunks
- **Astra DB** component that stores the text chunks in the Astra DB database
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-ingestion-flow.png",
dark: "img/astra-ingestion-flow.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
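Conceptually, the ingestion flow chunks the file, embeds each chunk, and writes the results to Astra DB. The chunking idea can be sketched in a few lines of plain Python (this is an illustration of the splitting concept, not the actual Recursive Character Text Splitter implementation):

```python
def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Naive character splitter: fixed-size windows that overlap by `overlap` chars."""
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

chunks = split_text("0123456789" * 25)  # 250 characters
print(len(chunks))  # → 4
```

The overlap keeps context shared between neighboring chunks, which tends to improve retrieval quality later on.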
Now, let's update the **Astra DB** and **Astra DB Search** components with the **Token** and **API Endpoint** that we generated earlier, and the OpenAI Embeddings components with your OpenAI API key.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-ingestion-fields.png",
dark: "img/astra-ingestion-fields.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
And run it! This will ingest the Text data from your file into the Astra DB database.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-ingestion-run.png",
dark: "img/astra-ingestion-run.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
Now, on to the **RAG Flow**. This flow is responsible for generating responses to your queries. It will define all of the steps from getting the User's input to generating a response and displaying it in the Interaction Panel.
The RAG flow is a bit more complex. It consists of:
- **Chat Input** component that defines where to put the user input coming from the Interaction Panel
- **OpenAI Embeddings** component that generates embeddings from the user input
- **Astra DB Search** component that retrieves the most relevant Records from the Astra DB database
- **Text Output** component that turns the Records into Text by concatenating them and also displays it in the Interaction Panel
- One interesting point you'll see here is that this component is named `Extracted Chunks`, and that is how it will appear in the Interaction Panel
- **Prompt** component that takes in the user input and the retrieved Records as text and builds a prompt for the OpenAI model
- **OpenAI** component that generates a response to the prompt
- **Chat Output** component that displays the response in the Interaction Panel
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-rag-flow.png",
dark: "img/astra-rag-flow.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
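The Prompt step above can be pictured as simple template filling: the retrieved chunks become the context, and the user input becomes the question. A hypothetical sketch (the starter project's actual template may differ):

```python
# Hypothetical template resembling what the Prompt component builds
TEMPLATE = """Answer the question using only the context below.

Context:
{context}

Question: {question}"""

def build_prompt(chunks: list[str], question: str) -> str:
    # The Extracted Chunks text is concatenated and slotted into the template
    return TEMPLATE.format(context="\n".join(chunks), question=question)

prompt = build_prompt(["Langflow is a low-code platform."], "What is Langflow?")
print(prompt)
```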
To run it, all we have to do is click the ⚡ _Run_ button and start interacting with your RAG application.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-rag-flow-run.png",
dark: "img/astra-rag-flow-run.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
This opens the Interaction Panel where you can chat with your data.
Because this flow has a **Chat Input** and a **Text Output** component, the Panel displays a chat input at the bottom and the Extracted Chunks section on the left.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-rag-flow-interaction-panel.png",
dark: "img/astra-rag-flow-interaction-panel.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
Once we interact with it, we get a response, and the Extracted Chunks section is updated with the retrieved Records.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/astra-rag-flow-interaction-panel-interaction.png",
dark: "img/astra-rag-flow-interaction-panel-interaction.png",
}}
style={{ width: "80%", margin: "20px auto" }}
/>
And that's it! You have successfully run a RAG application using Astra DB and Langflow.
# Conclusion
In this guide, we have learned how to run a RAG application using Astra DB and Langflow.
We have seen how to create an Astra DB database, import the Astra DB RAG Flows project into Langflow, and run the ingestion and RAG flows.

View file

@ -1,5 +1,6 @@
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";
# API Keys
@ -7,12 +8,17 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
Langflow offers an API Key functionality that allows users to access their individual components and flows without going through traditional login authentication. The API Key is a user-specific token that can be included in the request's header or query parameter to authenticate API calls. The following documentation outlines how to generate, use, and manage these API Keys in Langflow.
<Admonition type="warning">
This feature requires the `LANGFLOW_AUTO_LOGIN` environment variable to be set
to `False`. The default user and password are set using _`LANGFLOW_SUPERUSER`_
and _`LANGFLOW_SUPERUSER_PASSWORD`_ environment variables. Default values are
_`langflow`_ and _`langflow`_ respectively.
</Admonition>
## Generating an API Key
### Through Langflow UI
{/* add image img/api-key.png */}
<ZoomableImage
alt="Docusaurus themed image"
sources={{
@ -36,7 +42,7 @@ Include the `x-api-key` in the HTTP header when making API requests:
```bash
curl -X POST \
http://localhost:3000/api/v1/process/<your_flow_id> \
http://localhost:3000/api/v1/run/<your_flow_id> \
-H 'Content-Type: application/json'\
-H 'x-api-key: <your api key>'\
-d '{"inputs": {"text":""}, "tweaks": {}}'

View file

@ -1,73 +0,0 @@
import Admonition from "@theme/Admonition";
# Asynchronous Processing
## Introduction
Starting from version 0.5, Langflow introduces a new feature to its API: the _`sync`_ flag. This flag allows users to opt for asynchronous processing of their flows, freeing up resources and enabling better control over long-running tasks.
For now, this feature supports running tasks in a Celery worker queue or in AnyIO task groups.
<Admonition type="warning" caption="Experimental Feature">
This is an experimental feature. The default behavior of the API is still
synchronous processing. The API may change in the future.
</Admonition>
## The _`sync`_ Flag
The _`sync`_ flag can be included in the payload of your POST request to the _`/api/v1/process/<your_flow_id>`_ endpoint.
When set to _`false`_, the API will initiate an asynchronous task instead of processing the flow synchronously.
### API Request with _`sync`_ flag
```bash
curl -X POST \
http://localhost:3000/api/v1/process/<your_flow_id> \
-H 'Content-Type: application/json' \
-H 'x-api-key: <your_api_key>' \
-d '{"inputs": {"text": ""}, "tweaks": {}, "sync": false}'
```
Response:
```json
{
"result": {
"output": "..."
},
"task": {
"id": "...",
"href": "api/v1/task/<task_id>"
},
"session_id": "...",
"backend": "..." // celery or anyio
}
```
## Checking Task Status
You can check the status of an asynchronous task by making a GET request to the `/task/{task_id}` endpoint.
```bash
curl -X GET \
http://localhost:3000/api/v1/task/<task_id> \
-H 'x-api-key: <your_api_key>'
```
### Response
The endpoint will return the current status of the task and, if completed, the result of the task. Possible statuses include:
- _`PENDING`_: The task is waiting for execution.
- _`SUCCESS`_: The task has completed successfully.
- _`FAILURE`_: The task has failed.
Example response for a completed task:
```json
{
"status": "SUCCESS",
"result": {
"output": "..."
}
}
```
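In Python, the submit-then-poll pattern follows directly from the request shapes above. A minimal sketch (the helper names are our own, and actually sending the requests is left to any HTTP client):

```python
# Sketch of the payload and URL shapes from the curl examples above
def build_async_payload(text: str = "") -> dict:
    # "sync": false asks the API to return a task instead of blocking
    return {"inputs": {"text": text}, "tweaks": {}, "sync": False}

def task_status_url(base: str, task_id: str) -> str:
    return f"{base}/api/v1/task/{task_id}"

print(build_async_payload())
print(task_status_url("http://localhost:3000", "<task_id>"))
```

After submitting, poll the task URL until the status moves from `PENDING` to `SUCCESS` or `FAILURE`.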

View file

@ -26,13 +26,14 @@ Components are the building blocks of the flows. They are made of inputs, output
</div>
{" "}
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: useBaseUrl("img/single-compenent.png"),
dark: useBaseUrl("img/single-compenent.png"),
}}
style={{ width: "100%", maxWidth: "800px", margin: "0 auto" }}
style={{ width: "100%", maxWidth: "800px", margin: "20px auto" }}
/>
<div style={{ marginBottom: "20px" }}>

View file

@ -30,7 +30,7 @@ Here is an example:
<CH.Code lineNumbers={false}>
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class DocumentProcessor(CustomComponent):
@ -92,7 +92,7 @@ The Python script for every Custom Component should follow a set of rules. Let's
The script must contain a **single class** that inherits from _`CustomComponent`_.
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class MyComponent(CustomComponent):
@ -113,7 +113,7 @@ class MyComponent(CustomComponent):
This class requires a _`build`_ method used to run the component and define its fields.
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class MyComponent(CustomComponent):
@ -134,7 +134,7 @@ class MyComponent(CustomComponent):
The [Return Type Annotation](https://docs.python.org/3/library/typing.html) of the _`build`_ method defines the component type (e.g., Chain, BaseLLM, or basic Python types). Check out all supported types in the [component reference](../components/custom).
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class MyComponent(CustomComponent):
@ -153,7 +153,7 @@ class MyComponent(CustomComponent):
---
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class MyComponent(CustomComponent):
@ -179,7 +179,7 @@ Check out the [component reference](../components/custom) for more details on th
---
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class MyComponent(CustomComponent):
@ -204,7 +204,7 @@ Let's create a custom component that processes a document (_`langchain.schema.Do
To start, let's choose a name for our component by adding a _`display_name`_ attribute. This name will appear on the canvas. The name of the class is not relevant, but let's call it _`DocumentProcessor`_.
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
# focus
@ -227,7 +227,7 @@ class DocumentProcessor(CustomComponent):
We can also write a description for it using a _`description`_ attribute.
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class DocumentProcessor(CustomComponent):
@ -244,7 +244,7 @@ class DocumentProcessor(CustomComponent):
---
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class DocumentProcessor(CustomComponent):
@ -283,11 +283,11 @@ The return type is _`Document`_.
The _`build_config`_ method is here defined to customize the component fields.
- _`options`_ determines that the field will be a dropdown menu. The list values and field type must be _`str`_.
- _`value`_ is the default option of the dropdown menu.
- _`value`_ is the default value of the field.
- _`display_name`_ is the name of the field to be displayed.
```python
from langflow import CustomComponent
from langflow.custom import CustomComponent
from langchain.schema import Document
class DocumentProcessor(CustomComponent):

View file

@ -1,9 +1,3 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";
# Features
<div style={{ marginBottom: "20px" }}>
@ -14,13 +8,14 @@ import Admonition from "@theme/Admonition";
</div>
{" "}
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: useBaseUrl("img/features.png"),
dark: useBaseUrl("img/features.png"),
}}
style={{ width: "100%", maxWidth: "800px", margin: "0 auto" }}
style={{ width: "100%", maxWidth: "800px", margin: "20px auto" }}
/>
<div style={{ marginBottom: "20px" }}>
@ -46,14 +41,12 @@ The Code button shows snippets to use your flow as a Python object or an API.
**Python Code**
Through the Langflow package, you can load a flow from a JSON file and use it as a LangChain object.
Through the Langflow package, you can run your flow from a JSON file. The example below shows how to run a flow from a JSON file.
```py
from langflow import load_flow_from_json
```python
from langflow.load import run_flow_from_json
flow = load_flow_from_json("path/to/flow.json")
# Now you can use it like any chain
flow("Hey, have you heard of Langflow?")
results = run_flow_from_json("path/to/flow.json", input_value="Hello, World!")
```
**API**
@ -67,3 +60,9 @@ The example below shows a Python script making a POST request to a local API end
>
<ReactPlayer playing controls url="/videos/langflow_api.mp4" />
</div>
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";

View file

@ -105,7 +105,7 @@ Users can change their profile settings by clicking on the profile icon in the t
light: useBaseUrl("img/my-account.png"),
dark: useBaseUrl("img/my-account.png"),
}}
style={{ width: "50%", maxWidth: "600px", margin: "0 auto" }}
style={{ width: "50%", maxWidth: "600px", margin: "20px auto" }}
/>
By clicking on **Profile Settings**, the user is taken to the profile settings page, where they can change their password and their profile picture.
@ -116,10 +116,11 @@ By clicking on **Profile Settings**, the user is taken to the profile settings p
light: useBaseUrl("img/profile-settings.png"),
dark: useBaseUrl("img/profile-settings.png"),
}}
style={{ maxWidth: "600px", margin: "0 auto" }}
style={{ maxWidth: "600px", margin: "20px auto" }}
/>
By clicking on **Admin Page**, the superuser is taken to the admin page, where they can manage users and groups.
By clicking on **Admin Page**, the superuser is taken to the admin page, where they
can manage users and groups.
<ZoomableImage
alt="Docusaurus themed image"

View file

@ -1,44 +0,0 @@
import Admonition from "@theme/Admonition";
# Async API
## Introduction
<Admonition type="info" caption="In development">
This implementation is still in development. Contributions are welcome!
</Admonition>
The Async API is an implementation of the Langflow API that uses [Celery](https://docs.celeryproject.org/en/stable/)
to run the tasks asynchronously, using a message broker to send and receive messages, a result backend to store the results and a cache to store the task states and session data.
### Configuration
The folder _`./deploy`_ in the [Github repository](https://github.com/logspace-ai/langflow) contains a _`.env.example`_ file that can be used to configure a Langflow deployment.
The file contains the variables required to configure a Celery worker queue, Redis cache and result backend and a RabbitMQ message broker.
To set it up locally, you can copy the file to _`.env`_ and run the following command:
```bash
docker compose up -d
```
This will set up the following containers:
- Langflow API
- Celery worker
- RabbitMQ message broker
- Redis cache
- PostgreSQL database
- PGAdmin
- Flower
- Traefik
- Grafana
- Prometheus
### Testing
To run the tests for the Async API, you can run the following command:
```bash
docker compose -f docker-compose.with_tests.yml up --exit-code-from tests tests result_backend broker celeryworker db --build
```

View file

@ -1,7 +0,0 @@
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
Now, we need to explain what are the permissions the superuser gets. Once logged in, they can activate new users,
edit them,

View file

@ -1,11 +1,13 @@
# 👋 Welcome to Langflow
Langflow is an easy way to create flows. The drag-and-drop feature allows quick and effortless experimentation, while the built-in chat interface facilitates real-time interaction. It provides options to edit prompt parameters, create chains and agents, track thought processes, and export flows.
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
# 👋 Welcome to Langflow
Langflow is an easy way to build from simple to complex AI applications. It is a low-code platform that allows you to integrate AI into everything you do.
{" "}
{" "}
<ZoomableImage
@ -16,3 +18,78 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
}}
style={{ width: "100%" }}
/>
## 🚀 First steps
## Installation
Make sure you have **Python 3.10** installed on your system.
You can install **Langflow** with [pipx](https://pipx.pypa.io/stable/installation/) or with pip.
Pipx can fetch the missing Python version for you, but you can also install it manually.
```bash
pip install langflow -U
# or
pipx install langflow --python python3.10 --fetch-missing-python
```
Or you can install a pre-release version using:
```bash
pip install langflow --pre --force-reinstall
# or
pipx install langflow --python python3.10 --fetch-missing-python --pip-args="--pre --force-reinstall"
```
We recommend using _`--force-reinstall`_ to ensure you have the latest version of Langflow and its dependencies.
### ⛓️ Running Langflow
Langflow can be run in a variety of ways, including using the command-line interface (CLI) or HuggingFace Spaces.
```bash
langflow run # or langflow --help
```
#### 🤗 HuggingFace Spaces
Hugging Face provides a great alternative for running Langflow in their Spaces environment. This means you can run Langflow without any local installation required.
The first step is to go to the [Langflow Space](https://huggingface.co/spaces/Logspace/Langflow?duplicate=true).
Remember to use a Chromium-based browser for the best experience. You'll be presented with the following screen:
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/duplicate-space.png",
dark: "img/duplicate-space.png",
}}
style={{ width: "100%", margin: "20px auto" }}
/>
From here, just name your Space, define the visibility (Public or Private), and click on `Duplicate Space` to start the installation process. When that is done, you'll be redirected to the Space's main page to start using Langflow right away!
Once you get Langflow running, click on New Project in the top right corner of the screen. Langflow provides a range of example flows to help you get started.
To quickly try one of them, open a starter example, set up your API keys, and click ⚡ Run in the bottom right corner of the canvas. This will open Langflow's Interaction Panel with the chat console, text inputs, and outputs.
### 🖥️ Command Line Interface (CLI)
Langflow provides a command-line interface (CLI) for easy management and configuration.
#### Usage
You can run Langflow using the following command:
```bash
langflow run [OPTIONS]
```
Find more information about the available options by running:
```bash
langflow --help
```

View file

@ -0,0 +1,44 @@
import Admonition from '@theme/Admonition';
# Compatibility with Previous Versions
## TL;DR
- You'll need to add a few components to your flow to make it compatible with the new version of Langflow.
- Add a Runnable Executor, connect it to the last component (a Chain or an Agent) in your flow, and connect a Chat Input and a Chat Output to the Runnable Executor. This should work *most of the time*.
- You might also need to update the Chain or Agent component to the latest version.
- Most Components will work as they are, but you'll need to add an Input and an Output to your flow.
- You can use the Runnable Executor to run a LangChain runnable (which is the output of many components before 1.0).
- We need your feedback on this, so please let us know how it goes and what you think.
## Introduction
Langflow now works best with flows that have an Input and an Output, and that is mostly what you'll need to add to your existing flows.
Hopefully, you'll find that even though you can still work with your current flows, updating all your components to the new version of Langflow will be worth it.
We've tried to make it as easy as possible for you to adapt your existing flows to work seamlessly in the new version of Langflow.
## How to Adapt Your Existing Flows
The steps to take are few but not always simple. Here's how you can adapt your existing flows to work seamlessly in the new version of Langflow:
<Admonition type="caution">
<p>**Caution:**</p>
<p>While this should work most of the time, it might not work for all flows. You might need to update the Chain or Agent component to the latest version. Please let us know if you encounter any issues.</p>
</Admonition>
1. **Check if your flow ends with a Chain or Agent component**.
- If it does not, it *should* work as it is because it probably was not a chat flow.
2. **Add a Runnable Executor**.
- Add a Runnable Executor to the end of your flow.
- Connect the last component (a Chain or an Agent) in your flow to the Runnable Executor.
3. **Add a Chat Input and a Chat Output**.
- Add a Chat Input and a Chat Output to your flow.
- Connect the Chat Input to the Runnable Executor.
- Connect the Chat Output to the Runnable Executor.
{/* Add picture of the flow */}

View file

@ -0,0 +1,65 @@
import ZoomableImage from "/src/theme/ZoomableImage.js";
import Admonition from "@theme/Admonition";
# Global Variables
Global Variables are a really useful feature of Langflow.
They allow you to define reusable variables that can be accessed from any Text field in your project.
The first thing you need to do is find a **Text field** in a Component, so let's talk about what a Text field is.
## Text Fields
Text fields are fields in a Component where you can write text but that do not open a Text Area.
The easiest way to find them is to look for fields that have a 🌐 button.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/ollama-gv.png",
dark: "img/ollama-gv.png",
}}
style={{ width: "50%" }}
/>
## Creating a Global Variable
To create a Global Variable, click the 🌐 button in a Text field. This opens a dropdown showing your currently available variables, with **+ Add New Variable** at the end.
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/add-new-variable.png",
dark: "img/add-new-variable.png",
}}
style={{ width: "60%" }}
/>
Click on **+ Add New Variable** and a window will open where you can define your new Global Variable.
In it, you can define the **Name** of the variable, the optional **Type** of the variable, and the **Value** of the variable.
The **Name** is the name that you will use to refer to the variable in your Text fields.
The **Type** is optional for now but will be used in the future to allow for more advanced features.
The **Value** is the value that the variable will have.
{/* say that all variables are encrypted */}
<Admonition type="warning">
All Global Variables are encrypted and cannot be accessed by anyone but you.
</Admonition>
<ZoomableImage
alt="Docusaurus themed image"
sources={{
light: "img/create-variable-window.png",
dark: "img/create-variable-window.png",
}}
style={{ width: "60%" }}
/>
After you have defined your variable, click on **Save Variable** and your variable will be created.
After that, once you click on the 🌐 button in a Text field, you will see your new variable in the dropdown.

View file

@ -0,0 +1,36 @@
# Inputs and Outputs
TL;DR: Inputs and Outputs are categories of components used to define where data enters and leaves your flow. They also
dynamically change the Interaction Panel and can be renamed to make it easier to build and maintain your flows.
## Introduction
Langflow 1.0 introduces new categories of components called Inputs and Outputs. They are used to make it easier to understand and interact with your flows.
Let's start with what they have in common:
- Components in these categories connect to components that have Text or Record inputs or outputs. Some can connect to both, but you have to pick which type of data you want to input or output.
- They can be renamed to help you identify them more easily in the Interaction Panel and while using the API.
- They dynamically change the Interaction Panel to make it easier to understand and interact with your flows.
Native Langflow Components were created to be powerful tools built around Langflow's features. They are designed to be easy to use and understand, and to help you build your flows faster.
Let's dive into Inputs and Outputs.
## Inputs
Inputs are components that are used to define where data comes into your flow. They can be used to receive data from the user, from a database, or from any other source that can be converted to Text or Record.
The difference between Chat Input and other Input components is the format of the output, the number of configurable fields, and the way they are displayed in the Interaction Panel.
Chat Input components can output Text or Record. When you want to pass the sender name or sender to the next component, use the Record output; when you want to pass only the message, use the Text output. This is useful when saving the message to a database or a memory system like Zep.
You can find out more about it and the other Inputs [here](../components/inputs).
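The difference between the two outputs can be pictured with plain Python values (this is a hypothetical shape for a Record, for illustration only):

```python
# Hypothetical shape of a Chat Input's Record output
record_output = {"sender": "User", "sender_name": "Alice", "message": "Hello!"}

# The Text output keeps only the message itself
text_output = record_output["message"]
print(text_output)  # → Hello!
```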
## Outputs
Outputs are components that are used to define where data comes out of your flow. They can be used to send data to the user, to the Interaction Panel, or to define how the data will be displayed in the Interaction Panel.
The Chat Output works similarly to the Chat Input but does not have a field that allows for written input. It is used as an Output definition and can be used to send data to the user.
You can find out more about it and the other Outputs [here](../components/outputs).

View file

@ -0,0 +1,45 @@
# Text and Record
In Langflow 1.0 we added two main input and output types: Text and Record. Text is a simple string input and output type, while Record is a structure very similar to a dictionary in Python. It is a key-value pair data structure.
We've created a few components to help you work with these types. Let's see how a few of them work.
### Records To Text
This is a Component that takes in Records and outputs a Text. It does this by filling a template string with each Record's values and concatenating the results, one per line.
If we have the following Records:
```json
{
"sender_name": "Alice",
"message": "Hello!"
}
{
"sender_name": "John",
"message": "Hi!"
}
```
And the template string is _`{sender_name}: {message}`_, the output will be:
```
Alice: Hello!
John: Hi!
```
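The concatenation the component performs can be sketched in a few lines of Python (an illustration of the behavior, not the component's actual code):

```python
records = [
    {"sender_name": "Alice", "message": "Hello!"},
    {"sender_name": "John", "message": "Hi!"},
]
template = "{sender_name}: {message}"

# Fill the template once per Record and join the results, one per line
text = "\n".join(template.format(**record) for record in records)
print(text)
# → Alice: Hello!
#   John: Hi!
```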
### Create Record
This Component allows you to create a Record from a number of inputs. You can add as many key-value pairs as you want (as long as there are fewer than 15 😅). Once you've picked that number, you write the name of each Key and can pass Text values from other components as the Values.
### Documents To Records
This Component takes in a [LangChain](https://langchain.com) Document and outputs a Record. It does this by extracting the _`page_content`_ and the _`metadata`_ from the Document and adding them to the Record as _`text`_ and _`data`_ respectively.
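The extraction can be pictured like this (using a stand-in `Document` class rather than LangChain's, so the snippet stays self-contained):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    # Stand-in for langchain.schema.Document, for illustration only
    page_content: str
    metadata: dict = field(default_factory=dict)

def document_to_record(doc: Document) -> dict:
    # page_content becomes "text" and metadata becomes "data"
    return {"text": doc.page_content, "data": doc.metadata}

record = document_to_record(Document("Hello, Langflow!", {"source": "notes.txt"}))
print(record)  # → {'text': 'Hello, Langflow!', 'data': {'source': 'notes.txt'}}
```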
## Why is this useful?
The idea was to create a unified way to work with complex data in Langflow, and to make it easier to work with data that is not just a simple string. This way you can create more complex workflows and use the data in more ways.
## What's next?
We are planning to integrate an array of modalities to Langflow, such as images, audio, and video. This will allow you to create even more complex workflows and use cases. Stay tuned for more updates! 🚀

View file

@ -0,0 +1,96 @@
# A new chapter for Langflow
# First things first
Thank you all for being part of the Langflow community. The journey so far has been amazing and we are happy to have you with us.
We have some exciting news to share with you. Langflow is changing, and we want to tell you all about it.
## Where have we been?
We spent the last few months working on a new version of Langflow. We wanted to make it more powerful, more flexible, and easier to use.
We're moving from version 0.6 straight to 1.0 (preview). This is a big change, and we want to explain why we're doing it and what it means for you.
## Why?
In the past year, we learned a lot from the community and our users. We saw the potential of Langflow and the need for a more powerful and flexible tool for building conversational AI applications (and beyond).
We realized that Langflow was hiding things from you that would really help you build better and more complex conversational AI applications. So we decided to make a big change.
## The only way to go is forward
From all the people we talked to, we learned that the most important thing for (most of) them is to have a tool that is easy to use, but also powerful and controllable. They also told us that Langflow's transparency could be improved.
In those points, we saw an opportunity to make Langflow much more powerful and flexible, while also making it easier to use and understand.
One key change you'll notice is that projects now require you to define Inputs and Outputs.
This is a big change, but it's also a big improvement.
It allows you to define the structure of your conversation and the data that flows through it.
This makes it easier to understand and control your conversation.
This change comes with a new way of visualizing your projects. Before 1.0 you would connect Components to ultimately build one final Component that was processed behind the scenes.
Now, each step of the process is defined by you, is visible on the canvas, and can be monitored and controlled by you. This makes it so that Composition is now just another way of building in Langflow. **Now data flows through your project more transparently**.
The caveat is existing projects may need some new Components to get them back to their full functionality.
[We've made this as easy as possible](../migration/compatibility), and there will be improvements to it as we get feedback in our Discord server and on GitHub.
## Custom Interactions
The moment we decided to make this change, we saw the potential to make Langflow even more yours.
By having a clear definition of Inputs and Outputs, we could build the experience around that which led us to create the **Interaction Panel**.
When building a project, testing and debugging are crucial. The Interaction Panel is a tool that changes dynamically based on the Inputs and Outputs you defined in your project.
For example, let's say you are building a simple RAG application. Generally, you have an Input, some references that come from a Vector Store Search, a Prompt and the answer.
Now, you could plug the output of your Prompt into a [Text Output](../components/outputs#Text-Output), rename that to "Prompt Result" and see the output of your Prompt in the Interaction Panel.
{/* Add image here of the described above */}
This is just one example of how the Interaction Panel can help you build and debug your projects.
We have many planned features for the Interaction Panel, and we're excited to see how you use it and what you think of it.
## An easier start
The experience for the first-time user is also something we wanted to improve.
Meet the new and improved **New Project** screen. It's now easier to start a new project, and you can choose from a list of starter projects to get you started.
{/* Add new project image */}
We wanted to create starter projects that would help you learn about new features and also give you a head start on your projects.
For now, we have:
- **[Basic Prompting (Hello, world!)](/getting-started/basic-prompting)**: A simple flow that shows you how to use the Prompt Component and how to talk like a pirate.
- **[Vector Store RAG](/getting-started/rag-with-astradb)**: A flow that shows you how to ingest data into a Vector Store and then use it to run a RAG application.
- **[Memory Chatbot](/getting-started/memory-chatbot)**: This one shows you how to create a simple chatbot that can remember things about the user.
- **[Document QA](/getting-started/document-qa)**: This flow shows you how to build a simple flow that helps you get answers about a document.
- **[Blog Writer](/getting-started/blog-writer)**: Shows you how you can expand on the Prompt variables and be creative about what inputs you add to it.
As always, your feedback is invaluable, so please let us know what you think of the new starter projects and what you would like to see in the future.
## Less is more
We have added many new Components to Langflow, updated some of the existing ones, and will deprecate a few.
The idea is that Langflow has evolved, and we want to make sure that the Components you use are the best they can be.
Some of them don't work well with others, and some are simply no longer needed.
We are working on a list of Components that will be deprecated.
In the preview stages of 1.0, we will ship a smaller set of Components so that we can focus on making each one as solid as possible.
Regardless, community feedback is very important in this matter, so please let us know what you think of the new Components and which ones you miss.
We are aiming for a more stable and reliable set of Components that gets you to useful results quickly.
This also means that your contributions in the [Langflow Store](https://langflow.store) and throughout the community are more important than ever.
## What's next?
Langflow went through a big change, and we are excited to see how you use it and what you think of it.
We plan to add more types of Input and Output like Image and Audio, and we also plan to add more Components to help you build more complex projects.
We also have some experimental features like a State Management System (so cool!) and a new way of building Grouped Components that we are excited to show you.
## Reach out
One last time, we want to thank you for being part of the Langflow community. Your feedback is invaluable, and we want to hear from you.


@ -0,0 +1 @@
# A New Customization and Control


@ -0,0 +1 @@
# Debugging Reimagined


@ -0,0 +1,125 @@
# Migrating to Langflow 1.0: A Guide
Langflow 1.0 is a significant update that brings many exciting changes and improvements to the platform.
This guide will walk you through the key improvements and help you migrate your existing projects to the new version.
If you have any questions or need assistance during the migration process, please don't hesitate to reach out in our [Discord](https://discord.gg/wZSWQaukgJ) or [GitHub](https://github.com/logspace-ai/langflow/issues) community.
We have a special channel in our Discord server dedicated to Langflow 1.0 migration, where you can ask questions, share your experiences, and get help from the community.
## TL;DR
- Inputs and Outputs of Components have changed
- We've surfaced steps that were previously run in the background
- Continued support for LangChain and new support for multiple frameworks
- Redesigned sidebar and customizable interaction panel
- New Native Categories and Components
- Improved user experience with Text and Record modes
- CustomComponent for all components
- Compatibility with previous versions using Runnable Executor
- Multiple flows in the canvas
- Improved component status
- Ability to connect Output components to any other Component
- Rename and edit component descriptions
- Pass tweaks and inputs in the API using Display Name
- Global Variables for Text Fields
- Experimental components like SubFlow and Flow as Tool
- Experimental State Management system with Notify and Listen components
## Inputs and Outputs of Components
Langflow 1.0 introduces the concept of Inputs and Outputs to flows, allowing a clear definition of the data flow between components. Discover how to use Inputs and Outputs to pass data between components and create more dynamic flows.
[Learn more about Inputs and Outputs of Components](../migration/inputs-and-outputs)
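To make the idea concrete, here is a minimal sketch of how a flow with defined Inputs and Outputs might be run over the API. The field names (`input_value`, `input_type`, `output_type`, `tweaks`) and the helper function are assumptions for illustration, not a documented contract:

```python
# Hypothetical sketch: assembling the JSON body for a simplified flow run.
# Field names below are illustrative assumptions, not Langflow's documented API.

def build_run_request(input_value, input_type="chat", output_type="chat", tweaks=None):
    """Assemble the request body for running a flow with explicit I/O types."""
    body = {
        "input_value": input_value,   # the text handed to the flow's Input component
        "input_type": input_type,     # e.g. "chat" or "text"
        "output_type": output_type,   # which Outputs to collect: "chat", "text", or "any"
    }
    if tweaks:
        body["tweaks"] = tweaks       # optional per-component overrides
    return body

body = build_run_request("What is Langflow?")
print(body["input_type"], body["output_type"])
```

With a real server, this body would be POSTed to the flow's run endpoint and the response would contain the values produced by the flow's Output components.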
## To Compose or Not to Compose: the choice is yours
Even though composition is still possible in Langflow 1.0, the new standard is moving data through the flow. This allows for more flexibility and control over the data flow in your projects.
We will create guides on how to interweave LangChain components with our Core components soon.
## Continued Support for LangChain and Multiple Frameworks
Langflow 1.0 continues to support LangChain while also introducing support for multiple frameworks. This is another important benefit that the data-flow paradigm brings to the table. Find out how to leverage the power of different frameworks in your projects.
[Learn more about Supported Frameworks](../migration/supported-frameworks)
## Sidebar Redesign and Customizable Interaction Panel
We've expanded on the chat experience by creating a customizable interaction panel that allows you to design a panel that fits your needs and interact with it. The sidebar has also been redesigned to provide a more intuitive and user-friendly experience. Explore the new sidebar and interaction panel features to enhance your workflow.
[Learn more about some of the UI updates](../migration/sidebar-and-interaction-panel)
## New Native Categories and Components
Langflow 1.0 introduces many new native categories, including Inputs, Outputs, Helpers, Experimental, Models, and more. Discover the new components available, such as Chat Input, Prompt, Files, API Request, and others.
[Learn more about New Categories and Components](../migration/new-categories-and-components)
## New Way of Using Langflow: Text and Record (and more to come)
With the introduction of the Text and Record types, connections between Components are more intuitive and easier to understand. This is the first step in a series of improvements to the way you interact with Langflow. Learn how to use Text and Record and how they help you build better flows.
[Learn more about Text and Record](../migration/text-and-record)
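One way to picture the distinction: a Text connection carries a plain string, while a Record carries structured data alongside a way to collapse it to text. The dataclass below is a hypothetical illustration of that idea, not Langflow's actual `Record` class:

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the Text vs. Record distinction.
# A Record (as sketched here) holds structured data plus the key
# that yields its plain-text form for Text-only inputs.

@dataclass
class Record:
    data: dict = field(default_factory=dict)
    text_key: str = "text"

    def get_text(self):
        # Collapse the structured data to plain Text.
        return str(self.data.get(self.text_key, ""))

doc = Record(data={"text": "Langflow 1.0 is out!", "source": "blog"})
print(doc.get_text())  # Langflow 1.0 is out!
```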
## CustomComponent for All Components
Almost all components in Langflow 1.0 are now CustomComponents, allowing you to check and modify the code of each component. Discover how to leverage this feature to customize your components to your specific needs.
[Learn more about CustomComponent](../migration/custom-component)
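The pattern looks roughly like the sketch below. The stand-in base class only mimics the shape of Langflow's real `CustomComponent` so the example is self-contained; the method names and config structure are simplified assumptions:

```python
# Self-contained sketch of the CustomComponent pattern.
# The base class below is a stand-in for Langflow's real CustomComponent,
# included only so this example runs on its own.

class CustomComponent:  # stand-in, NOT langflow's class
    display_name = "Custom Component"
    description = "Base class stand-in."

    def build_config(self):
        return {}

class Reverser(CustomComponent):
    display_name = "Reverser"
    description = "Reverses the incoming text."

    def build_config(self):
        # Expose one editable field in the component's form (illustrative shape).
        return {"text": {"display_name": "Text"}}

    def build(self, text: str) -> str:
        # build() produces the component's output from its inputs.
        return text[::-1]

print(Reverser().build("Langflow"))  # wolfgnaL
```

Because every component exposes its code this way, tweaking behavior is a matter of editing `build()` directly in the UI.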
## Compatibility with Previous Versions
To use flows built in previous versions of Langflow, you can utilize the experimental component Runnable Executor along with an Input and Output. **We'd love your feedback on this**. Learn how to adapt your existing flows to work seamlessly in the new version of Langflow.
[Learn more about Compatibility with Previous Versions](../migration/compatibility)
## Multiple Flows in the Canvas
Langflow 1.0 allows you to have more than one flow in the canvas and run them separately. Discover how to create and manage multiple flows within a single project.
[Learn more about Multiple Flows](../migration/multiple-flows)
## Improved Component Status
Each component now displays its status more clearly, allowing you to quickly identify any issues or errors. Explore how to use the new component status feature to troubleshoot and optimize your flows.
[Learn more about Component Status](../migration/component-status-and-data-passing)
## Connecting Output Components
You can now connect Output components to any other component (that has a Text output), providing a better understanding of the data flow. Explore the possibilities of connecting Output components and how it enhances your flow's functionality.
[Learn more about Connecting Output Components](../migration/connecting-output-components)
## Renaming and Editing Component Descriptions
Langflow 1.0 allows you to rename and edit the description of each component, making it easier to understand and interact with the flow. Learn how to customize your component names and descriptions for improved clarity.
[Learn more about Renaming and Editing Components](../migration/renaming-and-editing-components)
## Passing Tweaks and Inputs in the API
Things got a whole lot easier. You can now pass tweaks and inputs in the API by referencing the Display Name of the component. Discover how to leverage this feature to dynamically control your flow's behavior.
[Learn more about Passing Tweaks and Inputs](../migration/passing-tweaks-and-inputs)
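As a sketch of the shape this takes (the component names, field names, and the commented-out endpoint URL are made up for illustration), tweaks are a mapping from a component's Display Name to the fields you want to override:

```python
import json

# Hypothetical sketch of tweaks keyed by a component's Display Name.
# Component names and fields below are illustrative, not from a real flow.

tweaks = {
    "OpenAI Model": {"temperature": 0.1},                 # keyed by Display Name
    "Prompt": {"template": "Answer briefly: {question}"},
}

payload = {"input_value": "What changed in 1.0?", "tweaks": tweaks}
print(json.dumps(payload, indent=2))

# Against a real server you would POST this payload, e.g.:
# requests.post(f"{base_url}/api/v1/run/{flow_id}", json=payload)
```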
## Global Variables for Text Fields
Global Variables can be used in any Text Field across your projects. Learn how to define and utilize Global Variables to streamline your workflow.
[Learn more about Global Variables](../migration/global-variables)
## Experimental Components
Explore the experimental components available in Langflow 1.0, such as SubFlow, which allows you to load a flow as a component dynamically, and Flow as Tool, which enables you to use a flow as a tool for an Agent.
[Learn more about Experimental Components](../migration/experimental-components)
## Experimental State Management System
We are experimenting with a State Management system for flows that allows components to trigger other components and pass messages between them using the Notify and Listen components. Discover how to leverage this system to create more dynamic and interactive flows.
[Learn more about State Management](../migration/state-management)
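The Notify/Listen pattern described above can be pictured as a tiny publish/subscribe state store. The toy below only illustrates the pattern; it is not Langflow's implementation:

```python
# Toy sketch of the Notify/Listen idea: Notify writes a value into shared
# state, and every Listen registered for that key is triggered on the change.
# This is an illustration of the pattern, not Langflow's actual code.

class State:
    def __init__(self):
        self._values = {}
        self._listeners = {}

    def listen(self, key, callback):
        # Listen: register to be triggered when `key` changes.
        self._listeners.setdefault(key, []).append(callback)

    def notify(self, key, value):
        # Notify: update the state and wake every listener for that key.
        self._values[key] = value
        for callback in self._listeners.get(key, []):
            callback(value)

state = State()
received = []
state.listen("task_done", received.append)
state.notify("task_done", "ingestion finished")
print(received)  # ['ingestion finished']
```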
We hope this guide helps you navigate the changes and improvements in Langflow 1.0. If you have any questions or need further assistance, please don't hesitate to reach out to us in our [Discord](https://discord.gg/wZSWQaukgJ).


@ -0,0 +1 @@
# Simplification Through Standardization


@ -14,6 +14,7 @@ module.exports = {
organizationName: "logspace-ai",
projectName: "langflow",
trailingSlash: false,
staticDirectories: ["static"],
customFields: {
mendableAnonKey: process.env.MENDABLE_ANON_KEY,
},
@ -42,6 +43,10 @@ module.exports = {
path: "docs",
// sidebarPath: 'sidebars.js',
},
gtag: {
trackingID: 'G-XHC7G628ZP',
anonymizeIP: true,
},
theme: {
customCss: [
require.resolve("@code-hike/mdx/styles.css"),

docs/package-lock.json (generated, 940 lines changed)

File diff suppressed because it is too large


@ -16,11 +16,12 @@
"dependencies": {
"@babel/preset-react": "^7.22.3",
"@code-hike/mdx": "^0.9.0",
"@docusaurus/core": "3.0.1",
"@docusaurus/plugin-ideal-image": "^3.0.1",
"@docusaurus/preset-classic": "3.0.1",
"@docusaurus/theme-classic": "^3.0.1",
"@docusaurus/theme-search-algolia": "^3.0.1",
"@docusaurus/core": "^3.2.0",
"@docusaurus/plugin-google-gtag": "^3.2.0",
"@docusaurus/plugin-ideal-image": "^3.2.0",
"@docusaurus/preset-classic": "^3.2.0",
"@docusaurus/theme-classic": "^3.2.0",
"@docusaurus/theme-search-algolia": "^3.2.0",
"@mdx-js/react": "^2.3.0",
"@mendable/search": "^0.0.154",
"@pbe/react-yandex-maps": "^1.2.4",
@ -47,7 +48,7 @@
"tailwindcss": "^3.3.2"
},
"devDependencies": {
"@docusaurus/module-type-aliases": "2.4.1",
"@docusaurus/module-type-aliases": "^3.2.0",
"css-loader": "^6.8.1",
"docusaurus-node-polyfills": "^1.0.0",
"node-sass": "^9.0.0",
@ -69,4 +70,4 @@
"engines": {
"node": ">=16.14"
}
}
}


@ -2,13 +2,49 @@ module.exports = {
docs: [
{
type: "category",
label: "Getting Started",
label: " Getting Started",
collapsed: false,
items: [
"index",
"getting-started/installation",
"getting-started/hugging-face-spaces",
"getting-started/creating-flows",
"getting-started/cli",
"getting-started/basic-prompting",
"getting-started/document-qa",
"getting-started/blog-writer",
"getting-started/memory-chatbot",
"getting-started/rag-with-astradb",
],
},
{
type: "category",
label: " What's New",
collapsed: false,
items: [
"whats-new/a-new-chapter-langflow",
"whats-new/migrating-to-one-point-zero",
],
},
{
type: "category",
label: " Migration Guides",
collapsed: false,
items: [
// "migration/flow-of-data",
"migration/inputs-and-outputs",
// "migration/supported-frameworks",
// "migration/sidebar-and-interaction-panel",
// "migration/new-categories-and-components",
"migration/text-and-record",
// "migration/custom-component",
"migration/compatibility",
// "migration/multiple-flows",
// "migration/component-status-and-data-passing",
// "migration/connecting-output-components",
// "migration/renaming-and-editing-components",
// "migration/passing-tweaks-and-inputs",
"migration/global-variables",
// "migration/experimental-components",
// "migration/state-management",
],
},
{
@ -18,7 +54,6 @@ module.exports = {
items: [
"guidelines/login",
"guidelines/api",
"guidelines/async-api",
"guidelines/components",
"guidelines/features",
"guidelines/collection",
@ -30,47 +65,42 @@ module.exports = {
},
{
type: "category",
label: "Component Reference",
label: "Step-by-Step Guides",
collapsed: false,
items: ["guides/langfuse_integration"],
},
{
type: "category",
label: "Core Components",
collapsed: false,
items: [
"components/agents",
"components/chains",
"components/custom",
"components/embeddings",
"components/llms",
"components/loaders",
"components/memories",
"components/prompts",
"components/retrievers",
"components/text-splitters",
"components/toolkits",
"components/tools",
"components/utilities",
"components/inputs",
"components/outputs",
"components/data",
"components/models",
"components/helpers",
"components/vector-stores",
"components/wrappers",
"components/embeddings",
],
},
{
type: "category",
label: "Step-by-Step Guides",
label: "Extended Components",
collapsed: false,
items: [
"guides/async-tasks",
"guides/loading_document",
"guides/chatprompttemplate_guide",
"guides/langfuse_integration",
"components/agents",
"components/chains",
"components/loaders",
"components/experimental",
"components/utilities",
"components/memories",
"components/model_specs",
"components/retrievers",
"components/text-splitters",
"components/toolkits",
"components/tools",
],
},
// {
// type: 'category',
// label: 'Components',
// collapsed: false,
// items: [
// 'components/agents', 'components/chains', 'components/loaders', 'components/embeddings', 'components/llms',
// 'components/memories', 'components/prompts','components/text-splitters', 'components/toolkits', 'components/tools',
// 'components/utilities', 'components/vector-stores', 'components/wrappers',
// ],
// },
{
type: "category",
label: "Examples",
@ -79,13 +109,10 @@ module.exports = {
"examples/flow-runner",
"examples/conversation-chain",
"examples/buffer-memory",
"examples/midjourney-prompt-chain",
"examples/csv-loader",
"examples/searchapi-tool",
"examples/serp-api-tool",
"examples/multiple-vectorstores",
"examples/python-function",
"examples/how-upload-examples",
],
},
{


@ -0,0 +1,29 @@
const DownloadableJsonFile = ({ source, title }) => {
  const handleDownload = (event) => {
    // Intercept the default navigation and download the file as a Blob instead.
    event.preventDefault();
    fetch(source)
      .then((response) => response.blob())
      .then((blob) => {
        const url = window.URL.createObjectURL(
          new Blob([blob], { type: "application/json" })
        );
        // Create a temporary anchor to trigger the browser's download flow.
        const link = document.createElement("a");
        link.href = url;
        link.setAttribute("download", title);
        document.body.appendChild(link);
        link.click();
        link.parentNode.removeChild(link);
        // Release the object URL to avoid leaking memory.
        window.URL.revokeObjectURL(url);
      })
      .catch((error) => {
        console.error("Error downloading file:", error);
      });
  };

  return (
    <a href={source} download={title} onClick={handleDownload}>
      {title}
    </a>
  );
};

export default DownloadableJsonFile;

docs/static/data/AstraDB-RAG-Flows.json (vendored, new file, 3403 lines)

File diff suppressed because one or more lines are too long

New binary files (contents not shown):

- docs/static/img/add-new-variable.png (48 KiB)
- (filename not shown) (202 KiB)
- (filename not shown) (37 KiB)
- docs/static/img/astra-generate-token.png (74 KiB)
- (filename not shown) (220 KiB)
- (filename not shown) (85 KiB)
- docs/static/img/astra-ingestion-flow.png (80 KiB)
- docs/static/img/astra-ingestion-run.png (63 KiB)
- docs/static/img/astra-rag-flow-dark.png (161 KiB)
- (filename not shown) (354 KiB)
- (filename not shown) (165 KiB)
- docs/static/img/astra-rag-flow-run.png (190 KiB)
- docs/static/img/astra-rag-flow.png (149 KiB)
- docs/static/img/chat-input-expanded.png (90 KiB)
- docs/static/img/chat-input.png (53 KiB)

Some files were not shown because too many files have changed in this diff.