⛓️ Langflow

~ An effortless way to experiment and prototype LangChain pipelines ~



📦 Installation

Locally

You can install Langflow from PyPI with pip:

# This installs the package without dependencies for local models
pip install langflow

To use local models (e.g., llama-cpp-python), run:

pip install langflow[local]
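
Note that some shells, such as zsh, interpret the square brackets, so you may need to quote the package specifier:

pip install "langflow[local]"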

This installs the additional dependencies needed to run local models.

You can still use models from projects like LocalAI.

Next, run:

python -m langflow

or

langflow run # or langflow --help

HuggingFace Spaces

You can also check it out on HuggingFace Spaces and run it in your browser! You can even clone it and have your own copy of Langflow to play with.

🖥️ Command Line Interface (CLI)

Langflow provides a command-line interface (CLI) for easy management and configuration.

Usage

You can run Langflow using the following command:

langflow run [OPTIONS]

Each option is detailed below:

  • --help: Displays all available options.
  • --host: Defines the host to bind the server to. Can be set using the LANGFLOW_HOST environment variable. The default is 127.0.0.1.
  • --workers: Sets the number of worker processes. Can be set using the LANGFLOW_WORKERS environment variable. The default is 1.
  • --timeout: Sets the worker timeout in seconds. The default is 60.
  • --port: Sets the port to listen on. Can be set using the LANGFLOW_PORT environment variable. The default is 7860.
  • --config: Defines the path to the configuration file. The default is config.yaml.
  • --env-file: Specifies the path to the .env file containing environment variables. The default is .env.
  • --log-level: Defines the logging level. Can be set using the LANGFLOW_LOG_LEVEL environment variable. The default is critical.
  • --components-path: Specifies the path to the directory containing custom components. Can be set using the LANGFLOW_COMPONENTS_PATH environment variable. The default is langflow/components.
  • --log-file: Specifies the path to the log file. Can be set using the LANGFLOW_LOG_FILE environment variable. The default is logs/langflow.log.
  • --cache: Selects the type of cache to use. Options are InMemoryCache and SQLiteCache. Can be set using the LANGFLOW_LANGCHAIN_CACHE environment variable. The default is SQLiteCache.
  • --dev/--no-dev: Toggles the development mode. The default is no-dev.
  • --path: Specifies the path to the frontend directory containing build files. This option is for development purposes only. Can be set using the LANGFLOW_FRONTEND_PATH environment variable.
  • --open-browser/--no-open-browser: Toggles the option to open the browser after starting the server. Can be set using the LANGFLOW_OPEN_BROWSER environment variable. The default is open-browser.
  • --remove-api-keys/--no-remove-api-keys: Toggles the option to remove API keys from the projects saved in the database. Can be set using the LANGFLOW_REMOVE_API_KEYS environment variable. The default is no-remove-api-keys.
  • --install-completion [bash|zsh|fish|powershell|pwsh]: Installs completion for the specified shell.
  • --show-completion [bash|zsh|fish|powershell|pwsh]: Shows completion for the specified shell, allowing you to copy it or customize the installation.
  • --backend-only: Runs only the backend server, without the frontend. The default is False. Can be set using the LANGFLOW_BACKEND_ONLY environment variable.
  • --store/--no-store: Toggles the store features. The default is store. Can be set using the LANGFLOW_STORE environment variable.

These parameters are important for users who need to customize the behavior of Langflow, especially in development or specialized deployment scenarios.
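
For example, here is a sketch of one way to combine several of these options; the host, port, worker count, and log level are illustrative values, not recommendations:

# Listen on all interfaces on port 7860, with 3 workers and info-level logging
langflow run --host 0.0.0.0 --port 7860 --workers 3 --log-level info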

Environment Variables

You can configure many of the CLI options using environment variables. These can be exported in your operating system or added to a .env file and loaded using the --env-file option.

A sample .env file named .env.example is included with the project. Copy this file to a new file named .env and replace the example values with your actual settings. If you're setting values in both your OS and the .env file, the .env settings will take precedence.
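
As a rough sketch, a .env file built from the variables listed above might look like this (the values are illustrative):

LANGFLOW_HOST=127.0.0.1
LANGFLOW_PORT=7860
LANGFLOW_WORKERS=1
LANGFLOW_LOG_LEVEL=info
LANGFLOW_LANGCHAIN_CACHE=SQLiteCache

You can then load it explicitly when starting the server:

langflow run --env-file .env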

Deployment

Deploy Langflow on Google Cloud Platform

Follow our step-by-step guide to deploy Langflow on Google Cloud Platform (GCP) using Google Cloud Shell. The guide is available in the Langflow in Google Cloud Platform document.

Alternatively, click the "Open in Cloud Shell" button below to launch Google Cloud Shell, clone the Langflow repository, and start an interactive tutorial that will guide you through the process of setting up the necessary resources and deploying Langflow on your GCP project.

Open in Cloud Shell

Deploy on Railway

Deploy on Railway

Deploy on Render

Deploy to Render

🎨 Creating Flows

Creating flows with Langflow is easy. Simply drag sidebar components onto the canvas and connect them to create your pipeline. Langflow provides a range of LangChain components to choose from, including LLMs, prompt serializers, agents, and chains.

Explore by editing prompt parameters, linking chains and agents, tracking an agent's thought process, and exporting your flow.

Once you're done, you can export your flow as a JSON file to use with LangChain. To do so, click the "Export" button in the top right corner of the canvas. Then, in Python, you can load the flow with:

from langflow import load_flow_from_json

flow = load_flow_from_json("path/to/flow.json")
# Now you can use it like any chain
flow("Hey, have you heard of Langflow?")

👋 Contributing

We welcome contributions from developers of all levels to our open-source project on GitHub. If you'd like to contribute, please check our contributing guidelines and help make Langflow more accessible.


Join our Discord server to ask questions, make suggestions and showcase your projects! 🦾

Star History Chart

📄 License

Langflow is released under the MIT License. See the LICENSE file for details.