langflow/docs/static/data/AstraDB-RAG-Flows.json
Gabriel Luiz Freitas Almeida 05cd6e4fd7
1.0 Alpha (#1599)
* Update model kwargs and temperature values

* Update keyboard shortcuts for advanced editing

* make Message field have no handles

* Update OpenAI API Key handling in OpenAIEmbeddingsComponent

* Remove unnecessary field_type key from CustomComponent class

* Update required field behavior in CustomComponent class

* Refactor AzureOpenAIModel.py: Removed unnecessary "required" attribute from input parameters

* Update BaiduQianfanChatModel and OpenAIModel configurations

* Fix range_spec step type validation

* Update RangeSpec step_type default value to "float"

* Fix Save debounce

* Update parameterUtils to use debounce instead of throttle
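
  The throttle-to-debounce swap above changes when a burst of rapid edits triggers a save. As a minimal, hypothetical sketch (not the actual TypeScript `parameterUtils` code), the two policies can be contrasted deterministically over timestamped events:

  ```python
  def debounce_fire_times(events, wait):
      """Debounce: fire only after `wait` units of silence following an event."""
      fires = []
      for i, t in enumerate(events):
          nxt = events[i + 1] if i + 1 < len(events) else None
          if nxt is None or nxt - t >= wait:
              fires.append(t + wait)
      return fires

  def throttle_fire_times(events, wait):
      """Throttle: fire immediately, then ignore events for `wait` units."""
      fires = []
      last = None
      for t in events:
          if last is None or t - last >= wait:
              fires.append(t)
              last = t
      return fires

  events = [0, 1, 2, 10]  # a burst of edits, then a lone edit
  print(debounce_fire_times(events, 3))  # -> [5, 13]: saves only after the burst settles
  print(throttle_fire_times(events, 3))  # -> [0, 10]: saves during the burst too
  ```

  Debouncing suits autosave because only the final state of a burst is persisted, which is the behavior the "Fix Save debounce" commits above aim for.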

* Update input type options in schemas and graph base classes

* Refactor run_flow_with_caching endpoint to include simplified and experimental versions

* Add PythonFunctionComponent and test case for it

* Add nest_asyncio to fix event loop issue
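
  The event-loop issue here is the standard-library restriction that `asyncio.run()` cannot be invoked from inside an already-running loop. This stdlib-only sketch reproduces the error; the commit's `nest_asyncio.apply()` (an assumption about how the dependency is used, per the library's documented purpose) patches the loop so such nested calls succeed:

  ```python
  import asyncio

  async def inner():
      return 42

  async def outer():
      # Inside a running loop, asyncio.run() raises RuntimeError.
      # nest_asyncio.apply() patches the loop to allow this re-entrancy.
      coro = inner()
      try:
          return asyncio.run(coro)
      except RuntimeError as exc:
          coro.close()  # avoid the "never awaited" warning
          return str(exc)

  msg = asyncio.run(outer())
  print(msg)  # the RuntimeError message mentions the running event loop
  ```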

* Refactor test_initial_setup.py to use RunOutputs instead of ResultData

* Remove unused code in test_endpoints.py

* Add asyncio loop to uvicorn command

* Refactor load_session method to handle coroutine result

* Fixed saving

* Fixed debouncing

* Add InputType and OutputType literals to schema.py

* Update input type in Graph class

* Add new schema for simplified API request

* Add delete_messages function and update test_successful_run assertions

* Add STREAM_INFO_TEXT constant to model components

* Add session_id to simplified_run_flow_with_caching endpoint

* Add field_typing import to OpenAIModel.py

* update starter projects

* Add constants for Langflow base module

* Update setup.py to include latest component versions

* Update Starter Examples

* sets starter_project fixture to Basic Prompting

* Refactor test_endpoints.py: Update test names and add new tests for different output types

* Update HuggingFace Spaces link and add image for dark mode

* Remove filepath reference

* Update Vertex params in base.py

* Add tests for different input types

* Add type annotations and improve test coverage

* Add duplicate space link to README

* Update HuggingFace Spaces badge in README

* Add Python 3.10 installation requirement to README

* Refactor flow running endpoints

* Refactor SimplifiedAPIRequest and add documentation for Tweaks

* Refactor input_request parameter in simplified_run_flow function

* Add support for retrieving specific component output

* Add custom Uvicorn worker for Langflow application

* Add asyncio loop to LangflowApplication initialization

* Update Makefile with new variables and start command

* Fix indentation in Makefile

* Refactor run_graph function to add support for running a JSON flow

* Refactor getChatInputField function and update API code

* Update HuggingFace Spaces documentation with duplication process

* Add asyncio event loop to uvicorn command

* Add installation of backend in start target

* update some starter projects

* Fix formatting in hugging-face-spaces.mdx

* Update installation instructions for Langflow

* set examples order

* Update start command in Makefile

* Add installation and usage instructions for Langflow

* Update Langflow installation and usage instructions

* Fix langflow command in README.md

* Fix broken link to HuggingFace Spaces guide

* Add new SVG assets for blog post, chat bot, and cloud docs

* Refactor example rendering in NewFlowModal

* Add new SVG file for short bio section

* Remove unused import and add new component

* Update title in usage.mdx

* Update HuggingFace Spaces heading in usage.mdx

* Update usage instructions in getting-started/usage.mdx

* Update cache option in usage documentation

* Remove 'advanced' flag from 'n_messages' parameter in MemoryComponent.py

* Refactor code to improve performance and readability

* Update project names and flow examples

* fix document qa example

* Remove commented out code in sidebars.js

* Delete unused documentation files

* Fix bug in login functionality

* Remove global variables from components

* Fix bug in login functionality

* fix modal returning to input

* Update max-width of chat message sender name

* Update styling for chat message component

* Refactor OpenAIEmbeddingsComponent signature

* Update usage.mdx file

* Update path in Makefile

* Add new migration and what's new documentation files

* Add new chapters and migration guides

* Update version to 0.0.13 in pyproject.toml

* new locks

* Update dependencies in pyproject.toml

* general fixes

* Update dependencies in pyproject.toml and poetry.lock files

* add padding to modal

*  (undrawCards/index.tsx): update the SVG used for BasicPrompt component to undraw_short_bio_re_fmx0.svg to match the desired design
♻️ (undrawCards/index.tsx): adjust the width and height of the BasicPrompt SVG to 65% to improve the visual appearance

* Commented out components/data in sidebars.js

* Refactor component names in outputs.mdx

* Update embedded chat script URL

* Add data component and fix formatting in outputs component

* Update dependencies in poetry.lock and pyproject.toml

* Update dependencies in poetry.lock and pyproject.toml

* Refactor code to improve performance and readability

* Update dependencies in poetry.lock and pyproject.toml

* Fixed IO Modal updates

* Remove dead code at API Modal

* Fixed overflow at CodeTabsComponent tweaks page

*  (NewFlowModal/index.tsx): update the name of the example from "Blog Writter" to "Blog Writer" for better consistency and clarity

* Update dependencies versions

* Update langflow-base to version 0.0.15 and fix setup_env script

* Update dependencies in pyproject.toml

* Lock dependencies in parallel

* Add logging statement to setup_app function

* Fix Ace not having type="module" and breaking build

* Update authentication settings for access token cookie

* Update package versions in package-lock.json

* Add scripts directory to Dockerfile

* Add setup_env command to build_and_run target

* Remove unnecessary make command in setup_env

* Remove unnecessary installation step in build_and_run

* Add debug configuration for CLI

* 🔧 chore(Makefile): refactor build_langflow target to use a separate script for updating dependencies and building
 feat(update_dependencies.py): add script to update pyproject.toml dependency version based on langflow-base version in src/backend/base/pyproject.toml

* Add number_of_results parameter to AstraDBSearchComponent

* Update HuggingFace Spaces links

* Remove duplicate imports in hugging-face-spaces.mdx

* Add number_of_results parameter to vector search components

* Fixed Supabase not committed

* Revert "Fixed Supabase not committed"

This reverts commit afb10a6262.

* Update duplicate-space.png image

* Delete unused files and components

* Add/update script to update dependencies

* Add .bak files to .gitignore

* Update version numbers and remove unnecessary dependencies

* Update langflow-base dependency path

* Add Text import to VertexAiModel.py

* Update langflow-base version to 0.0.16 and update dependencies

* Delete start projects and commit session in delete_start_projects function

* Refactor backend startup script to handle autologin option

* Update poetry installation script to include pipx update check

* Update pipx installation script for different operating systems

* Update Makefile to improve setup process

* Add error handling on streaming and fix streaming bug on error

* Added description to Blog Writer

* Sort base classes alphabetically

* Update duplicate-space.png image

* update position on langflow prompt chaining

* Add Langflow CLI and first steps documentation

* Add exception handling for missing 'content' field in search_with_vector_store method

* Remove unused import and update type hinting

* fix bug on edges after creating group component

* Refactor APIRequest class and update model imports

* Remove unused imports and fix formatting issues

* Refactor reactflowUtils and styleUtils

* Add CLI documentation to getting-started/cli.mdx

* Add CLI usage instructions

* Add ZoomableImage component to first-steps.mdx

* Update CLI and first steps documentation

* Remove duplicate import and add new imports for ThemedImage and useBaseUrl

* Update Langflow CLI documentation link

* Remove first-steps.mdx and update index.mdx and sidebars.js

* Update Docusaurus dependencies

* Add AstraDB RAG Flow guide

* Remove unused imports

* Remove unnecessary import statement

* Refactor guide for better readability

* Add data component documentation

* Update component headings and add prompt template

* Fix logging level and version display

* Add datetime import and buffer for alembic log

* Update flow names in NewFlowModal and documentation

* Add starter projects to sidebars.js

* Fix error handling in DirectoryReader class

* Handle exception when loading components in setup.py

* Update version numbers in pyproject.toml files

* Update build_langflow_base and build_langflow_backup in Makefile

* Added docs

* Update dependencies and build process

* Add Admonition component for API Key documentation

* Update API endpoint in async-api.mdx

* Remove async-api guidelines

* Fix UnicodeDecodeError in DirectoryReader

* Update dependency version and fix encoding issues

* Add conditional build and publish for base and main projects

* Update version to 1.0.0a2 in pyproject.toml

* Remove duplicate imports and unnecessary code in custom-component.mdx

* Fix poetry lock command in Makefile

* Update package versions in pyproject.toml

* Remove unused components and update imports

* 📦 chore(pre-release-base.yml): add pre-release workflow for base project
📦 chore(pre-release-langflow.yml): add pre-release workflow for langflow project

* Add ChatLiteLLMModelComponent to models package

* Add frontend installation and build steps

* Add Dockerfile for building and pushing base image

* Add emoji package and nest-asyncio dependency

* 📝 (components.mdx): update margin style of ZoomableImage to improve spacing
📝 (features.mdx): update margin style of ZoomableImage to improve spacing
📝 (login.mdx): update margin style of ZoomableImage to improve spacing

* Fix module import error in validate.py

* Fix error message in directory_reader.py

* Update version import and handle ImportError

* Add cryptography and langchain-openai dependencies

* Update poetry installation and remove poetry-monorepo-dependency-plugin

* Update workflow and Dockerfile for Langflow base pre-release

* Update display names and descriptions for AstraDB components

* Update installation instructions for Langflow

* Update Astra DB links and remove unnecessary imports

* Rename AstraDB

* Add new components and images

* Update HuggingFace Spaces URLs

* Update Langflow documentation and add new starter projects

* Update flow name to "Basic Prompting (Hello, world!)" in relevant files

* Update Basic Prompting flow name to "Ahoy World!"

* Remove HuggingFace Spaces documentation

* Add new files and update sidebars.js

* Remove async-tasks.mdx and update sidebars.js

* Update starter project URLs

* Enable migration of global variables

* Update OpenAIEmbeddings deployment and model

* 📝 (inputs.mdx): add margin to image style to improve spacing and center alignment

📝 (rag-with-astradb.mdx): add margin to image styles to improve spacing and readability

* Update welcome message in index.mdx

* Add global variable feature to Langflow documentation

* Reorganized sidebar categories

* Update migration documentation

* Refactor SplitTextComponent class to accept inputs of type Record and Text

* Adjust embeddings docs

*  (cardComponent/index.tsx): add a minimum height to the card component to ensure consistent layout and prevent content from overlapping when the card is empty or has minimal content

* Update flow name from "Ahoy World!" to "Hello, world!"

* Update documentation for embeddings, models, and vector stores

* Update CreateRecordComponent and parameterUtils.ts

* Add documentation for Text and Record types

* Remove commented lines in sidebars.js

* Add run_flow_from_json function to load.py

* Update Langflow package to run flow from JSON file
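
  `run_flow_from_json` consumes an exported flow file like the JSON below. As a stdlib-only sketch (not the Langflow API itself), the file's structure can be inspected directly; the node ids and types are taken from this document's flow:

  ```python
  import json

  # A trimmed stand-in for AstraDB-RAG-Flows.json, using two nodes
  # that appear in the full file below.
  flow = json.loads("""
  {"id": "51e2b78a-199b-4054-9f32-e288eef6924c", "data": {"nodes": [
    {"id": "ChatInput-yxMKE", "data": {"type": "ChatInput"}},
    {"id": "TextOutput-BDknO", "data": {"type": "TextOutput"}}
  ]}}
  """)
  types = [n["data"]["type"] for n in flow["data"]["nodes"]]
  print(types)  # -> ['ChatInput', 'TextOutput']
  ```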

* Fix type annotations and import errors

* Refactor tests and fix test data

---------

Co-authored-by: Rodrigo Nader <rodrigosilvanader@gmail.com>
Co-authored-by: anovazzi1 <otavio2204@gmail.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
Co-authored-by: carlosrcoelho <carlosrodrigo.coelho@gmail.com>
Co-authored-by: cristhianzl <cristhian.lousa@gmail.com>
Co-authored-by: Matheus <jacquesmats@gmail.com>
2024-04-04 02:46:44 -03:00

3403 lines · No EOL · 200 KiB · JSON

{
"id": "51e2b78a-199b-4054-9f32-e288eef6924c",
"data": {
"nodes": [
{
"id": "ChatInput-yxMKE",
"type": "genericNode",
"position": {
"x": 1195.5276981160775,
"y": 209.421875
},
"data": {
"type": "ChatInput",
"node": {
"template": {
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from typing import Optional, Union\n\nfrom langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.schema import Record\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Interaction Panel.\"\n icon = \"ChatInput\"\n\n def build_config(self):\n build_config = super().build_config()\n build_config[\"input_value\"] = {\n \"input_types\": [],\n \"display_name\": \"Message\",\n \"multiline\": True,\n }\n\n return build_config\n\n def build(\n self,\n sender: Optional[str] = \"User\",\n sender_name: Optional[str] = \"User\",\n input_value: Optional[str] = None,\n session_id: Optional[str] = None,\n return_record: Optional[bool] = False,\n ) -> Union[Text, Record]:\n return super().build(\n sender=sender,\n sender_name=sender_name,\n input_value=input_value,\n session_id=session_id,\n return_record=return_record,\n )\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"input_value": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "input_value",
"display_name": "Message",
"advanced": false,
"input_types": [],
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"value": "what is a line"
},
"return_record": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "return_record",
"display_name": "Return Record",
"advanced": true,
"dynamic": false,
"info": "Return the message as a record containing the sender, sender_name, and session_id.",
"load_from_db": false,
"title_case": false
},
"sender": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"value": "User",
"fileTypes": [],
"file_path": "",
"password": false,
"options": [
"Machine",
"User"
],
"name": "sender",
"display_name": "Sender Type",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"sender_name": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": "User",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "sender_name",
"display_name": "Sender Name",
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"session_id": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "session_id",
"display_name": "Session ID",
"advanced": true,
"dynamic": false,
"info": "If provided, the message will be stored in the memory.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"_type": "CustomComponent"
},
"description": "Get chat inputs from the Interaction Panel.",
"icon": "ChatInput",
"base_classes": [
"Text",
"str",
"object",
"Record"
],
"display_name": "Chat Input",
"documentation": "",
"custom_fields": {
"sender": null,
"sender_name": null,
"input_value": null,
"session_id": null,
"return_record": null
},
"output_types": [
"Text",
"Record"
],
"field_formatters": {},
"frozen": false,
"field_order": [],
"beta": false
},
"id": "ChatInput-yxMKE"
},
"selected": false,
"width": 384,
"height": 383
},
{
"id": "TextOutput-BDknO",
"type": "genericNode",
"position": {
"x": 2322.600672827879,
"y": 604.9467307442569
},
"data": {
"type": "TextOutput",
"node": {
"template": {
"input_value": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": "",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "input_value",
"display_name": "Value",
"advanced": false,
"input_types": [
"Record",
"Text"
],
"dynamic": false,
"info": "Text or Record to be passed as output.",
"load_from_db": false,
"title_case": false
},
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from typing import Optional\n\nfrom langflow.base.io.text import TextComponent\nfrom langflow.field_typing import Text\n\n\nclass TextOutput(TextComponent):\n display_name = \"Text Output\"\n description = \"Display a text output in the Interaction Panel.\"\n icon = \"type\"\n\n def build_config(self):\n return {\n \"input_value\": {\n \"display_name\": \"Value\",\n \"input_types\": [\"Record\", \"Text\"],\n \"info\": \"Text or Record to be passed as output.\",\n },\n \"record_template\": {\n \"display_name\": \"Record Template\",\n \"multiline\": True,\n \"info\": \"Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.\",\n \"advanced\": True,\n },\n }\n\n def build(self, input_value: Optional[Text] = \"\", record_template: str = \"\") -> Text:\n return super().build(input_value=input_value, record_template=record_template)\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"record_template": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "{text}",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "record_template",
"display_name": "Record Template",
"advanced": true,
"dynamic": false,
"info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"_type": "CustomComponent"
},
"description": "Display a text output in the Interaction Panel.",
"icon": "type",
"base_classes": [
"object",
"Text",
"str"
],
"display_name": "Extracted Chunks",
"documentation": "",
"custom_fields": {
"input_value": null,
"record_template": null
},
"output_types": [
"Text"
],
"field_formatters": {},
"frozen": false,
"field_order": [],
"beta": false
},
"id": "TextOutput-BDknO"
},
"selected": false,
"width": 384,
"height": 289,
"positionAbsolute": {
"x": 2322.600672827879,
"y": 604.9467307442569
},
"dragging": false
},
{
"id": "OpenAIEmbeddings-ZlOk1",
"type": "genericNode",
"position": {
"x": 1183.667250865064,
"y": 687.3171828430261
},
"data": {
"type": "OpenAIEmbeddings",
"node": {
"template": {
"allowed_special": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": [],
"fileTypes": [],
"file_path": "",
"password": false,
"name": "allowed_special",
"display_name": "Allowed Special",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"chunk_size": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 1000,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "chunk_size",
"display_name": "Chunk Size",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"client": {
"type": "Any",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "client",
"display_name": "Client",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from typing import Any, Dict, List, Optional\n\nfrom langchain_openai.embeddings.base import OpenAIEmbeddings\n\nfrom langflow.field_typing import Embeddings, NestedDict\nfrom langflow.interface.custom.custom_component import CustomComponent\n\n\nclass OpenAIEmbeddingsComponent(CustomComponent):\n display_name = \"OpenAI Embeddings\"\n description = \"Generate embeddings using OpenAI models.\"\n\n def build_config(self):\n return {\n \"allowed_special\": {\n \"display_name\": \"Allowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"default_headers\": {\n \"display_name\": \"Default Headers\",\n \"advanced\": True,\n \"field_type\": \"dict\",\n },\n \"default_query\": {\n \"display_name\": \"Default Query\",\n \"advanced\": True,\n \"field_type\": \"NestedDict\",\n },\n \"disallowed_special\": {\n \"display_name\": \"Disallowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"chunk_size\": {\"display_name\": \"Chunk Size\", \"advanced\": True},\n \"client\": {\"display_name\": \"Client\", \"advanced\": True},\n \"deployment\": {\"display_name\": \"Deployment\", \"advanced\": True},\n \"embedding_ctx_length\": {\n \"display_name\": \"Embedding Context Length\",\n \"advanced\": True,\n },\n \"max_retries\": {\"display_name\": \"Max Retries\", \"advanced\": True},\n \"model\": {\n \"display_name\": \"Model\",\n \"advanced\": False,\n \"options\": [\n \"text-embedding-3-small\",\n \"text-embedding-3-large\",\n \"text-embedding-ada-002\",\n ],\n },\n \"model_kwargs\": {\"display_name\": \"Model Kwargs\", \"advanced\": True},\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"password\": True,\n \"advanced\": True,\n },\n \"openai_api_key\": {\"display_name\": \"OpenAI API Key\", \"password\": True},\n \"openai_api_type\": {\n \"display_name\": \"OpenAI API Type\",\n \"advanced\": True,\n \"password\": True,\n },\n \"openai_api_version\": {\n \"display_name\": 
\"OpenAI API Version\",\n \"advanced\": True,\n },\n \"openai_organization\": {\n \"display_name\": \"OpenAI Organization\",\n \"advanced\": True,\n },\n \"openai_proxy\": {\"display_name\": \"OpenAI Proxy\", \"advanced\": True},\n \"request_timeout\": {\"display_name\": \"Request Timeout\", \"advanced\": True},\n \"show_progress_bar\": {\n \"display_name\": \"Show Progress Bar\",\n \"advanced\": True,\n },\n \"skip_empty\": {\"display_name\": \"Skip Empty\", \"advanced\": True},\n \"tiktoken_model_name\": {\n \"display_name\": \"TikToken Model Name\",\n \"advanced\": True,\n },\n \"tiktoken_enable\": {\"display_name\": \"TikToken Enable\", \"advanced\": True},\n }\n\n def build(\n self,\n openai_api_key: str,\n default_headers: Optional[Dict[str, str]] = None,\n default_query: Optional[NestedDict] = {},\n allowed_special: List[str] = [],\n disallowed_special: List[str] = [\"all\"],\n chunk_size: int = 1000,\n client: Optional[Any] = None,\n deployment: str = \"text-embedding-ada-002\",\n embedding_ctx_length: int = 8191,\n max_retries: int = 6,\n model: str = \"text-embedding-ada-002\",\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n openai_api_type: Optional[str] = None,\n openai_api_version: Optional[str] = None,\n openai_organization: Optional[str] = None,\n openai_proxy: Optional[str] = None,\n request_timeout: Optional[float] = None,\n show_progress_bar: bool = False,\n skip_empty: bool = False,\n tiktoken_enable: bool = True,\n tiktoken_model_name: Optional[str] = None,\n ) -> Embeddings:\n # This is to avoid errors with Vector Stores (e.g Chroma)\n if disallowed_special == [\"all\"]:\n disallowed_special = \"all\" # type: ignore\n\n return OpenAIEmbeddings(\n tiktoken_enabled=tiktoken_enable,\n default_headers=default_headers,\n default_query=default_query,\n allowed_special=set(allowed_special),\n disallowed_special=\"all\",\n chunk_size=chunk_size,\n client=client,\n deployment=deployment,\n 
embedding_ctx_length=embedding_ctx_length,\n max_retries=max_retries,\n model=model,\n model_kwargs=model_kwargs,\n base_url=openai_api_base,\n api_key=openai_api_key,\n openai_api_type=openai_api_type,\n api_version=openai_api_version,\n organization=openai_organization,\n openai_proxy=openai_proxy,\n timeout=request_timeout,\n show_progress_bar=show_progress_bar,\n skip_empty=skip_empty,\n tiktoken_model_name=tiktoken_model_name,\n )\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"default_headers": {
"type": "dict",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "default_headers",
"display_name": "Default Headers",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"default_query": {
"type": "NestedDict",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": {},
"fileTypes": [],
"file_path": "",
"password": false,
"name": "default_query",
"display_name": "Default Query",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"deployment": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": "text-embedding-ada-002",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "deployment",
"display_name": "Deployment",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"disallowed_special": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": [
"all"
],
"fileTypes": [],
"file_path": "",
"password": false,
"name": "disallowed_special",
"display_name": "Disallowed Special",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"embedding_ctx_length": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 8191,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "embedding_ctx_length",
"display_name": "Embedding Context Length",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"max_retries": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 6,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "max_retries",
"display_name": "Max Retries",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"model": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"value": "text-embedding-ada-002",
"fileTypes": [],
"file_path": "",
"password": false,
"options": [
"text-embedding-3-small",
"text-embedding-3-large",
"text-embedding-ada-002"
],
"name": "model",
"display_name": "Model",
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"model_kwargs": {
"type": "NestedDict",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": {},
"fileTypes": [],
"file_path": "",
"password": false,
"name": "model_kwargs",
"display_name": "Model Kwargs",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"openai_api_base": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": true,
"name": "openai_api_base",
"display_name": "OpenAI API Base",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_api_key": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": true,
"name": "openai_api_key",
"display_name": "OpenAI API Key",
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": ""
},
"openai_api_type": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": true,
"name": "openai_api_type",
"display_name": "OpenAI API Type",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_api_version": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "openai_api_version",
"display_name": "OpenAI API Version",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_organization": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "openai_organization",
"display_name": "OpenAI Organization",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_proxy": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "openai_proxy",
"display_name": "OpenAI Proxy",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"request_timeout": {
"type": "float",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "request_timeout",
"display_name": "Request Timeout",
"advanced": true,
"dynamic": false,
"info": "",
"rangeSpec": {
"step_type": "float",
"min": -1,
"max": 1,
"step": 0.1
},
"load_from_db": false,
"title_case": false
},
"show_progress_bar": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "show_progress_bar",
"display_name": "Show Progress Bar",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"skip_empty": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "skip_empty",
"display_name": "Skip Empty",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"tiktoken_enable": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": true,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "tiktoken_enable",
"display_name": "TikToken Enable",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"tiktoken_model_name": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "tiktoken_model_name",
"display_name": "TikToken Model Name",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"_type": "CustomComponent"
},
"description": "Generate embeddings using OpenAI models.",
"base_classes": [
"Embeddings"
],
"display_name": "OpenAI Embeddings",
"documentation": "",
"custom_fields": {
"openai_api_key": null,
"default_headers": null,
"default_query": null,
"allowed_special": null,
"disallowed_special": null,
"chunk_size": null,
"client": null,
"deployment": null,
"embedding_ctx_length": null,
"max_retries": null,
"model": null,
"model_kwargs": null,
"openai_api_base": null,
"openai_api_type": null,
"openai_api_version": null,
"openai_organization": null,
"openai_proxy": null,
"request_timeout": null,
"show_progress_bar": null,
"skip_empty": null,
"tiktoken_enable": null,
"tiktoken_model_name": null
},
"output_types": [
"Embeddings"
],
"field_formatters": {},
"frozen": false,
"field_order": [],
"beta": false
},
"id": "OpenAIEmbeddings-ZlOk1"
},
"selected": false,
"width": 384,
"height": 383,
"dragging": false
},
{
"id": "OpenAIModel-EjXlN",
"type": "genericNode",
"position": {
"x": 3410.117202077183,
"y": 431.2038048137648
},
"data": {
"type": "OpenAIModel",
"node": {
"template": {
"input_value": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "input_value",
"display_name": "Input",
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from typing import Optional\n\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.field_typing import NestedDict, Text\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n field_order = [\n \"max_tokens\",\n \"model_kwargs\",\n \"model_name\",\n \"openai_api_base\",\n \"openai_api_key\",\n \"temperature\",\n \"input_value\",\n \"system_message\",\n \"stream\",\n ]\n\n def build_config(self):\n return {\n \"input_value\": {\"display_name\": \"Input\"},\n \"max_tokens\": {\n \"display_name\": \"Max Tokens\",\n \"advanced\": True,\n },\n \"model_kwargs\": {\n \"display_name\": \"Model Kwargs\",\n \"advanced\": True,\n },\n \"model_name\": {\n \"display_name\": \"Model Name\",\n \"advanced\": False,\n \"options\": [\n \"gpt-4-turbo-preview\",\n \"gpt-3.5-turbo\",\n \"gpt-4-0125-preview\",\n \"gpt-4-1106-preview\",\n \"gpt-4-vision-preview\",\n \"gpt-3.5-turbo-0125\",\n \"gpt-3.5-turbo-1106\",\n ],\n \"value\": \"gpt-4-turbo-preview\",\n },\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"advanced\": True,\n \"info\": (\n \"The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\\n\\n\"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\"\n ),\n },\n \"openai_api_key\": {\n \"display_name\": \"OpenAI API Key\",\n \"info\": \"The OpenAI API Key to use for the OpenAI model.\",\n \"advanced\": False,\n \"password\": True,\n },\n \"temperature\": {\n \"display_name\": \"Temperature\",\n \"advanced\": False,\n \"value\": 0.1,\n },\n \"stream\": {\n \"display_name\": \"Stream\",\n \"info\": STREAM_INFO_TEXT,\n \"advanced\": True,\n },\n \"system_message\": {\n \"display_name\": \"System Message\",\n \"info\": \"System message to pass to the model.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n input_value: Text,\n openai_api_key: str,\n temperature: float,\n model_name: str,\n max_tokens: Optional[int] = 256,\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n stream: bool = False,\n system_message: Optional[str] = None,\n ) -> Text:\n if not openai_api_base:\n openai_api_base = \"https://api.openai.com/v1\"\n output = ChatOpenAI(\n max_tokens=max_tokens,\n model_kwargs=model_kwargs,\n model=model_name,\n base_url=openai_api_base,\n api_key=openai_api_key,\n temperature=temperature,\n )\n\n return self.get_chat_result(output, stream, input_value, system_message)\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"max_tokens": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 256,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "max_tokens",
"display_name": "Max Tokens",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"model_kwargs": {
"type": "NestedDict",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": {},
"fileTypes": [],
"file_path": "",
"password": false,
"name": "model_kwargs",
"display_name": "Model Kwargs",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"model_name": {
"type": "str",
"required": true,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"value": "gpt-3.5-turbo",
"fileTypes": [],
"file_path": "",
"password": false,
"options": [
"gpt-4-turbo-preview",
"gpt-3.5-turbo",
"gpt-4-0125-preview",
"gpt-4-1106-preview",
"gpt-4-vision-preview",
"gpt-3.5-turbo-0125",
"gpt-3.5-turbo-1106"
],
"name": "model_name",
"display_name": "Model Name",
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_api_base": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "openai_api_base",
"display_name": "OpenAI API Base",
"advanced": true,
"dynamic": false,
"info": "The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\n\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_api_key": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": true,
"name": "openai_api_key",
"display_name": "OpenAI API Key",
"advanced": false,
"dynamic": false,
"info": "The OpenAI API Key to use for the OpenAI model.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": ""
},
"stream": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "stream",
"display_name": "Stream",
"advanced": true,
"dynamic": false,
"info": "Stream the response from the model. Streaming works only in Chat.",
"load_from_db": false,
"title_case": false
},
"system_message": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "system_message",
"display_name": "System Message",
"advanced": true,
"dynamic": false,
"info": "System message to pass to the model.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"temperature": {
"type": "float",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 0.1,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "temperature",
"display_name": "Temperature",
"advanced": false,
"dynamic": false,
"info": "",
"rangeSpec": {
"step_type": "float",
"min": -1,
"max": 1,
"step": 0.1
},
"load_from_db": false,
"title_case": false
},
"_type": "CustomComponent"
},
"description": "Generates text using OpenAI LLMs.",
"icon": "OpenAI",
"base_classes": [
"object",
"Text",
"str"
],
"display_name": "OpenAI",
"documentation": "",
"custom_fields": {
"input_value": null,
"openai_api_key": null,
"temperature": null,
"model_name": null,
"max_tokens": null,
"model_kwargs": null,
"openai_api_base": null,
"stream": null,
"system_message": null
},
"output_types": [
"Text"
],
"field_formatters": {},
"frozen": false,
"field_order": [
"max_tokens",
"model_kwargs",
"model_name",
"openai_api_base",
"openai_api_key",
"temperature",
"input_value",
"system_message",
"stream"
],
"beta": false
},
"id": "OpenAIModel-EjXlN"
},
"selected": true,
"width": 384,
"height": 563,
"positionAbsolute": {
"x": 3410.117202077183,
"y": 431.2038048137648
},
"dragging": false
},
{
"id": "Prompt-xeI6K",
"type": "genericNode",
"position": {
"x": 2969.0261961391298,
"y": 442.1613649809069
},
"data": {
"type": "Prompt",
"node": {
"template": {
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from langchain_core.prompts import PromptTemplate\n\nfrom langflow.field_typing import Prompt, TemplateField, Text\nfrom langflow.interface.custom.custom_component import CustomComponent\n\n\nclass PromptComponent(CustomComponent):\n display_name: str = \"Prompt\"\n description: str = \"Create a prompt template with dynamic variables.\"\n icon = \"prompts\"\n\n def build_config(self):\n return {\n \"template\": TemplateField(display_name=\"Template\"),\n \"code\": TemplateField(advanced=True),\n }\n\n def build(\n self,\n template: Prompt,\n **kwargs,\n ) -> Text:\n from langflow.base.prompts.utils import dict_values_to_string\n\n prompt_template = PromptTemplate.from_template(Text(template))\n kwargs = dict_values_to_string(kwargs)\n kwargs = {k: \"\\n\".join(v) if isinstance(v, list) else v for k, v in kwargs.items()}\n try:\n formatted_prompt = prompt_template.format(**kwargs)\n except Exception as exc:\n raise ValueError(f\"Error formatting prompt: {exc}\") from exc\n self.status = f'Prompt:\\n\"{formatted_prompt}\"'\n return formatted_prompt\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"template": {
"type": "prompt",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": "{context}\n\n---\n\nGiven the context above, answer the question as best as possible.\n\nQuestion: {question}\n\nAnswer: ",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "template",
"display_name": "Template",
"advanced": false,
"input_types": [
"Text"
],
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"_type": "CustomComponent",
"context": {
"field_type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "context",
"display_name": "context",
"advanced": false,
"input_types": [
"Document",
"BaseOutputParser",
"Record",
"Text"
],
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"type": "str"
},
"question": {
"field_type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "question",
"display_name": "question",
"advanced": false,
"input_types": [
"Document",
"BaseOutputParser",
"Record",
"Text"
],
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"type": "str"
}
},
"description": "Create a prompt template with dynamic variables.",
"icon": "prompts",
"is_input": null,
"is_output": null,
"is_composition": null,
"base_classes": [
"object",
"Text",
"str"
],
"name": "",
"display_name": "Prompt",
"documentation": "",
"custom_fields": {
"template": [
"context",
"question"
]
},
"output_types": [
"Text"
],
"full_path": null,
"field_formatters": {},
"frozen": false,
"field_order": [],
"beta": false,
"error": null
},
"id": "Prompt-xeI6K",
"description": "Create a prompt template with dynamic variables.",
"display_name": "Prompt"
},
"selected": false,
"width": 384,
"height": 477,
"positionAbsolute": {
"x": 2969.0261961391298,
"y": 442.1613649809069
},
"dragging": false
},
{
"id": "ChatOutput-Q39I8",
"type": "genericNode",
"position": {
"x": 3887.2073667611485,
"y": 588.4801225794856
},
"data": {
"type": "ChatOutput",
"node": {
"template": {
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from typing import Optional, Union\n\nfrom langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.schema import Record\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Interaction Panel.\"\n icon = \"ChatOutput\"\n\n def build(\n self,\n sender: Optional[str] = \"Machine\",\n sender_name: Optional[str] = \"AI\",\n input_value: Optional[str] = None,\n session_id: Optional[str] = None,\n return_record: Optional[bool] = False,\n record_template: Optional[str] = \"{text}\",\n ) -> Union[Text, Record]:\n return super().build(\n sender=sender,\n sender_name=sender_name,\n input_value=input_value,\n session_id=session_id,\n return_record=return_record,\n record_template=record_template,\n )\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"input_value": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "input_value",
"display_name": "Message",
"advanced": false,
"input_types": [
"Text"
],
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"record_template": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "{text}",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "record_template",
"display_name": "Record Template",
"advanced": true,
"dynamic": false,
"info": "In case of Message being a Record, this template will be used to convert it to text.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"return_record": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "return_record",
"display_name": "Return Record",
"advanced": true,
"dynamic": false,
"info": "Return the message as a record containing the sender, sender_name, and session_id.",
"load_from_db": false,
"title_case": false
},
"sender": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"value": "Machine",
"fileTypes": [],
"file_path": "",
"password": false,
"options": [
"Machine",
"User"
],
"name": "sender",
"display_name": "Sender Type",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"sender_name": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": "AI",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "sender_name",
"display_name": "Sender Name",
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"session_id": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "session_id",
"display_name": "Session ID",
"advanced": true,
"dynamic": false,
"info": "If provided, the message will be stored in the memory.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"_type": "CustomComponent"
},
"description": "Display a chat message in the Interaction Panel.",
"icon": "ChatOutput",
"base_classes": [
"object",
"Text",
"Record",
"str"
],
"display_name": "Chat Output",
"documentation": "",
"custom_fields": {
"sender": null,
"sender_name": null,
"input_value": null,
"session_id": null,
"return_record": null,
"record_template": null
},
"output_types": [
"Text",
"Record"
],
"field_formatters": {},
"frozen": false,
"field_order": [],
"beta": false
},
"id": "ChatOutput-Q39I8"
},
"selected": false,
"width": 384,
"height": 383,
"positionAbsolute": {
"x": 3887.2073667611485,
"y": 588.4801225794856
},
"dragging": false
},
{
"id": "File-t0a6a",
"type": "genericNode",
"position": {
"x": 2257.233450682836,
"y": 1747.5389618367233
},
"data": {
"type": "File",
"node": {
"template": {
"path": {
"type": "file",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [
".txt",
".md",
".mdx",
".csv",
".json",
".yaml",
".yml",
".xml",
".html",
".htm",
".pdf",
".docx"
],
"file_path": "51e2b78a-199b-4054-9f32-e288eef6924c/Langflow conversation.pdf",
"password": false,
"name": "path",
"display_name": "Path",
"advanced": false,
"dynamic": false,
"info": "Supported file types: txt, md, mdx, csv, json, yaml, yml, xml, html, htm, pdf, docx",
"load_from_db": false,
"title_case": false,
"value": ""
},
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from pathlib import Path\nfrom typing import Any, Dict\n\nfrom langflow.base.data.utils import TEXT_FILE_TYPES, parse_text_file_to_record\nfrom langflow.interface.custom.custom_component import CustomComponent\nfrom langflow.schema import Record\n\n\nclass FileComponent(CustomComponent):\n display_name = \"File\"\n description = \"A generic file loader.\"\n icon = \"file-text\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"path\": {\n \"display_name\": \"Path\",\n \"field_type\": \"file\",\n \"file_types\": TEXT_FILE_TYPES,\n \"info\": f\"Supported file types: {', '.join(TEXT_FILE_TYPES)}\",\n },\n \"silent_errors\": {\n \"display_name\": \"Silent Errors\",\n \"advanced\": True,\n \"info\": \"If true, errors will not raise an exception.\",\n },\n }\n\n def load_file(self, path: str, silent_errors: bool = False) -> Record:\n resolved_path = self.resolve_path(path)\n path_obj = Path(resolved_path)\n extension = path_obj.suffix[1:].lower()\n if extension == \"doc\":\n raise ValueError(\"doc files are not supported. Please save as .docx\")\n if extension not in TEXT_FILE_TYPES:\n raise ValueError(f\"Unsupported file type: {extension}\")\n record = parse_text_file_to_record(resolved_path, silent_errors)\n self.status = record if record else \"No data\"\n return record or Record()\n\n def build(\n self,\n path: str,\n silent_errors: bool = False,\n ) -> Record:\n record = self.load_file(path, silent_errors)\n self.status = record\n return record\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"silent_errors": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "silent_errors",
"display_name": "Silent Errors",
"advanced": true,
"dynamic": false,
"info": "If true, errors will not raise an exception.",
"load_from_db": false,
"title_case": false
},
"_type": "CustomComponent"
},
"description": "A generic file loader.",
"icon": "file-text",
"base_classes": [
"Record"
],
"display_name": "File",
"documentation": "",
"custom_fields": {
"path": null,
"silent_errors": null
},
"output_types": [
"Record"
],
"field_formatters": {},
"frozen": false,
"field_order": [],
"beta": false
},
"id": "File-t0a6a"
},
"selected": false,
"width": 384,
"height": 281,
"positionAbsolute": {
"x": 2257.233450682836,
"y": 1747.5389618367233
},
"dragging": false
},
{
"id": "RecursiveCharacterTextSplitter-tR9QM",
"type": "genericNode",
"position": {
"x": 2791.013514133929,
"y": 1462.9588953494142
},
"data": {
"type": "RecursiveCharacterTextSplitter",
"node": {
"template": {
"inputs": {
"type": "Document",
"required": true,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "inputs",
"display_name": "Input",
"advanced": false,
"input_types": [
"Document",
"Record"
],
"dynamic": false,
"info": "The texts to split.",
"load_from_db": false,
"title_case": false
},
"chunk_overlap": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 200,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "chunk_overlap",
"display_name": "Chunk Overlap",
"advanced": false,
"dynamic": false,
"info": "The amount of overlap between chunks.",
"load_from_db": false,
"title_case": false
},
"chunk_size": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 1000,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "chunk_size",
"display_name": "Chunk Size",
"advanced": false,
"dynamic": false,
"info": "The maximum length of each chunk.",
"load_from_db": false,
"title_case": false
},
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from typing import Optional\n\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain_core.documents import Document\n\nfrom langflow.interface.custom.custom_component import CustomComponent\nfrom langflow.schema import Record\nfrom langflow.utils.util import build_loader_repr_from_records, unescape_string\n\n\nclass RecursiveCharacterTextSplitterComponent(CustomComponent):\n display_name: str = \"Recursive Character Text Splitter\"\n description: str = \"Split text into chunks of a specified length.\"\n documentation: str = \"https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter\"\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Input\",\n \"info\": \"The texts to split.\",\n \"input_types\": [\"Document\", \"Record\"],\n },\n \"separators\": {\n \"display_name\": \"Separators\",\n \"info\": 'The characters to split on.\\nIf left empty defaults to [\"\\\\n\\\\n\", \"\\\\n\", \" \", \"\"].',\n \"is_list\": True,\n },\n \"chunk_size\": {\n \"display_name\": \"Chunk Size\",\n \"info\": \"The maximum length of each chunk.\",\n \"field_type\": \"int\",\n \"value\": 1000,\n },\n \"chunk_overlap\": {\n \"display_name\": \"Chunk Overlap\",\n \"info\": \"The amount of overlap between chunks.\",\n \"field_type\": \"int\",\n \"value\": 200,\n },\n \"code\": {\"show\": False},\n }\n\n def build(\n self,\n inputs: list[Document],\n separators: Optional[list[str]] = None,\n chunk_size: Optional[int] = 1000,\n chunk_overlap: Optional[int] = 200,\n ) -> list[Record]:\n \"\"\"\n Split text into chunks of a specified length.\n\n Args:\n separators (list[str]): The characters to split on.\n chunk_size (int): The maximum length of each chunk.\n chunk_overlap (int): The amount of overlap between chunks.\n length_function (function): The function to use to calculate the length of the text.\n\n Returns:\n list[str]: The chunks of text.\n \"\"\"\n\n if separators == \"\":\n separators = None\n elif separators:\n # check if the separators list has escaped characters\n # if there are escaped characters, unescape them\n separators = [unescape_string(x) for x in separators]\n\n # Make sure chunk_size and chunk_overlap are ints\n if isinstance(chunk_size, str):\n chunk_size = int(chunk_size)\n if isinstance(chunk_overlap, str):\n chunk_overlap = int(chunk_overlap)\n splitter = RecursiveCharacterTextSplitter(\n separators=separators,\n chunk_size=chunk_size,\n chunk_overlap=chunk_overlap,\n )\n documents = []\n for _input in inputs:\n if isinstance(_input, Record):\n documents.append(_input.to_lc_document())\n else:\n documents.append(_input)\n docs = splitter.split_documents(documents)\n records = self.to_records(docs)\n self.repr_value = build_loader_repr_from_records(records)\n return records\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"separators": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "separators",
"display_name": "Separators",
"advanced": false,
"dynamic": false,
"info": "The characters to split on.\nIf left empty defaults to [\"\\n\\n\", \"\\n\", \" \", \"\"].",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": [
""
]
},
"_type": "CustomComponent"
},
"description": "Split text into chunks of a specified length.",
"base_classes": [
"Record"
],
"display_name": "Recursive Character Text Splitter",
"documentation": "https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter",
"custom_fields": {
"inputs": null,
"separators": null,
"chunk_size": null,
"chunk_overlap": null
},
"output_types": [
"Record"
],
"field_formatters": {},
"frozen": false,
"field_order": [],
"beta": false
},
"id": "RecursiveCharacterTextSplitter-tR9QM"
},
"selected": false,
"width": 384,
"height": 501,
"positionAbsolute": {
"x": 2791.013514133929,
"y": 1462.9588953494142
},
"dragging": false
},
{
"id": "AstraDBSearch-41nRz",
"type": "genericNode",
"position": {
"x": 1723.976434815103,
"y": 277.03317407245913
},
"data": {
"type": "AstraDBSearch",
"node": {
"template": {
"embedding": {
"type": "Embeddings",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "embedding",
"display_name": "Embedding",
"advanced": false,
"dynamic": false,
"info": "Embedding to use",
"load_from_db": false,
"title_case": false
},
"input_value": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "input_value",
"display_name": "Input Value",
"advanced": false,
"dynamic": false,
"info": "Input value to search",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"api_endpoint": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "api_endpoint",
"display_name": "API Endpoint",
"advanced": false,
"dynamic": false,
"info": "API endpoint URL for the Astra DB service.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": ""
},
"batch_size": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "batch_size",
"display_name": "Batch Size",
"advanced": true,
"dynamic": false,
"info": "Optional number of records to process in a single batch.",
"load_from_db": false,
"title_case": false
},
"bulk_delete_concurrency": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "bulk_delete_concurrency",
"display_name": "Bulk Delete Concurrency",
"advanced": true,
"dynamic": false,
"info": "Optional concurrency level for bulk delete operations.",
"load_from_db": false,
"title_case": false
},
"bulk_insert_batch_concurrency": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "bulk_insert_batch_concurrency",
"display_name": "Bulk Insert Batch Concurrency",
"advanced": true,
"dynamic": false,
"info": "Optional concurrency level for bulk insert operations.",
"load_from_db": false,
"title_case": false
},
"bulk_insert_overwrite_concurrency": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "bulk_insert_overwrite_concurrency",
"display_name": "Bulk Insert Overwrite Concurrency",
"advanced": true,
"dynamic": false,
"info": "Optional concurrency level for bulk insert operations that overwrite existing records.",
"load_from_db": false,
"title_case": false
},
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from typing import List, Optional\n\nfrom langflow.components.vectorstores.AstraDB import AstraDBVectorStoreComponent\nfrom langflow.components.vectorstores.base.model import LCVectorStoreComponent\nfrom langflow.field_typing import Embeddings, Text\nfrom langflow.schema import Record\n\n\nclass AstraDBSearchComponent(LCVectorStoreComponent):\n display_name = \"Astra DB Search\"\n description = \"Searches an existing Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"input_value\", \"embedding\"]\n\n def build_config(self):\n return {\n \"search_type\": {\n \"display_name\": \"Search Type\",\n \"options\": [\"Similarity\", \"MMR\"],\n },\n \"input_value\": {\n \"display_name\": \"Input Value\",\n \"info\": \"Input value to search\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": \"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n \"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of records to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": True,\n 
},\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing records.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n \"info\": \"Configuration mode for setting up the vector store, with options like “Sync”, “Async”, or “Off”.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n \"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n \"number_of_results\": {\n \"display_name\": \"Number of Results\",\n \"info\": \"Number of results to return.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n collection_name: str,\n input_value: Text,\n token: str,\n api_endpoint: str,\n search_type: str = \"Similarity\",\n number_of_results: int = 4,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n bulk_insert_overwrite_concurrency: 
Optional[int] = None,\n        bulk_delete_concurrency: Optional[int] = None,\n        setup_mode: str = \"Sync\",\n        pre_delete_collection: bool = False,\n        metadata_indexing_include: Optional[List[str]] = None,\n        metadata_indexing_exclude: Optional[List[str]] = None,\n        collection_indexing_policy: Optional[dict] = None,\n    ) -> List[Record]:\n        vector_store = AstraDBVectorStoreComponent().build(\n            embedding=embedding,\n            collection_name=collection_name,\n            token=token,\n            api_endpoint=api_endpoint,\n            namespace=namespace,\n            metric=metric,\n            batch_size=batch_size,\n            bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n            bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n            bulk_delete_concurrency=bulk_delete_concurrency,\n            setup_mode=setup_mode,\n            pre_delete_collection=pre_delete_collection,\n            metadata_indexing_include=metadata_indexing_include,\n            metadata_indexing_exclude=metadata_indexing_exclude,\n            collection_indexing_policy=collection_indexing_policy,\n        )\n        try:\n            return self.search_with_vector_store(input_value, search_type, vector_store, k=number_of_results)\n        except KeyError as e:\n            if \"content\" in str(e):\n                raise ValueError(\n                    \"You should ingest data through Langflow (or LangChain) to query it in Langflow. Your collection does not contain a field named 'content'.\"\n                )\n            else:\n                raise e\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"collection_indexing_policy": {
"type": "dict",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "collection_indexing_policy",
"display_name": "Collection Indexing Policy",
"advanced": true,
"dynamic": false,
"info": "Optional dictionary defining the indexing policy for the collection.",
"load_from_db": false,
"title_case": false
},
"collection_name": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "collection_name",
"display_name": "Collection Name",
"advanced": false,
"dynamic": false,
"info": "The name of the collection within Astra DB where the vectors will be stored.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": "langflow"
},
"metadata_indexing_exclude": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "metadata_indexing_exclude",
"display_name": "Metadata Indexing Exclude",
"advanced": true,
"dynamic": false,
"info": "Optional list of metadata fields to exclude from the indexing.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"metadata_indexing_include": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "metadata_indexing_include",
"display_name": "Metadata Indexing Include",
"advanced": true,
"dynamic": false,
"info": "Optional list of metadata fields to include in the indexing.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"metric": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "metric",
"display_name": "Metric",
"advanced": true,
"dynamic": false,
"info": "Optional distance metric for vector comparisons in the vector store.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"namespace": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "namespace",
"display_name": "Namespace",
"advanced": true,
"dynamic": false,
"info": "Optional namespace within Astra DB to use for the collection.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"number_of_results": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 4,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "number_of_results",
"display_name": "Number of Results",
"advanced": true,
"dynamic": false,
"info": "Number of results to return.",
"load_from_db": false,
"title_case": false
},
"pre_delete_collection": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "pre_delete_collection",
"display_name": "Pre Delete Collection",
"advanced": true,
"dynamic": false,
"info": "Boolean flag to determine whether to delete the collection before creating a new one.",
"load_from_db": false,
"title_case": false
},
"search_type": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"value": "Similarity",
"fileTypes": [],
"file_path": "",
"password": false,
"options": [
"Similarity",
"MMR"
],
"name": "search_type",
"display_name": "Search Type",
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"setup_mode": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"value": "Sync",
"fileTypes": [],
"file_path": "",
"password": false,
"options": [
"Sync",
"Async",
"Off"
],
"name": "setup_mode",
"display_name": "Setup Mode",
"advanced": true,
"dynamic": false,
"info": "Configuration mode for setting up the vector store, with options like “Sync”, “Async”, or “Off”.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"token": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": true,
"name": "token",
"display_name": "Token",
"advanced": false,
"dynamic": false,
"info": "Authentication token for accessing Astra DB.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": ""
},
"_type": "CustomComponent"
},
"description": "Searches an existing Astra DB Vector Store.",
"icon": "AstraDB",
"base_classes": [
"Record"
],
"display_name": "Astra DB Search",
"documentation": "",
"custom_fields": {
"embedding": null,
"collection_name": null,
"input_value": null,
"token": null,
"api_endpoint": null,
"search_type": null,
"number_of_results": null,
"namespace": null,
"metric": null,
"batch_size": null,
"bulk_insert_batch_concurrency": null,
"bulk_insert_overwrite_concurrency": null,
"bulk_delete_concurrency": null,
"setup_mode": null,
"pre_delete_collection": null,
"metadata_indexing_include": null,
"metadata_indexing_exclude": null,
"collection_indexing_policy": null
},
"output_types": [
"Record"
],
"field_formatters": {},
"frozen": false,
"field_order": [
"token",
"api_endpoint",
"collection_name",
"input_value",
"embedding"
],
"beta": false
},
"id": "AstraDBSearch-41nRz"
},
"selected": false,
"width": 384,
"height": 713,
"dragging": false,
"positionAbsolute": {
"x": 1723.976434815103,
"y": 277.03317407245913
}
},
{
"id": "AstraDB-eUCSS",
"type": "genericNode",
"position": {
"x": 3372.04958055989,
"y": 1611.0742035495277
},
"data": {
"type": "AstraDB",
"node": {
"template": {
"embedding": {
"type": "Embeddings",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "embedding",
"display_name": "Embedding",
"advanced": false,
"dynamic": false,
"info": "Embedding to use",
"load_from_db": false,
"title_case": false
},
"inputs": {
"type": "Record",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "inputs",
"display_name": "Inputs",
"advanced": false,
"dynamic": false,
"info": "Optional list of records to be processed and stored in the vector store.",
"load_from_db": false,
"title_case": false
},
"api_endpoint": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "api_endpoint",
"display_name": "API Endpoint",
"advanced": false,
"dynamic": false,
"info": "API endpoint URL for the Astra DB service.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": ""
},
"batch_size": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "batch_size",
"display_name": "Batch Size",
"advanced": true,
"dynamic": false,
"info": "Optional number of records to process in a single batch.",
"load_from_db": false,
"title_case": false
},
"bulk_delete_concurrency": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "bulk_delete_concurrency",
"display_name": "Bulk Delete Concurrency",
"advanced": true,
"dynamic": false,
"info": "Optional concurrency level for bulk delete operations.",
"load_from_db": false,
"title_case": false
},
"bulk_insert_batch_concurrency": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "bulk_insert_batch_concurrency",
"display_name": "Bulk Insert Batch Concurrency",
"advanced": true,
"dynamic": false,
"info": "Optional concurrency level for bulk insert operations.",
"load_from_db": false,
"title_case": false
},
"bulk_insert_overwrite_concurrency": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "bulk_insert_overwrite_concurrency",
"display_name": "Bulk Insert Overwrite Concurrency",
"advanced": true,
"dynamic": false,
"info": "Optional concurrency level for bulk insert operations that overwrite existing records.",
"load_from_db": false,
"title_case": false
},
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from typing import List, Optional\n\nfrom langchain_astradb import AstraDBVectorStore\nfrom langchain_astradb.utils.astradb import SetupMode\n\nfrom langflow.custom import CustomComponent\nfrom langflow.field_typing import Embeddings, VectorStore\nfrom langflow.schema import Record\n\n\nclass AstraDBVectorStoreComponent(CustomComponent):\n display_name = \"Astra DB\"\n description = \"Builds or loads an Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"inputs\", \"embedding\"]\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Inputs\",\n \"info\": \"Optional list of records to be processed and stored in the vector store.\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": \"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n \"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of records to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": True,\n },\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert 
Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing records.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n \"info\": \"Configuration mode for setting up the vector store, with options like “Sync”, “Async”, or “Off”.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n \"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n token: str,\n api_endpoint: str,\n collection_name: str,\n inputs: Optional[List[Record]] = None,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n bulk_insert_overwrite_concurrency: Optional[int] = None,\n bulk_delete_concurrency: Optional[int] = None,\n setup_mode: str = \"Async\",\n pre_delete_collection: bool = False,\n metadata_indexing_include: Optional[List[str]] = None,\n metadata_indexing_exclude: Optional[List[str]] = None,\n 
collection_indexing_policy: Optional[dict] = None,\n ) -> VectorStore:\n try:\n setup_mode_value = SetupMode[setup_mode.upper()]\n except KeyError:\n raise ValueError(f\"Invalid setup mode: {setup_mode}\")\n if inputs:\n documents = [_input.to_lc_document() for _input in inputs]\n\n vector_store = AstraDBVectorStore.from_documents(\n documents=documents,\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n else:\n vector_store = AstraDBVectorStore(\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n\n return vector_store\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"collection_indexing_policy": {
"type": "dict",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "collection_indexing_policy",
"display_name": "Collection Indexing Policy",
"advanced": true,
"dynamic": false,
"info": "Optional dictionary defining the indexing policy for the collection.",
"load_from_db": false,
"title_case": false
},
"collection_name": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "collection_name",
"display_name": "Collection Name",
"advanced": false,
"dynamic": false,
"info": "The name of the collection within Astra DB where the vectors will be stored.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": "langflow"
},
"metadata_indexing_exclude": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "metadata_indexing_exclude",
"display_name": "Metadata Indexing Exclude",
"advanced": true,
"dynamic": false,
"info": "Optional list of metadata fields to exclude from the indexing.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"metadata_indexing_include": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "metadata_indexing_include",
"display_name": "Metadata Indexing Include",
"advanced": true,
"dynamic": false,
"info": "Optional list of metadata fields to include in the indexing.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"metric": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "metric",
"display_name": "Metric",
"advanced": true,
"dynamic": false,
"info": "Optional distance metric for vector comparisons in the vector store.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"namespace": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "namespace",
"display_name": "Namespace",
"advanced": true,
"dynamic": false,
"info": "Optional namespace within Astra DB to use for the collection.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"pre_delete_collection": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "pre_delete_collection",
"display_name": "Pre Delete Collection",
"advanced": true,
"dynamic": false,
"info": "Boolean flag to determine whether to delete the collection before creating a new one.",
"load_from_db": false,
"title_case": false
},
"setup_mode": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"value": "Async",
"fileTypes": [],
"file_path": "",
"password": false,
"options": [
"Sync",
"Async",
"Off"
],
"name": "setup_mode",
"display_name": "Setup Mode",
"advanced": true,
"dynamic": false,
"info": "Configuration mode for setting up the vector store, with options like “Sync”, “Async”, or “Off”.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"token": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": true,
"name": "token",
"display_name": "Token",
"advanced": false,
"dynamic": false,
"info": "Authentication token for accessing Astra DB.",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": ""
},
"_type": "CustomComponent"
},
"description": "Builds or loads an Astra DB Vector Store.",
"icon": "AstraDB",
"base_classes": [
"VectorStore"
],
"display_name": "Astra DB",
"documentation": "",
"custom_fields": {
"embedding": null,
"token": null,
"api_endpoint": null,
"collection_name": null,
"inputs": null,
"namespace": null,
"metric": null,
"batch_size": null,
"bulk_insert_batch_concurrency": null,
"bulk_insert_overwrite_concurrency": null,
"bulk_delete_concurrency": null,
"setup_mode": null,
"pre_delete_collection": null,
"metadata_indexing_include": null,
"metadata_indexing_exclude": null,
"collection_indexing_policy": null
},
"output_types": [
"VectorStore"
],
"field_formatters": {},
"frozen": false,
"field_order": [
"token",
"api_endpoint",
"collection_name",
"inputs",
"embedding"
],
"beta": false
},
"id": "AstraDB-eUCSS"
},
"selected": false,
"width": 384,
"height": 573,
"positionAbsolute": {
"x": 3372.04958055989,
"y": 1611.0742035495277
},
"dragging": false
},
{
"id": "OpenAIEmbeddings-9TPjc",
"type": "genericNode",
"position": {
"x": 2814.0402191223047,
"y": 1955.9268168273086
},
"data": {
"type": "OpenAIEmbeddings",
"node": {
"template": {
"allowed_special": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": [],
"fileTypes": [],
"file_path": "",
"password": false,
"name": "allowed_special",
"display_name": "Allowed Special",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"chunk_size": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 1000,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "chunk_size",
"display_name": "Chunk Size",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"client": {
"type": "Any",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "client",
"display_name": "Client",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"code": {
"type": "code",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": true,
"value": "from typing import Any, Dict, List, Optional\n\nfrom langchain_openai.embeddings.base import OpenAIEmbeddings\n\nfrom langflow.field_typing import Embeddings, NestedDict\nfrom langflow.interface.custom.custom_component import CustomComponent\n\n\nclass OpenAIEmbeddingsComponent(CustomComponent):\n display_name = \"OpenAI Embeddings\"\n description = \"Generate embeddings using OpenAI models.\"\n\n def build_config(self):\n return {\n \"allowed_special\": {\n \"display_name\": \"Allowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"default_headers\": {\n \"display_name\": \"Default Headers\",\n \"advanced\": True,\n \"field_type\": \"dict\",\n },\n \"default_query\": {\n \"display_name\": \"Default Query\",\n \"advanced\": True,\n \"field_type\": \"NestedDict\",\n },\n \"disallowed_special\": {\n \"display_name\": \"Disallowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"chunk_size\": {\"display_name\": \"Chunk Size\", \"advanced\": True},\n \"client\": {\"display_name\": \"Client\", \"advanced\": True},\n \"deployment\": {\"display_name\": \"Deployment\", \"advanced\": True},\n \"embedding_ctx_length\": {\n \"display_name\": \"Embedding Context Length\",\n \"advanced\": True,\n },\n \"max_retries\": {\"display_name\": \"Max Retries\", \"advanced\": True},\n \"model\": {\n \"display_name\": \"Model\",\n \"advanced\": False,\n \"options\": [\n \"text-embedding-3-small\",\n \"text-embedding-3-large\",\n \"text-embedding-ada-002\",\n ],\n },\n \"model_kwargs\": {\"display_name\": \"Model Kwargs\", \"advanced\": True},\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"password\": True,\n \"advanced\": True,\n },\n \"openai_api_key\": {\"display_name\": \"OpenAI API Key\", \"password\": True},\n \"openai_api_type\": {\n \"display_name\": \"OpenAI API Type\",\n \"advanced\": True,\n \"password\": True,\n },\n \"openai_api_version\": {\n \"display_name\": 
\"OpenAI API Version\",\n \"advanced\": True,\n },\n \"openai_organization\": {\n \"display_name\": \"OpenAI Organization\",\n \"advanced\": True,\n },\n \"openai_proxy\": {\"display_name\": \"OpenAI Proxy\", \"advanced\": True},\n \"request_timeout\": {\"display_name\": \"Request Timeout\", \"advanced\": True},\n \"show_progress_bar\": {\n \"display_name\": \"Show Progress Bar\",\n \"advanced\": True,\n },\n \"skip_empty\": {\"display_name\": \"Skip Empty\", \"advanced\": True},\n \"tiktoken_model_name\": {\n \"display_name\": \"TikToken Model Name\",\n \"advanced\": True,\n },\n \"tiktoken_enable\": {\"display_name\": \"TikToken Enable\", \"advanced\": True},\n }\n\n def build(\n self,\n openai_api_key: str,\n default_headers: Optional[Dict[str, str]] = None,\n default_query: Optional[NestedDict] = {},\n allowed_special: List[str] = [],\n disallowed_special: List[str] = [\"all\"],\n chunk_size: int = 1000,\n client: Optional[Any] = None,\n deployment: str = \"text-embedding-ada-002\",\n embedding_ctx_length: int = 8191,\n max_retries: int = 6,\n model: str = \"text-embedding-ada-002\",\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n openai_api_type: Optional[str] = None,\n openai_api_version: Optional[str] = None,\n openai_organization: Optional[str] = None,\n openai_proxy: Optional[str] = None,\n request_timeout: Optional[float] = None,\n show_progress_bar: bool = False,\n skip_empty: bool = False,\n tiktoken_enable: bool = True,\n tiktoken_model_name: Optional[str] = None,\n ) -> Embeddings:\n # This is to avoid errors with Vector Stores (e.g Chroma)\n if disallowed_special == [\"all\"]:\n disallowed_special = \"all\" # type: ignore\n\n return OpenAIEmbeddings(\n tiktoken_enabled=tiktoken_enable,\n default_headers=default_headers,\n default_query=default_query,\n allowed_special=set(allowed_special),\n disallowed_special=\"all\",\n chunk_size=chunk_size,\n client=client,\n deployment=deployment,\n 
embedding_ctx_length=embedding_ctx_length,\n max_retries=max_retries,\n model=model,\n model_kwargs=model_kwargs,\n base_url=openai_api_base,\n api_key=openai_api_key,\n openai_api_type=openai_api_type,\n api_version=openai_api_version,\n organization=openai_organization,\n openai_proxy=openai_proxy,\n timeout=request_timeout,\n show_progress_bar=show_progress_bar,\n skip_empty=skip_empty,\n tiktoken_model_name=tiktoken_model_name,\n )\n",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "code",
"advanced": true,
"dynamic": true,
"info": "",
"load_from_db": false,
"title_case": false
},
"default_headers": {
"type": "dict",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "default_headers",
"display_name": "Default Headers",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"default_query": {
"type": "NestedDict",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": {},
"fileTypes": [],
"file_path": "",
"password": false,
"name": "default_query",
"display_name": "Default Query",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"deployment": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": "text-embedding-ada-002",
"fileTypes": [],
"file_path": "",
"password": false,
"name": "deployment",
"display_name": "Deployment",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"disallowed_special": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": [
"all"
],
"fileTypes": [],
"file_path": "",
"password": false,
"name": "disallowed_special",
"display_name": "Disallowed Special",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"embedding_ctx_length": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 8191,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "embedding_ctx_length",
"display_name": "Embedding Context Length",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"max_retries": {
"type": "int",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": 6,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "max_retries",
"display_name": "Max Retries",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"model": {
"type": "str",
"required": false,
"placeholder": "",
"list": true,
"show": true,
"multiline": false,
"value": "text-embedding-ada-002",
"fileTypes": [],
"file_path": "",
"password": false,
"options": [
"text-embedding-3-small",
"text-embedding-3-large",
"text-embedding-ada-002"
],
"name": "model",
"display_name": "Model",
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"model_kwargs": {
"type": "NestedDict",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": {},
"fileTypes": [],
"file_path": "",
"password": false,
"name": "model_kwargs",
"display_name": "Model Kwargs",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"openai_api_base": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": true,
"name": "openai_api_base",
"display_name": "OpenAI API Base",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_api_key": {
"type": "str",
"required": true,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": true,
"name": "openai_api_key",
"display_name": "OpenAI API Key",
"advanced": false,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
],
"value": ""
},
"openai_api_type": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": true,
"name": "openai_api_type",
"display_name": "OpenAI API Type",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_api_version": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "openai_api_version",
"display_name": "OpenAI API Version",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_organization": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "openai_organization",
"display_name": "OpenAI Organization",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"openai_proxy": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "openai_proxy",
"display_name": "OpenAI Proxy",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"request_timeout": {
"type": "float",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "request_timeout",
"display_name": "Request Timeout",
"advanced": true,
"dynamic": false,
"info": "",
"rangeSpec": {
"step_type": "float",
"min": -1,
"max": 1,
"step": 0.1
},
"load_from_db": false,
"title_case": false
},
"show_progress_bar": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "show_progress_bar",
"display_name": "Show Progress Bar",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"skip_empty": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "skip_empty",
"display_name": "Skip Empty",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"tiktoken_enable": {
"type": "bool",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"value": true,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "tiktoken_enable",
"display_name": "TikToken Enable",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false
},
"tiktoken_model_name": {
"type": "str",
"required": false,
"placeholder": "",
"list": false,
"show": true,
"multiline": false,
"fileTypes": [],
"file_path": "",
"password": false,
"name": "tiktoken_model_name",
"display_name": "TikToken Model Name",
"advanced": true,
"dynamic": false,
"info": "",
"load_from_db": false,
"title_case": false,
"input_types": [
"Text"
]
},
"_type": "CustomComponent"
},
"description": "Generate embeddings using OpenAI models.",
"base_classes": [
"Embeddings"
],
"display_name": "OpenAI Embeddings",
"documentation": "",
"custom_fields": {
"openai_api_key": null,
"default_headers": null,
"default_query": null,
"allowed_special": null,
"disallowed_special": null,
"chunk_size": null,
"client": null,
"deployment": null,
"embedding_ctx_length": null,
"max_retries": null,
"model": null,
"model_kwargs": null,
"openai_api_base": null,
"openai_api_type": null,
"openai_api_version": null,
"openai_organization": null,
"openai_proxy": null,
"request_timeout": null,
"show_progress_bar": null,
"skip_empty": null,
"tiktoken_enable": null,
"tiktoken_model_name": null
},
"output_types": [
"Embeddings"
],
"field_formatters": {},
"frozen": false,
"field_order": [],
"beta": false
},
"id": "OpenAIEmbeddings-9TPjc"
},
"selected": false,
"width": 384,
"height": 383,
"positionAbsolute": {
"x": 2814.0402191223047,
"y": 1955.9268168273086
},
"dragging": false
}
],
"edges": [
{
"source": "TextOutput-BDknO",
"target": "Prompt-xeI6K",
"sourceHandle": "{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œTextOutputœ,œidœ:œTextOutput-BDknOœ}",
"targetHandle": "{œfieldNameœ:œcontextœ,œidœ:œPrompt-xeI6Kœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œRecordœ,œTextœ],œtypeœ:œstrœ}",
"id": "reactflow__edge-TextOutput-BDknO{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œTextOutputœ,œidœ:œTextOutput-BDknOœ}-Prompt-xeI6K{œfieldNameœ:œcontextœ,œidœ:œPrompt-xeI6Kœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œRecordœ,œTextœ],œtypeœ:œstrœ}",
"data": {
"targetHandle": {
"fieldName": "context",
"id": "Prompt-xeI6K",
"inputTypes": [
"Document",
"BaseOutputParser",
"Record",
"Text"
],
"type": "str"
},
"sourceHandle": {
"baseClasses": [
"object",
"Text",
"str"
],
"dataType": "TextOutput",
"id": "TextOutput-BDknO"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"selected": false
},
{
"source": "ChatInput-yxMKE",
"target": "Prompt-xeI6K",
"sourceHandle": "{œbaseClassesœ:[œTextœ,œstrœ,œobjectœ,œRecordœ],œdataTypeœ:œChatInputœ,œidœ:œChatInput-yxMKEœ}",
"targetHandle": "{œfieldNameœ:œquestionœ,œidœ:œPrompt-xeI6Kœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œRecordœ,œTextœ],œtypeœ:œstrœ}",
"id": "reactflow__edge-ChatInput-yxMKE{œbaseClassesœ:[œTextœ,œstrœ,œobjectœ,œRecordœ],œdataTypeœ:œChatInputœ,œidœ:œChatInput-yxMKEœ}-Prompt-xeI6K{œfieldNameœ:œquestionœ,œidœ:œPrompt-xeI6Kœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œRecordœ,œTextœ],œtypeœ:œstrœ}",
"data": {
"targetHandle": {
"fieldName": "question",
"id": "Prompt-xeI6K",
"inputTypes": [
"Document",
"BaseOutputParser",
"Record",
"Text"
],
"type": "str"
},
"sourceHandle": {
"baseClasses": [
"Text",
"str",
"object",
"Record"
],
"dataType": "ChatInput",
"id": "ChatInput-yxMKE"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"selected": false
},
{
"source": "Prompt-xeI6K",
"target": "OpenAIModel-EjXlN",
"sourceHandle": "{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œPromptœ,œidœ:œPrompt-xeI6Kœ}",
"targetHandle": "{œfieldNameœ:œinput_valueœ,œidœ:œOpenAIModel-EjXlNœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}",
"id": "reactflow__edge-Prompt-xeI6K{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œPromptœ,œidœ:œPrompt-xeI6Kœ}-OpenAIModel-EjXlN{œfieldNameœ:œinput_valueœ,œidœ:œOpenAIModel-EjXlNœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}",
"data": {
"targetHandle": {
"fieldName": "input_value",
"id": "OpenAIModel-EjXlN",
"inputTypes": [
"Text"
],
"type": "str"
},
"sourceHandle": {
"baseClasses": [
"object",
"Text",
"str"
],
"dataType": "Prompt",
"id": "Prompt-xeI6K"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"selected": false
},
{
"source": "OpenAIModel-EjXlN",
"target": "ChatOutput-Q39I8",
"sourceHandle": "{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œOpenAIModelœ,œidœ:œOpenAIModel-EjXlNœ}",
"targetHandle": "{œfieldNameœ:œinput_valueœ,œidœ:œChatOutput-Q39I8œ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}",
"id": "reactflow__edge-OpenAIModel-EjXlN{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œOpenAIModelœ,œidœ:œOpenAIModel-EjXlNœ}-ChatOutput-Q39I8{œfieldNameœ:œinput_valueœ,œidœ:œChatOutput-Q39I8œ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}",
"data": {
"targetHandle": {
"fieldName": "input_value",
"id": "ChatOutput-Q39I8",
"inputTypes": [
"Text"
],
"type": "str"
},
"sourceHandle": {
"baseClasses": [
"object",
"Text",
"str"
],
"dataType": "OpenAIModel",
"id": "OpenAIModel-EjXlN"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"selected": false
},
{
"source": "File-t0a6a",
"target": "RecursiveCharacterTextSplitter-tR9QM",
"sourceHandle": "{œbaseClassesœ:[œRecordœ],œdataTypeœ:œFileœ,œidœ:œFile-t0a6aœ}",
"targetHandle": "{œfieldNameœ:œinputsœ,œidœ:œRecursiveCharacterTextSplitter-tR9QMœ,œinputTypesœ:[œDocumentœ,œRecordœ],œtypeœ:œDocumentœ}",
"id": "reactflow__edge-File-t0a6a{œbaseClassesœ:[œRecordœ],œdataTypeœ:œFileœ,œidœ:œFile-t0a6aœ}-RecursiveCharacterTextSplitter-tR9QM{œfieldNameœ:œinputsœ,œidœ:œRecursiveCharacterTextSplitter-tR9QMœ,œinputTypesœ:[œDocumentœ,œRecordœ],œtypeœ:œDocumentœ}",
"data": {
"targetHandle": {
"fieldName": "inputs",
"id": "RecursiveCharacterTextSplitter-tR9QM",
"inputTypes": [
"Document",
"Record"
],
"type": "Document"
},
"sourceHandle": {
"baseClasses": [
"Record"
],
"dataType": "File",
"id": "File-t0a6a"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"selected": false
},
{
"source": "OpenAIEmbeddings-ZlOk1",
"sourceHandle": "{œbaseClassesœ:[œEmbeddingsœ],œdataTypeœ:œOpenAIEmbeddingsœ,œidœ:œOpenAIEmbeddings-ZlOk1œ}",
"target": "AstraDBSearch-41nRz",
"targetHandle": "{œfieldNameœ:œembeddingœ,œidœ:œAstraDBSearch-41nRzœ,œinputTypesœ:null,œtypeœ:œEmbeddingsœ}",
"data": {
"targetHandle": {
"fieldName": "embedding",
"id": "AstraDBSearch-41nRz",
"inputTypes": null,
"type": "Embeddings"
},
"sourceHandle": {
"baseClasses": [
"Embeddings"
],
"dataType": "OpenAIEmbeddings",
"id": "OpenAIEmbeddings-ZlOk1"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"id": "reactflow__edge-OpenAIEmbeddings-ZlOk1{œbaseClassesœ:[œEmbeddingsœ],œdataTypeœ:œOpenAIEmbeddingsœ,œidœ:œOpenAIEmbeddings-ZlOk1œ}-AstraDBSearch-41nRz{œfieldNameœ:œembeddingœ,œidœ:œAstraDBSearch-41nRzœ,œinputTypesœ:null,œtypeœ:œEmbeddingsœ}"
},
{
"source": "ChatInput-yxMKE",
"sourceHandle": "{œbaseClassesœ:[œTextœ,œstrœ,œobjectœ,œRecordœ],œdataTypeœ:œChatInputœ,œidœ:œChatInput-yxMKEœ}",
"target": "AstraDBSearch-41nRz",
"targetHandle": "{œfieldNameœ:œinput_valueœ,œidœ:œAstraDBSearch-41nRzœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}",
"data": {
"targetHandle": {
"fieldName": "input_value",
"id": "AstraDBSearch-41nRz",
"inputTypes": [
"Text"
],
"type": "str"
},
"sourceHandle": {
"baseClasses": [
"Text",
"str",
"object",
"Record"
],
"dataType": "ChatInput",
"id": "ChatInput-yxMKE"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"id": "reactflow__edge-ChatInput-yxMKE{œbaseClassesœ:[œTextœ,œstrœ,œobjectœ,œRecordœ],œdataTypeœ:œChatInputœ,œidœ:œChatInput-yxMKEœ}-AstraDBSearch-41nRz{œfieldNameœ:œinput_valueœ,œidœ:œAstraDBSearch-41nRzœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}"
},
{
"source": "RecursiveCharacterTextSplitter-tR9QM",
"sourceHandle": "{œbaseClassesœ:[œRecordœ],œdataTypeœ:œRecursiveCharacterTextSplitterœ,œidœ:œRecursiveCharacterTextSplitter-tR9QMœ}",
"target": "AstraDB-eUCSS",
"targetHandle": "{œfieldNameœ:œinputsœ,œidœ:œAstraDB-eUCSSœ,œinputTypesœ:null,œtypeœ:œRecordœ}",
"data": {
"targetHandle": {
"fieldName": "inputs",
"id": "AstraDB-eUCSS",
"inputTypes": null,
"type": "Record"
},
"sourceHandle": {
"baseClasses": [
"Record"
],
"dataType": "RecursiveCharacterTextSplitter",
"id": "RecursiveCharacterTextSplitter-tR9QM"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"id": "reactflow__edge-RecursiveCharacterTextSplitter-tR9QM{œbaseClassesœ:[œRecordœ],œdataTypeœ:œRecursiveCharacterTextSplitterœ,œidœ:œRecursiveCharacterTextSplitter-tR9QMœ}-AstraDB-eUCSS{œfieldNameœ:œinputsœ,œidœ:œAstraDB-eUCSSœ,œinputTypesœ:null,œtypeœ:œRecordœ}",
"selected": false
},
{
"source": "OpenAIEmbeddings-9TPjc",
"sourceHandle": "{œbaseClassesœ:[œEmbeddingsœ],œdataTypeœ:œOpenAIEmbeddingsœ,œidœ:œOpenAIEmbeddings-9TPjcœ}",
"target": "AstraDB-eUCSS",
"targetHandle": "{œfieldNameœ:œembeddingœ,œidœ:œAstraDB-eUCSSœ,œinputTypesœ:null,œtypeœ:œEmbeddingsœ}",
"data": {
"targetHandle": {
"fieldName": "embedding",
"id": "AstraDB-eUCSS",
"inputTypes": null,
"type": "Embeddings"
},
"sourceHandle": {
"baseClasses": [
"Embeddings"
],
"dataType": "OpenAIEmbeddings",
"id": "OpenAIEmbeddings-9TPjc"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"id": "reactflow__edge-OpenAIEmbeddings-9TPjc{œbaseClassesœ:[œEmbeddingsœ],œdataTypeœ:œOpenAIEmbeddingsœ,œidœ:œOpenAIEmbeddings-9TPjcœ}-AstraDB-eUCSS{œfieldNameœ:œembeddingœ,œidœ:œAstraDB-eUCSSœ,œinputTypesœ:null,œtypeœ:œEmbeddingsœ}",
"selected": false
},
{
"source": "AstraDBSearch-41nRz",
"sourceHandle": "{œbaseClassesœ:[œRecordœ],œdataTypeœ:œAstraDBSearchœ,œidœ:œAstraDBSearch-41nRzœ}",
"target": "TextOutput-BDknO",
"targetHandle": "{œfieldNameœ:œinput_valueœ,œidœ:œTextOutput-BDknOœ,œinputTypesœ:[œRecordœ,œTextœ],œtypeœ:œstrœ}",
"data": {
"targetHandle": {
"fieldName": "input_value",
"id": "TextOutput-BDknO",
"inputTypes": [
"Record",
"Text"
],
"type": "str"
},
"sourceHandle": {
"baseClasses": [
"Record"
],
"dataType": "AstraDBSearch",
"id": "AstraDBSearch-41nRz"
}
},
"style": {
"stroke": "#555"
},
"className": "stroke-gray-900 stroke-connection",
"id": "reactflow__edge-AstraDBSearch-41nRz{œbaseClassesœ:[œRecordœ],œdataTypeœ:œAstraDBSearchœ,œidœ:œAstraDBSearch-41nRzœ}-TextOutput-BDknO{œfieldNameœ:œinput_valueœ,œidœ:œTextOutput-BDknOœ,œinputTypesœ:[œRecordœ,œTextœ],œtypeœ:œstrœ}"
}
],
"viewport": {
"x": -259.6782520315529,
"y": 90.3428735006047,
"zoom": 0.2687057134854984
}
},
"description": "Visit https://pre-release.langflow.org/guides/rag-with-astradb for a detailed guide of this project.\nThis project give you both Ingestion and RAG in a single file. You'll need to visit https://astra.datastax.com/ to create an Astra DB instance, your Token and grab an API Endpoint.\nRunning this project requires you to add a file in the Files component, then define a Collection Name and click on the Play icon on the Astra DB component. \n\nAfter the ingestion ends you are ready to click on the Run button at the lower left corner and start asking questions about your data.",
"name": "Vector Store RAG",
"last_tested_version": "1.0.0a0",
"is_component": false
}