diff --git a/docs/docs/components/experimental.mdx b/docs/docs/components/experimental.mdx
index 036fa334c..735a2de34 100644
--- a/docs/docs/components/experimental.mdx
+++ b/docs/docs/components/experimental.mdx
@@ -24,15 +24,15 @@ Provide the session ID to clear its message history.

 ---

-## Extract Key From Record
+## Extract Key From Data

 This component extracts specified keys from a record.

 **Parameters**

-- **Record:**
+- **Data:**

-  - **Display Name:** Record
+  - **Display Name:** Data
   - **Info:** The record from which to extract keys.

 - **Keys:**
@@ -138,9 +138,9 @@ This component generates a notification.
   - **Display Name:** Name
   - **Info:** The notification's name.

-- **Record:**
+- **Data:**

-  - **Display Name:** Record
+  - **Display Name:** Data
   - **Info:** Optionally, a record to store in the notification.

 - **Append:**
diff --git a/docs/docs/components/helpers.mdx b/docs/docs/components/helpers.mdx
index f95c43b9d..ebbde99ad 100644
--- a/docs/docs/components/helpers.mdx
+++ b/docs/docs/components/helpers.mdx
@@ -13,7 +13,7 @@ This component retrieves stored chat messages based on a specific session ID.

 - **Number of messages:** Number of messages to retrieve.
 - **Session ID:** The session ID of the chat history.
 - **Order:** Choose the message order, either "Ascending" or "Descending".
-- **Record template:** (Optional) Template to convert a record to text. If left empty, the system dynamically sets it to the record's text key.
+- **Data template:** (Optional) Template to convert a record to text. If left empty, the system dynamically sets it to the record's text key.

 ---

@@ -124,5 +124,5 @@ Update a record with text-based key/value pairs, similar to updating a Python dictionary.

 #### Parameters

-- **Record:** The record to update.
+- **Data:** The record to update.
 - **New data:** The new data to update the record with.
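The "Data template" parameter renamed above is easier to follow with a concrete example. The sketch below is a standalone approximation, not Langflow's actual implementation (the real helper, `data_to_text` in `langflow.helpers.record`, operates on `Data` objects): it fills `{key}` placeholders from a plain dict and, when the template is empty, falls back to the record's text key, which is the behavior the docs describe.

```python
# Standalone approximation of the "Data template" behavior described in the
# docs being changed above. NOT Langflow's real implementation; names here
# are illustrative only.

def render_data_template(template: str, data: dict, text_key: str = "text") -> str:
    """Fill {key} placeholders in the template from a key-value record."""
    if not template:
        # Empty template: fall back to the record's text key, e.g. "{text}".
        template = "{" + text_key + "}"
    return template.format(**data)

record = {"name": "John Doe", "age": 30, "text": "Hello!"}
print(render_data_template("Name: {name}, Age: {age}", record))  # Name: John Doe, Age: 30
print(render_data_template("", record))  # Hello!
```

With this mental model, the rename below is purely cosmetic: the template mechanics are unchanged, only the type is now called `Data`.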
diff --git a/docs/docs/components/inputs-and-outputs.mdx b/docs/docs/components/inputs-and-outputs.mdx
index 484afc6b9..a35976d31 100644
--- a/docs/docs/components/inputs-and-outputs.mdx
+++ b/docs/docs/components/inputs-and-outputs.mdx
@@ -8,11 +8,11 @@ They also dynamically change the Playground and can be renamed to facilitate bui

 ## Inputs

-Inputs are components used to define where data enters your flow. They can receive data from the user, a database, or any other source that can be converted to Text or Record.
+Inputs are components used to define where data enters your flow. They can receive data from the user, a database, or any other source that can be converted to Text or Data.

 The difference between Chat Input and other Input components is the output format, the number of configurable fields, and the way they are displayed in the Playground.

-Chat Input components can output `Text` or `Record`. When you want to pass the sender name or sender to the next component, use the `Record` output. To pass only the message, use the `Text` output, useful when saving the message to a database or memory system like Zep.
+Chat Input components can output `Text` or `Data`. When you want to pass the sender name or sender to the next component, use the `Data` output. To pass only the message, use the `Text` output, useful when saving the message to a database or memory system like Zep.

 You can find out more about Chat Input and other Inputs [here](#chat-input).

@@ -38,8 +38,8 @@ This component collects user input from the chat.

-  If `As Record` is `true` and the `Message` is a `Record`, the data of the
-  `Record` will be updated with the `Sender`, `Sender Name`, and `Session ID`.
+  If `As Data` is `true` and the `Message` is a `Data`, the data of the `Data`
+  will be updated with the `Sender`, `Sender Name`, and `Session ID`.

@@ -70,11 +70,11 @@ The **Text Input** component adds an **Input** field on the Playground. This ena

 **Parameters**

 - **Value:** Specifies the text input value. This is where the user inputs text data that will be passed to the next component in the sequence. If no value is provided, it defaults to an empty string.
-- **Record Template:** Specifies how a `Record` should be converted into `Text`.
+- **Data Template:** Specifies how a `Data` should be converted into `Text`.

-The **Record Template** field is used to specify how a `Record` should be converted into `Text`. This is particularly useful when you want to extract specific information from a `Record` and pass it as text to the next component in the sequence.
+The **Data Template** field is used to specify how a `Data` should be converted into `Text`. This is particularly useful when you want to extract specific information from a `Data` and pass it as text to the next component in the sequence.

-For example, if you have a `Record` with the following structure:
+For example, if you have a `Data` with the following structure:

 ```json
 {
@@ -84,9 +84,9 @@ For example, if you have a `Record` with the following structure:
 }
 ```

-A template with `Name: {name}, Age: {age}` will convert the `Record` into a text string of `Name: John Doe, Age: 30`.
+A template with `Name: {name}, Age: {age}` will convert the `Data` into a text string of `Name: John Doe, Age: 30`.

-If you pass more than one `Record`, the text will be concatenated with a new line separator.
+If you pass more than one `Data`, the text will be concatenated with a new line separator.

 ## Outputs

@@ -112,8 +112,8 @@ This component sends a message to the chat.

-  If `As Record` is `true` and the `Message` is a `Record`, the data in the
-  `Record` is updated with the `Sender`, `Sender Name`, and `Session ID`.
+  If `As Data` is `true` and the `Message` is a `Data`, the data in the `Data`
+  is updated with the `Sender`, `Sender Name`, and `Session ID`.

@@ -154,4 +154,5 @@ The `PromptTemplate` component enables users to create prompts and define variab

 After defining a variable in the prompt template, it acts as its own component input. See [Prompt Customization](../administration/prompt-customization) for more details.

-- **template:** The template used to format an individual request.
+- **template:** The template used to format an individual request.import Admonition from "@theme/Admonition";
+import ZoomableImage from "/src/theme/ZoomableImage.js";
diff --git a/docs/docs/components/text-and-record.mdx b/docs/docs/components/text-and-record.mdx
index 24c16e4aa..6ae43fcbb 100644
--- a/docs/docs/components/text-and-record.mdx
+++ b/docs/docs/components/text-and-record.mdx
@@ -1,14 +1,14 @@
-# Text and Record
+# Text and Data

-In Langflow 1.0, we added two main input and output types: `Text` and `Record`.
+In Langflow 1.0, we added two main input and output types: `Text` and `Data`.

-`Text` is a simple string input and output type, while `Record` is a structure very similar to a dictionary in Python. It is a key-value pair data structure.
+`Text` is a simple string input and output type, while `Data` is a structure very similar to a dictionary in Python. It is a key-value pair data structure.

 We've created a few components to help you work with these types. Let's see how a few of them work.

 ## Records To Text

-This is a component that takes in Records and outputs a `Text`. It does this using a template string and concatenating the values of the `Record`, one per line.
+This is a component that takes in Records and outputs a `Text`. It does this using a template string and concatenating the values of the `Data`, one per line.

 If we have the following Records:

@@ -32,13 +32,13 @@ Alice: Hello!
 John: Hi!
 ```

-## Create Record
+## Create Data

-This component allows you to create a `Record` from a number of inputs. You can add as many key-value pairs as you want (as long as it is less than 15). Once you've picked that number you'll need to write the name of the Key and can pass `Text` values from other components to it.
+This component allows you to create a `Data` from a number of inputs. You can add as many key-value pairs as you want (as long as it is less than 15). Once you've picked that number you'll need to write the name of the Key and can pass `Text` values from other components to it.

 ## Documents To Records

-This component takes in a LangChain `Document` and outputs a `Record`. It does this by extracting the `page_content` and the `metadata` from the `Document` and adding them to the `Record` as text and data respectively.
+This component takes in a LangChain `Document` and outputs a `Data`. It does this by extracting the `page_content` and the `metadata` from the `Document` and adding them to the `Data` as text and data respectively.

 ## Why is this useful?
diff --git a/docs/docs/examples/create-record.mdx b/docs/docs/examples/create-record.mdx
index aa7a886f4..9f651c336 100644
--- a/docs/docs/examples/create-record.mdx
+++ b/docs/docs/examples/create-record.mdx
@@ -4,14 +4,18 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
 import ReactPlayer from "react-player";
 import Admonition from "@theme/Admonition";

-# Create Record
+# Create Data

-In Langflow, a `Record` has a structure very similar to a Python dictionary. It is a key-value pair data structure.
+In Langflow, a `Data` has a structure very similar to a Python dictionary. It is a key-value pair data structure.

-The **Create Record** component allows you to dynamically create a `Record` from a specified number of inputs. You can add as many key-value pairs as you want (as long as it is less than 15 😅). Once you've chosen the number of `Records`, add keys and fill up values, or pass on values from other components to the component using the input handles.
+The **Create Data** component allows you to dynamically create a `Data` from a specified number of inputs. You can add as many key-value pairs as you want (as long as it is less than 15 😅). Once you've chosen the number of `Records`, add keys and fill up values, or pass on values from other components to the component using the input handles.
+
+import ThemedImage from "@theme/ThemedImage"; import useBaseUrl from
+"@docusaurus/useBaseUrl"; import ZoomableImage from
+"/src/theme/ZoomableImage.js"; import ReactPlayer from "react-player"; import
+Admonition from "@theme/Admonition";
diff --git a/docs/docs/integrations/notion/list-database-properties.md b/docs/docs/integrations/notion/list-database-properties.md
index c41159893..2056fa81d 100644
--- a/docs/docs/integrations/notion/list-database-properties.md
+++ b/docs/docs/integrations/notion/list-database-properties.md
@@ -33,7 +33,7 @@ import requests
 from typing import Dict

 from langflow import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data


 class NotionDatabaseProperties(CustomComponent):
@@ -61,7 +61,7 @@ class NotionDatabaseProperties(CustomComponent):
         self,
         database_id: str,
         notion_secret: str,
-    ) -> Record:
+    ) -> Data:
         url = f"https://api.notion.com/v1/databases/{database_id}"
         headers = {
             "Authorization": f"Bearer {notion_secret}",
@@ -74,7 +74,7 @@ class NotionDatabaseProperties(CustomComponent):
         data = response.json()
         properties = data.get("properties", {})

-        record = Record(text=str(response.json()), data=properties)
+        record = Data(text=str(response.json()), data=properties)
         self.status = f"Retrieved {len(properties)} properties from the Notion database.\n {record.text}"
         return record
 ```
diff --git a/docs/docs/integrations/notion/list-pages.md b/docs/docs/integrations/notion/list-pages.md
index ea1b04950..f7f1ebd49 100644
--- a/docs/docs/integrations/notion/list-pages.md
+++ b/docs/docs/integrations/notion/list-pages.md
@@ -39,7 +39,7 @@ import requests
 import json
 from typing import Dict, Any, List

 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data

 class NotionListPages(CustomComponent):
     display_name = "List Pages [Notion]"
@@ -83,7 +83,7 @@ class NotionListPages(CustomComponent):
         notion_secret: str,
         database_id: str,
         query_payload: str = "{}",
-    ) -> List[Record]:
+    ) -> List[Data]:
         try:
             query_data = json.loads(query_payload)
             filter_obj = query_data.get("filter")
@@ -127,14 +127,14 @@ class NotionListPages(CustomComponent):
                 )
                 combined_text += text

-                records.append(Record(text=text, data=page_data))
+                records.append(Data(text=text, data=page_data))

             self.status = combined_text.strip()
             return records
         except Exception as e:
             self.status = f"An error occurred: {str(e)}"
-            return [Record(text=self.status, data=[])]
+            return [Data(text=self.status, data=[])]
 ```
diff --git a/docs/docs/integrations/notion/list-users.md b/docs/docs/integrations/notion/list-users.md
index 0eb8236f5..0dc9a771e 100644
--- a/docs/docs/integrations/notion/list-users.md
+++ b/docs/docs/integrations/notion/list-users.md
@@ -30,7 +30,7 @@ import requests
 from typing import List

 from langflow import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data


 class NotionUserList(CustomComponent):
@@ -52,7 +52,7 @@ class NotionUserList(CustomComponent):
     def build(
         self,
         notion_secret: str,
-    ) -> List[Record]:
+    ) -> List[Data]:
         url = "https://api.notion.com/v1/users"
         headers = {
             "Authorization": f"Bearer {notion_secret}",
@@ -84,7 +84,7 @@ class NotionUserList(CustomComponent):
                 output += f"{key.replace('_', ' ').title()}: {value}\n"
             output += "________________________\n"

-            record = Record(text=output, data=record_data)
+            record = Data(text=output, data=record_data)
             records.append(record)

         self.status = "\n".join(record.text for record in records)
diff --git a/docs/docs/integrations/notion/page-content-viewer.md b/docs/docs/integrations/notion/page-content-viewer.md
index f4eeba052..070d71800 100644
--- a/docs/docs/integrations/notion/page-content-viewer.md
+++ b/docs/docs/integrations/notion/page-content-viewer.md
@@ -36,7 +36,7 @@ import requests
 from typing import Dict, Any

 from langflow import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data


 class NotionPageContent(CustomComponent):
@@ -64,7 +64,7 @@ class NotionPageContent(CustomComponent):
         self,
         page_id: str,
         notion_secret: str,
-    ) -> Record:
+    ) -> Data:
         blocks_url = f"https://api.notion.com/v1/blocks/{page_id}/children?page_size=100"
         headers = {
             "Authorization": f"Bearer {notion_secret}",
@@ -80,7 +80,7 @@ class NotionPageContent(CustomComponent):
         content = self.parse_blocks(blocks_data["results"])
         self.status = content
-        return Record(data={"content": content}, text=content)
+        return Data(data={"content": content}, text=content)

     def parse_blocks(self, blocks: list) -> str:
         content = ""
diff --git a/docs/docs/integrations/notion/page-update.md b/docs/docs/integrations/notion/page-update.md
index b48efbba6..3ed8f7740 100644
--- a/docs/docs/integrations/notion/page-update.md
+++ b/docs/docs/integrations/notion/page-update.md
@@ -26,7 +26,7 @@ import requests
 from typing import Dict, Any

 from langflow import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data


 class NotionPageUpdate(CustomComponent):
@@ -61,7 +61,7 @@ class NotionPageUpdate(CustomComponent):
         page_id: str,
         properties: str,
         notion_secret: str,
-    ) -> Record:
+    ) -> Data:
         url = f"https://api.notion.com/v1/pages/{page_id}"
         headers = {
             "Authorization": f"Bearer {notion_secret}",
@@ -88,7 +88,7 @@ class NotionPageUpdate(CustomComponent):
             output += f"{prop_name}: {prop_value}\n"

         self.status = output
-        return Record(data=updated_page)
+        return Data(data=updated_page)
 ```

 Let's break down the key parts of this component:
@@ -99,7 +99,7 @@ Let's break down the key parts of this component:

 - The component interacts with the Notion API to update the page properties. It constructs the API URL, headers, and request data based on the provided parameters.

-- The processed data is returned as a `Record` object, which can be connected to other components in the Langflow flow. The `Record` object contains the updated page data.
+- The processed data is returned as a `Data` object, which can be connected to other components in the Langflow flow. The `Data` object contains the updated page data.

 - The component also stores the updated page properties in the `status` attribute for logging and debugging purposes.
diff --git a/docs/docs/integrations/notion/search.md b/docs/docs/integrations/notion/search.md
index a972bffc0..35ae4ff5a 100644
--- a/docs/docs/integrations/notion/search.md
+++ b/docs/docs/integrations/notion/search.md
@@ -36,7 +36,7 @@ To use the `NotionSearch` component in a Langflow flow, follow these steps:
 import requests
 from typing import Dict, Any, List

 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data

 class NotionSearch(CustomComponent):
     display_name = "Search Notion"
@@ -88,7 +88,7 @@ class NotionSearch(CustomComponent):
         query: str = "",
         filter_value: str = "page",
         sort_direction: str = "descending",
-    ) -> List[Record]:
+    ) -> List[Data]:
         try:
             url = "https://api.notion.com/v1/search"
             headers = {
@@ -135,14 +135,14 @@ class NotionSearch(CustomComponent):
                 text += f"type: {result['object']}\nlast_edited_time: {result['last_edited_time']}\n\n"
                 combined_text += text

-                records.append(Record(text=text, data=result_data))
+                records.append(Data(text=text, data=result_data))

             self.status = combined_text
             return records
         except Exception as e:
             self.status = f"An error occurred: {str(e)}"
-            return [Record(text=self.status, data=[])]
+            return [Data(text=self.status, data=[])]
 ```

 ## Example Usage
diff --git a/docs/docs/migration/migrating-to-one-point-zero.mdx b/docs/docs/migration/migrating-to-one-point-zero.mdx
index e58620347..6b608adcf 100644
--- a/docs/docs/migration/migrating-to-one-point-zero.mdx
+++ b/docs/docs/migration/migrating-to-one-point-zero.mdx
@@ -16,7 +16,7 @@ We have a special channel in our Discord server dedicated to Langflow 1.0 migrat
 - Continued support for LangChain and new support for multiple frameworks
 - Redesigned sidebar and customizable interaction panel
 - New Native Categories and Components
-- Improved user experience with Text and Record modes
+- Improved user experience with Text and Data modes
 - CustomComponent for all components
 - Compatibility with previous versions using Runnable Executor
 - Multiple flows in the canvas
@@ -58,11 +58,11 @@ Langflow 1.0 introduces many new native categories, including Inputs, Outputs, H

 **Guide coming soon**

-## New Way of Using Langflow: Text and Record (and more to come)
+## New Way of Using Langflow: Text and Data (and more to come)

-With the introduction of Text and Record types connections between Components are more intuitive and easier to understand. This is the first step in a series of improvements to the way you interact with Langflow. Learn how to use Text, and Record and how they help you build better flows.
+With the introduction of Text and Data types connections between Components are more intuitive and easier to understand. This is the first step in a series of improvements to the way you interact with Langflow. Learn how to use Text, and Data and how they help you build better flows.

-[Learn more about Text and Record](../components/text-and-record)
+[Learn more about Text and Data](../components/text-and-record)

 ## CustomComponent for All Components
diff --git a/src/backend/base/langflow/base/agents/agent.py b/src/backend/base/langflow/base/agents/agent.py
index d4328032d..b24cf1984 100644
--- a/src/backend/base/langflow/base/agents/agent.py
+++ b/src/backend/base/langflow/base/agents/agent.py
@@ -4,10 +4,10 @@ from langchain.agents import AgentExecutor, BaseMultiActionAgent, BaseSingleActi
 from langchain_core.messages import BaseMessage
 from langchain_core.runnables import Runnable

-from langflow.base.agents.utils import get_agents_list, records_to_messages
+from langflow.base.agents.utils import data_to_messages, get_agents_list
 from langflow.custom import CustomComponent
 from langflow.field_typing import Text, Tool
-from langflow.schema import Record
+from langflow.schema import Data


 class LCAgentComponent(CustomComponent):
@@ -49,7 +49,7 @@ class LCAgentComponent(CustomComponent):
         agent: Union[Runnable, BaseSingleActionAgent, BaseMultiActionAgent, AgentExecutor],
         inputs: str,
         tools: List[Tool],
-        message_history: Optional[List[Record]] = None,
+        message_history: Optional[List[Data]] = None,
         handle_parsing_errors: bool = True,
         output_key: str = "output",
     ) -> Text:
@@ -64,7 +64,7 @@ class LCAgentComponent(CustomComponent):
         )
         input_dict: dict[str, str | list[BaseMessage]] = {"input": inputs}
         if message_history:
-            input_dict["chat_history"] = records_to_messages(message_history)
+            input_dict["chat_history"] = data_to_messages(message_history)
         result = await runnable.ainvoke(input_dict)
         self.status = result
         if output_key in result:
diff --git a/src/backend/base/langflow/base/agents/utils.py b/src/backend/base/langflow/base/agents/utils.py
index 781fa2362..2651ecb4a 100644
--- a/src/backend/base/langflow/base/agents/utils.py
+++ b/src/backend/base/langflow/base/agents/utils.py
@@ -13,7 +13,7 @@ from langchain_core.prompts import BasePromptTemplate, ChatPromptTemplate
 from langchain_core.tools import BaseTool
 from pydantic import BaseModel

-from langflow.schema import Record
+from langflow.schema import Data

 from .default_prompts import XML_AGENT_PROMPT

@@ -34,17 +34,17 @@ class AgentSpec(BaseModel):
     hub_repo: Optional[str] = None


-def records_to_messages(records: List[Record]) -> List[BaseMessage]:
+def data_to_messages(data: List[Data]) -> List[BaseMessage]:
     """
-    Convert a list of records to a list of messages.
+    Convert a list of data to a list of messages.

     Args:
-        records (List[Record]): The records to convert.
+        data (List[Data]): The data to convert.

     Returns:
-        List[Message]: The records as messages.
+        List[Message]: The data as messages.
     """
-    return [record.to_lc_message() for record in records]
+    return [value.to_lc_message() for value in data]


 def validate_and_create_xml_agent(
diff --git a/src/backend/base/langflow/base/data/utils.py b/src/backend/base/langflow/base/data/utils.py
index 9bad1dabf..a89696221 100644
--- a/src/backend/base/langflow/base/data/utils.py
+++ b/src/backend/base/langflow/base/data/utils.py
@@ -8,7 +8,7 @@ import chardet
 import orjson
 import yaml

-from langflow.schema import Record
+from langflow.schema import Data

 # Types of files that can be read simply by file.read()
 # and have 100% to be completely readable
@@ -82,7 +82,7 @@ def retrieve_file_paths(

 # ! Removing unstructured dependency until
 # ! 3.12 is supported
-# def partition_file_to_record(file_path: str, silent_errors: bool) -> Optional[Record]:
+# def partition_file_to_record(file_path: str, silent_errors: bool) -> Optional[Data]:
 #     # Use the partition function to load the file
 #     from unstructured.partition.auto import partition  # type: ignore
@@ -93,11 +93,11 @@ def retrieve_file_paths(
 #             raise ValueError(f"Error loading file {file_path}: {e}") from e
 #         return None

-#     # Create a Record
+#     # Create a Data
 #     text = "\n\n".join([Text(el) for el in elements])
 #     metadata = elements.metadata if hasattr(elements, "metadata") else {}
 #     metadata["file_path"] = file_path
-#     record = Record(text=text, data=metadata)
+#     record = Data(text=text, data=metadata)
 #     return record


@@ -129,7 +129,7 @@ def parse_pdf_to_text(file_path: str) -> str:
         return "\n\n".join([page.extract_text() for page in reader.pages])


-def parse_text_file_to_record(file_path: str, silent_errors: bool) -> Optional[Record]:
+def parse_text_file_to_record(file_path: str, silent_errors: bool) -> Optional[Data]:
     try:
         if file_path.endswith(".pdf"):
             text = parse_pdf_to_text(file_path)
@@ -156,7 +156,7 @@ def parse_text_file_to_record(file_path: str, silent_errors: bool) -> Optional[R
             raise ValueError(f"Error loading file {file_path}: {e}") from e
         return None

-    record = Record(data={"file_path": file_path, "text": text})
+    record = Data(data={"file_path": file_path, "text": text})
     return record


@@ -167,21 +167,21 @@ def parse_text_file_to_record(file_path: str, silent_errors: bool) -> Optional[R
 #     silent_errors: bool,
 #     max_concurrency: int,
 #     use_multithreading: bool,
-# ) -> List[Optional[Record]]:
+# ) -> List[Optional[Data]]:
 #     if use_multithreading:
-#         records = parallel_load_records(file_paths, silent_errors, max_concurrency)
+#         data = parallel_load_data(file_paths, silent_errors, max_concurrency)
 #     else:
-#         records = [partition_file_to_record(file_path, silent_errors) for file_path in file_paths]
-#     records = list(filter(None, records))
-#     return records
+#         data = [partition_file_to_record(file_path, silent_errors) for file_path in file_paths]
+#     data = list(filter(None, data))
+#     return data


-def parallel_load_records(
+def parallel_load_data(
     file_paths: List[str],
     silent_errors: bool,
     max_concurrency: int,
     load_function: Callable = parse_text_file_to_record,
-) -> List[Optional[Record]]:
+) -> List[Optional[Data]]:
     with futures.ThreadPoolExecutor(max_workers=max_concurrency) as executor:
         loaded_files = executor.map(
             lambda file_path: load_function(file_path, silent_errors),
diff --git a/src/backend/base/langflow/base/flow_processing/utils.py b/src/backend/base/langflow/base/flow_processing/utils.py
index 1f756a1db..14420dac0 100644
--- a/src/backend/base/langflow/base/flow_processing/utils.py
+++ b/src/backend/base/langflow/base/flow_processing/utils.py
@@ -1,67 +1,67 @@
 from typing import List

 from langflow.graph.schema import ResultData, RunOutputs
-from langflow.schema import Record
+from langflow.schema import Data


-def build_records_from_run_outputs(run_outputs: RunOutputs) -> List[Record]:
+def build_data_from_run_outputs(run_outputs: RunOutputs) -> List[Data]:
     """
-    Build a list of records from the given RunOutputs.
+    Build a list of data from the given RunOutputs.

     Args:
         run_outputs (RunOutputs): The RunOutputs object containing the output data.

     Returns:
-        List[Record]: A list of records built from the RunOutputs.
+        List[Data]: A list of data built from the RunOutputs.
     """
     if not run_outputs:
         return []
-    records = []
+    data = []
     for result_data in run_outputs.outputs:
         if result_data:
-            records.extend(build_records_from_result_data(result_data))
-    return records
+            data.extend(build_data_from_result_data(result_data))
+    return data


-def build_records_from_result_data(result_data: ResultData, get_final_results_only: bool = True) -> List[Record]:
+def build_data_from_result_data(result_data: ResultData, get_final_results_only: bool = True) -> List[Data]:
     """
-    Build a list of records from the given ResultData.
+    Build a list of data from the given ResultData.

     Args:
         result_data (ResultData): The ResultData object containing the result data.
         get_final_results_only (bool, optional): Whether to include only final results. Defaults to True.

     Returns:
-        List[Record]: A list of records built from the ResultData.
+        List[Data]: A list of data built from the ResultData.
     """
     messages = result_data.messages
     if not messages:
         return []
-    records = []
+    data = []
     for message in messages:
         message_dict = message if isinstance(message, dict) else message.model_dump()
         if get_final_results_only:
             result_data_dict = result_data.model_dump()
             results = result_data_dict.get("results", {})
             inner_result = results.get("result", {})
-        record = Record(data={"result": inner_result, "message": message_dict}, text_key="result")
-        records.append(record)
-    return records
+        record = Data(data={"result": inner_result, "message": message_dict}, text_key="result")
+        data.append(record)
+    return data


-def format_flow_output_records(records: List[Record]) -> str:
+def format_flow_output_data(data: List[Data]) -> str:
     """
-    Format the flow output records into a string.
+    Format the flow output data into a string.

     Args:
-        records (List[Record]): The list of records to format.
+        data (List[Data]): The list of data to format.

     Returns:
-        str: The formatted flow output records.
+        str: The formatted flow output data.
     """
     result = "Flow run output:\n"
-    results = "\n".join([record.result for record in records if record.data["message"]])
+    results = "\n".join([value.result for value in data if value.data["message"]])
     return result + results
diff --git a/src/backend/base/langflow/base/io/chat.py b/src/backend/base/langflow/base/io/chat.py
index d2894a623..ab14924bd 100644
--- a/src/backend/base/langflow/base/io/chat.py
+++ b/src/backend/base/langflow/base/io/chat.py
@@ -3,7 +3,7 @@ from typing import Optional, Union
 from langflow.base.data.utils import IMG_FILE_TYPES, TEXT_FILE_TYPES
 from langflow.custom import Component
 from langflow.memory import store_message
-from langflow.schema import Record
+from langflow.schema import Data
 from langflow.schema.message import Message


@@ -35,9 +35,9 @@ class ChatComponent(Component):
                 "advanced": True,
             },
             "record_template": {
-                "display_name": "Record Template",
+                "display_name": "Data Template",
                 "multiline": True,
-                "info": "In case of Message being a Record, this template will be used to convert it to text.",
+                "info": "In case of Message being a Data, this template will be used to convert it to text.",
                 "advanced": True,
             },
             "files": {
@@ -65,14 +65,14 @@ class ChatComponent(Component):
         self,
         sender: Optional[str] = "User",
         sender_name: Optional[str] = "User",
-        input_value: Optional[Union[str, Record, Message]] = None,
+        input_value: Optional[Union[str, Data, Message]] = None,
         files: Optional[list[str]] = None,
         session_id: Optional[str] = None,
         return_message: Optional[bool] = False,
     ) -> Message:
         message: Message | None = None

-        if isinstance(input_value, Record):
+        if isinstance(input_value, Data):
             # Update the data of the record
             message = Message.from_record(input_value)
         else:
diff --git a/src/backend/base/langflow/base/io/text.py b/src/backend/base/langflow/base/io/text.py
index a9ec48848..2c5b1da26 100644
--- a/src/backend/base/langflow/base/io/text.py
+++ b/src/backend/base/langflow/base/io/text.py
@@ -2,8 +2,8 @@ from typing import Optional

 from langflow.custom import Component
 from langflow.field_typing import Text
-from langflow.helpers.record import records_to_text
-from langflow.schema import Record
+from langflow.helpers.record import data_to_text
+from langflow.schema import Data


 class TextComponent(Component):
@@ -14,13 +14,13 @@ class TextComponent(Component):
         return {
             "input_value": {
                 "display_name": "Value",
-                "input_types": ["Text", "Record"],
-                "info": "Text or Record to be passed.",
+                "input_types": ["Text", "Data"],
+                "info": "Text or Data to be passed.",
             },
             "record_template": {
-                "display_name": "Record Template",
+                "display_name": "Data Template",
                 "multiline": True,
-                "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
+                "info": "Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.",
                 "advanced": True,
             },
         }
@@ -30,12 +30,12 @@ class TextComponent(Component):
         input_value: Optional[Text] = "",
         record_template: Optional[str] = "{text}",
     ) -> Text:
-        if isinstance(input_value, Record):
+        if isinstance(input_value, Data):
             if record_template == "":
-                # it should be dynamically set to the Record's .text_key value
+                # it should be dynamically set to the Data's .text_key value
                 # meaning, if text_key = "bacon", then record_template = "{bacon}"
                 record_template = "{" + input_value.text_key + "}"
-            input_value = records_to_text(template=record_template, records=input_value)
+            input_value = data_to_text(template=record_template, data=input_value)
         self.status = input_value
         if not input_value:
             input_value = ""
diff --git a/src/backend/base/langflow/base/memory/memory.py b/src/backend/base/langflow/base/memory/memory.py
index fe372a96b..003b5f06a 100644
--- a/src/backend/base/langflow/base/memory/memory.py
+++ b/src/backend/base/langflow/base/memory/memory.py
@@ -1,7 +1,7 @@
 from typing import Optional

 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data


 class BaseMemoryComponent(CustomComponent):
@@ -33,14 +33,14 @@ class BaseMemoryComponent(CustomComponent):
                 "advanced": True,
             },
             "record_template": {
-                "display_name": "Record Template",
+                "display_name": "Data Template",
                 "multiline": True,
-                "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
+                "info": "Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.",
                 "advanced": True,
             },
         }

-    def get_messages(self, **kwargs) -> list[Record]:
+    def get_messages(self, **kwargs) -> list[Data]:
         raise NotImplementedError

     def add_message(
diff --git a/src/backend/base/langflow/base/prompts/utils.py b/src/backend/base/langflow/base/prompts/utils.py
index 0fa62ea3b..70c8b1dfe 100644
--- a/src/backend/base/langflow/base/prompts/utils.py
+++ b/src/backend/base/langflow/base/prompts/utils.py
@@ -2,16 +2,16 @@ from copy import deepcopy

 from langchain_core.documents import Document

-from langflow.schema import Record
+from langflow.schema import Data
 from langflow.schema.message import Message


-def record_to_string(record: Record) -> str:
+def record_to_string(record: Data) -> str:
     """
     Convert a record to a string.

     Args:
-        record (Record): The record to convert.
+        record (Data): The record to convert.

     Returns:
         str: The record as a string.
@@ -32,18 +32,18 @@ def dict_values_to_string(d: dict) -> dict:
     # Do something similar to the above
     d_copy = deepcopy(d)
     for key, value in d_copy.items():
-        # it could be a list of records or documents or strings
+        # it could be a list of data or documents or strings
         if isinstance(value, list):
             for i, item in enumerate(value):
                 if isinstance(item, Message):
                     d_copy[key][i] = item.text
-                elif isinstance(item, Record):
+                elif isinstance(item, Data):
                     d_copy[key][i] = record_to_string(item)
                 elif isinstance(item, Document):
                     d_copy[key][i] = document_to_string(item)
         elif isinstance(value, Message):
             d_copy[key] = value.text
-        elif isinstance(value, Record):
+        elif isinstance(value, Data):
             d_copy[key] = record_to_string(value)
         elif isinstance(value, Document):
             d_copy[key] = document_to_string(value)
diff --git a/src/backend/base/langflow/base/tools/flow_tool.py b/src/backend/base/langflow/base/tools/flow_tool.py
index d0993bd99..4f767e4da 100644
--- a/src/backend/base/langflow/base/tools/flow_tool.py
+++ b/src/backend/base/langflow/base/tools/flow_tool.py
@@ -6,7 +6,7 @@ from langchain_core.runnables import RunnableConfig
 from langchain_core.tools import ToolException
 from pydantic.v1 import BaseModel

-from langflow.base.flow_processing.utils import build_records_from_result_data, format_flow_output_records
+from langflow.base.flow_processing.utils import build_data_from_result_data, format_flow_output_data
 from langflow.graph.graph.base import Graph
 from langflow.graph.vertex.base import Vertex
 from langflow.helpers.flow import build_schema_from_inputs, get_arg_names, get_flow_inputs, run_flow
@@ -59,14 +59,12 @@ class FlowTool(BaseTool):
             return "No output"
         run_output = run_outputs[0]

-        records = []
+        data = []
         if run_output is not None:
             for output in run_output.outputs:
                 if output:
-                    records.extend(
-                        build_records_from_result_data(output, get_final_results_only=self.get_final_results_only)
-                    )
-        return format_flow_output_records(records)
+                    data.extend(build_data_from_result_data(output, get_final_results_only=self.get_final_results_only))
+        return format_flow_output_data(data)

     def validate_inputs(self, args_names: List[dict[str, str]], args: Any, kwargs: Any):
         """Validate the inputs."""
@@ -107,11 +105,9 @@ class FlowTool(BaseTool):
             return "No output"
         run_output = run_outputs[0]

-        records = []
+        data = []
         if run_output is not None:
             for output in run_output.outputs:
                 if output:
-                    records.extend(
-                        build_records_from_result_data(output, get_final_results_only=self.get_final_results_only)
-                    )
-        return format_flow_output_records(records)
+                    data.extend(build_data_from_result_data(output, get_final_results_only=self.get_final_results_only))
+        return format_flow_output_data(data)
diff --git a/src/backend/base/langflow/base/vectorstores/utils.py b/src/backend/base/langflow/base/vectorstores/utils.py
index 739181600..42cd1a1ec 100644
--- a/src/backend/base/langflow/base/vectorstores/utils.py
+++ b/src/backend/base/langflow/base/vectorstores/utils.py
@@ -1,17 +1,17 @@
-from langflow.schema import Record
+from langflow.schema import Data


-def chroma_collection_to_records(collection_dict: dict):
+def chroma_collection_to_data(collection_dict: dict):
     """
-    Converts a collection of chroma vectors into a list of records.
+    Converts a collection of chroma vectors into a list of data.

     Args:
         collection_dict (dict): A dictionary containing the collection of chroma vectors.

     Returns:
-        list: A list of records, where each record represents a document in the collection.
+        list: A list of data, where each record represents a document in the collection.
""" - records = [] + data = [] for i, doc in enumerate(collection_dict["documents"]): record_dict = { "id": collection_dict["ids"][i], @@ -20,5 +20,5 @@ def chroma_collection_to_records(collection_dict: dict): if "metadatas" in collection_dict: for key, value in collection_dict["metadatas"][i].items(): record_dict[key] = value - records.append(Record(**record_dict)) - return records + data.append(Data(**record_dict)) + return data diff --git a/src/backend/base/langflow/components/agents/ToolCallingAgent.py b/src/backend/base/langflow/components/agents/ToolCallingAgent.py index 91fcb1132..eda017cc8 100644 --- a/src/backend/base/langflow/components/agents/ToolCallingAgent.py +++ b/src/backend/base/langflow/components/agents/ToolCallingAgent.py @@ -5,7 +5,7 @@ from langchain_core.prompts import ChatPromptTemplate from langflow.base.agents.agent import LCAgentComponent from langflow.field_typing import BaseLanguageModel, Text, Tool -from langflow.schema import Record +from langflow.schema import Data class ToolCallingAgentComponent(LCAgentComponent): @@ -42,7 +42,7 @@ class ToolCallingAgentComponent(LCAgentComponent): llm: BaseLanguageModel, tools: List[Tool], user_prompt: str = "{input}", - message_history: Optional[List[Record]] = None, + message_history: Optional[List[Data]] = None, system_message: str = "You are a helpful assistant", handle_parsing_errors: bool = True, ) -> Text: diff --git a/src/backend/base/langflow/components/agents/XMLAgent.py b/src/backend/base/langflow/components/agents/XMLAgent.py index 47f823ba4..1b49520f1 100644 --- a/src/backend/base/langflow/components/agents/XMLAgent.py +++ b/src/backend/base/langflow/components/agents/XMLAgent.py @@ -5,7 +5,7 @@ from langchain_core.prompts import ChatPromptTemplate from langflow.base.agents.agent import LCAgentComponent from langflow.field_typing import BaseLanguageModel, Text, Tool -from langflow.schema import Record +from langflow.schema import Data class XMLAgentComponent(LCAgentComponent): @@ -76,7 
+76,7 @@ class XMLAgentComponent(LCAgentComponent): tools: List[Tool], user_prompt: str = "{input}", system_message: str = "You are a helpful assistant", - message_history: Optional[List[Record]] = None, + message_history: Optional[List[Data]] = None, tool_template: str = "{name}: {description}", handle_parsing_errors: bool = True, ) -> Text: diff --git a/src/backend/base/langflow/components/chains/RetrievalQA.py b/src/backend/base/langflow/components/chains/RetrievalQA.py index ca9910279..074800868 100644 --- a/src/backend/base/langflow/components/chains/RetrievalQA.py +++ b/src/backend/base/langflow/components/chains/RetrievalQA.py @@ -5,7 +5,7 @@ from langchain_core.documents import Document from langflow.custom import CustomComponent from langflow.field_typing import BaseLanguageModel, BaseMemory, BaseRetriever, Text -from langflow.schema import Record +from langflow.schema import Data class RetrievalQAComponent(CustomComponent): @@ -23,7 +23,7 @@ class RetrievalQAComponent(CustomComponent): "return_source_documents": {"display_name": "Return Source Documents"}, "input_value": { "display_name": "Input", - "input_types": ["Record", "Document"], + "input_types": ["Data", "Document"], }, } @@ -50,17 +50,17 @@ class RetrievalQAComponent(CustomComponent): ) if isinstance(input_value, Document): input_value = input_value.page_content - if isinstance(input_value, Record): + if isinstance(input_value, Data): input_value = input_value.get_text() self.status = runnable result = runnable.invoke({input_key: input_value}) result = result.content if hasattr(result, "content") else result # Result is a dict with keys "query", "result" and "source_documents" # for now we just return the result - records = self.to_records(result.get("source_documents")) + data = self.to_data(result.get("source_documents")) references_str = "" if return_source_documents: - references_str = self.create_references_from_records(records) + references_str = self.create_references_from_data(data) 
result_str = result.get("result", "") final_result = "\n".join([Text(result_str), references_str]) diff --git a/src/backend/base/langflow/components/chains/RetrievalQAWithSourcesChain.py b/src/backend/base/langflow/components/chains/RetrievalQAWithSourcesChain.py index 2e0fa4ced..ea2d950a9 100644 --- a/src/backend/base/langflow/components/chains/RetrievalQAWithSourcesChain.py +++ b/src/backend/base/langflow/components/chains/RetrievalQAWithSourcesChain.py @@ -53,10 +53,10 @@ class RetrievalQAWithSourcesChainComponent(CustomComponent): result = result.content if hasattr(result, "content") else result # Result is a dict with keys "query", "result" and "source_documents" # for now we just return the result - records = self.to_records(result.get("source_documents")) + data = self.to_data(result.get("source_documents")) references_str = "" if return_source_documents: - references_str = self.create_references_from_records(records) + references_str = self.create_references_from_data(data) result_str = Text(result.get("answer", "")) final_result = "\n".join([result_str, references_str]) self.status = final_result diff --git a/src/backend/base/langflow/components/data/APIRequest.py b/src/backend/base/langflow/components/data/APIRequest.py index 2065f90c7..934844ff5 100644 --- a/src/backend/base/langflow/components/data/APIRequest.py +++ b/src/backend/base/langflow/components/data/APIRequest.py @@ -8,14 +8,14 @@ from loguru import logger from langflow.base.curl.parse import parse_context from langflow.custom import CustomComponent from langflow.field_typing import NestedDict -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.dotdict import dotdict class APIRequest(CustomComponent): display_name: str = "API Request" description: str = "Make HTTP requests given one or more URLs." 
- output_types: list[str] = ["Record"] + output_types: list[str] = ["Data"] documentation: str = "https://docs.langflow.org/components/utilities#api-request" icon = "Globe" @@ -36,12 +36,12 @@ class APIRequest(CustomComponent): "headers": { "display_name": "Headers", "info": "The headers to send with the request.", - "input_types": ["Record"], + "input_types": ["Data"], }, "body": { "display_name": "Body", "info": "The body to send with the request (for POST, PATCH, PUT).", - "input_types": ["Record"], + "input_types": ["Data"], }, "timeout": { "display_name": "Timeout", @@ -80,7 +80,7 @@ class APIRequest(CustomComponent): headers: Optional[dict] = None, body: Optional[dict] = None, timeout: int = 5, - ) -> Record: + ) -> Data: method = method.upper() if method not in ["GET", "POST", "PATCH", "PUT", "DELETE"]: raise ValueError(f"Unsupported method: {method}") @@ -93,7 +93,7 @@ class APIRequest(CustomComponent): result = response.json() except Exception: result = response.text - return Record( + return Data( data={ "source": url, "headers": headers, @@ -102,7 +102,7 @@ class APIRequest(CustomComponent): }, ) except httpx.TimeoutException: - return Record( + return Data( data={ "source": url, "headers": headers, @@ -111,7 +111,7 @@ class APIRequest(CustomComponent): }, ) except Exception as exc: - return Record( + return Data( data={ "source": url, "headers": headers, @@ -128,10 +128,10 @@ class APIRequest(CustomComponent): headers: Optional[NestedDict] = {}, body: Optional[NestedDict] = {}, timeout: int = 5, - ) -> List[Record]: + ) -> List[Data]: if headers is None: headers_dict = {} - elif isinstance(headers, Record): + elif isinstance(headers, Data): headers_dict = headers.data else: headers_dict = headers @@ -142,7 +142,7 @@ class APIRequest(CustomComponent): bodies = [body] else: bodies = body - bodies = [b.data if isinstance(b, Record) else b for b in bodies] # type: ignore + bodies = [b.data if isinstance(b, Data) else b for b in bodies] # type: ignore if 
len(urls) != len(bodies): # add bodies with None diff --git a/src/backend/base/langflow/components/data/Directory.py b/src/backend/base/langflow/components/data/Directory.py index 4dfa51de3..5e5265ba6 100644 --- a/src/backend/base/langflow/components/data/Directory.py +++ b/src/backend/base/langflow/components/data/Directory.py @@ -1,8 +1,8 @@ from typing import Any, Dict, List, Optional -from langflow.base.data.utils import parallel_load_records, parse_text_file_to_record, retrieve_file_paths +from langflow.base.data.utils import parallel_load_data, parse_text_file_to_record, retrieve_file_paths from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class DirectoryComponent(CustomComponent): @@ -49,15 +49,15 @@ class DirectoryComponent(CustomComponent): recursive: bool = True, silent_errors: bool = False, use_multithreading: bool = True, - ) -> List[Optional[Record]]: + ) -> List[Optional[Data]]: resolved_path = self.resolve_path(path) file_paths = retrieve_file_paths(resolved_path, load_hidden, recursive, depth) - loaded_records = [] + loaded_data = [] if use_multithreading: - loaded_records = parallel_load_records(file_paths, silent_errors, max_concurrency) + loaded_data = parallel_load_data(file_paths, silent_errors, max_concurrency) else: - loaded_records = [parse_text_file_to_record(file_path, silent_errors) for file_path in file_paths] - loaded_records = list(filter(None, loaded_records)) - self.status = loaded_records - return loaded_records + loaded_data = [parse_text_file_to_record(file_path, silent_errors) for file_path in file_paths] + loaded_data = list(filter(None, loaded_data)) + self.status = loaded_data + return loaded_data diff --git a/src/backend/base/langflow/components/data/File.py b/src/backend/base/langflow/components/data/File.py index 5ebb94cff..5b23fd759 100644 --- a/src/backend/base/langflow/components/data/File.py +++ b/src/backend/base/langflow/components/data/File.py @@ -3,7 
+3,7 @@ from typing import Any, Dict from langflow.base.data.utils import TEXT_FILE_TYPES, parse_text_file_to_record from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class FileComponent(CustomComponent): @@ -26,7 +26,7 @@ class FileComponent(CustomComponent): }, } - def load_file(self, path: str, silent_errors: bool = False) -> Record: + def load_file(self, path: str, silent_errors: bool = False) -> Data: resolved_path = self.resolve_path(path) path_obj = Path(resolved_path) extension = path_obj.suffix[1:].lower() @@ -36,13 +36,13 @@ class FileComponent(CustomComponent): raise ValueError(f"Unsupported file type: {extension}") record = parse_text_file_to_record(resolved_path, silent_errors) self.status = record if record else "No data" - return record or Record() + return record or Data() def build( self, path: str, silent_errors: bool = False, - ) -> Record: + ) -> Data: record = self.load_file(path, silent_errors) self.status = record return record diff --git a/src/backend/base/langflow/components/data/URL.py b/src/backend/base/langflow/components/data/URL.py index 32ebc91ee..2ca20e23e 100644 --- a/src/backend/base/langflow/components/data/URL.py +++ b/src/backend/base/langflow/components/data/URL.py @@ -3,7 +3,7 @@ from typing import Any, Dict from langchain_community.document_loaders.web_base import WebBaseLoader from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class URLComponent(CustomComponent): @@ -19,9 +19,9 @@ class URLComponent(CustomComponent): def build( self, urls: list[str], - ) -> list[Record]: + ) -> list[Data]: loader = WebBaseLoader(web_paths=[url for url in urls if url]) docs = loader.load() - records = self.to_records(docs) - self.status = records - return records + data = self.to_data(docs) + self.status = data + return data diff --git a/src/backend/base/langflow/components/data/Webhook.py 
b/src/backend/base/langflow/components/data/Webhook.py index a1989cd49..a1e672883 100644 --- a/src/backend/base/langflow/components/data/Webhook.py +++ b/src/backend/base/langflow/components/data/Webhook.py @@ -3,7 +3,7 @@ import uuid from typing import Any, Optional from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.dotdict import dotdict @@ -25,14 +25,14 @@ class WebhookComponent(CustomComponent): } } - def build(self, data: Optional[str] = "") -> Record: + def build(self, data: Optional[str] = "") -> Data: message = "" try: body = json.loads(data or "{}") except json.JSONDecodeError: body = {"payload": data} message = f"Invalid JSON payload. Please check the format.\n\n{data}" - record = Record(data=body) + record = Data(data=body) if not message: message = json.dumps(body, indent=2) self.status = message diff --git a/src/backend/base/langflow/components/experimental/AgentComponent.py b/src/backend/base/langflow/components/experimental/AgentComponent.py index abd8826d4..c40e416f1 100644 --- a/src/backend/base/langflow/components/experimental/AgentComponent.py +++ b/src/backend/base/langflow/components/experimental/AgentComponent.py @@ -6,7 +6,7 @@ from langchain_core.prompts.chat import HumanMessagePromptTemplate, SystemMessag from langflow.base.agents.agent import LCAgentComponent from langflow.base.agents.utils import AGENTS, AgentSpec, get_agents_list from langflow.field_typing import BaseLanguageModel, Text, Tool -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.dotdict import dotdict @@ -149,7 +149,7 @@ class AgentComponent(LCAgentComponent): tools: List[Tool], system_message: str = "You are a helpful assistant. 
Help the user answer any questions.", user_prompt: str = "{input}", - message_history: Optional[List[Record]] = None, + message_history: Optional[List[Data]] = None, tool_template: str = "{name}: {description}", handle_parsing_errors: bool = True, ) -> Text: diff --git a/src/backend/base/langflow/components/experimental/ClearMessageHistory.py b/src/backend/base/langflow/components/experimental/ClearMessageHistory.py index dacfaccb4..4cdcf3212 100644 --- a/src/backend/base/langflow/components/experimental/ClearMessageHistory.py +++ b/src/backend/base/langflow/components/experimental/ClearMessageHistory.py @@ -21,6 +21,6 @@ class ClearMessageHistoryComponent(CustomComponent): session_id: str, ) -> None: delete_messages(session_id=session_id) - records = get_messages(session_id=session_id) - self.records = records - return records + data = get_messages(session_id=session_id) + self.data = data + return data diff --git a/src/backend/base/langflow/components/experimental/Embed.py b/src/backend/base/langflow/components/experimental/Embed.py index 88de23486..e99ab0d03 100644 --- a/src/backend/base/langflow/components/experimental/Embed.py +++ b/src/backend/base/langflow/components/experimental/Embed.py @@ -1,6 +1,6 @@ from langflow.custom import CustomComponent -from langflow.schema import Record from langflow.field_typing import Embeddings +from langflow.schema import Data class EmbedComponent(CustomComponent): @@ -10,6 +10,6 @@ class EmbedComponent(CustomComponent): return {"texts": {"display_name": "Texts"}, "embbedings": {"display_name": "Embeddings"}} def build(self, texts: list[str], embbedings: Embeddings) -> Embeddings: - vectors = Record(vector=embbedings.embed_documents(texts)) + vectors = Data(vector=embbedings.embed_documents(texts)) self.status = vectors return vectors diff --git a/src/backend/base/langflow/components/experimental/ExtractDataFromRecord.py b/src/backend/base/langflow/components/experimental/ExtractDataFromRecord.py index b1d6ecd40..263a4158d 
100644 --- a/src/backend/base/langflow/components/experimental/ExtractDataFromRecord.py +++ b/src/backend/base/langflow/components/experimental/ExtractDataFromRecord.py @@ -1,14 +1,14 @@ from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class ExtractKeyFromRecordComponent(CustomComponent): - display_name = "Extract Key From Record" + display_name = "Extract Key From Data" description = "Extracts a key from a record." beta: bool = True field_config = { - "record": {"display_name": "Record"}, + "record": {"display_name": "Data"}, "keys": { "display_name": "Keys", "info": "The keys to extract from the record.", @@ -21,12 +21,12 @@ class ExtractKeyFromRecordComponent(CustomComponent): }, } - def build(self, record: Record, keys: list[str], silent_error: bool = True) -> Record: + def build(self, record: Data, keys: list[str], silent_error: bool = True) -> Data: """ Extracts the keys from a record. Args: - record (Record): The record from which to extract the keys. + record (Data): The record from which to extract the keys. keys (list[str]): The keys to extract from the record. silent_error (bool): If True, errors will not be raised. 
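The key-extraction behavior shown in the `ExtractKeyFromRecordComponent` hunk above can be sketched with a plain dict standing in for langflow's `Data` schema. The function name and signature here are illustrative only; the real component reads attributes off a `Data` object and catches `AttributeError`, which this sketch simplifies to dict membership.

```python
def extract_keys(record: dict, keys: list[str], silent_error: bool = True) -> dict:
    # Collect the requested keys from the record; missing keys are skipped
    # when silent_error is True, and raise KeyError otherwise, mirroring
    # the component's behavior.
    extracted = {}
    for key in keys:
        if key in record:
            extracted[key] = record[key]
        elif not silent_error:
            raise KeyError(f"The key '{key}' does not exist in the record.")
    return extracted


print(extract_keys({"name": "my_flow", "id": 1}, ["name", "missing"]))  # {'name': 'my_flow'}
```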
@@ -40,6 +40,6 @@ class ExtractKeyFromRecordComponent(CustomComponent): except AttributeError: if not silent_error: raise KeyError(f"The key '{key}' does not exist in the record.") - return_record = Record(data=extracted_keys) + return_record = Data(data=extracted_keys) self.status = return_record return return_record diff --git a/src/backend/base/langflow/components/experimental/FlowTool.py b/src/backend/base/langflow/components/experimental/FlowTool.py index eaebb0c6e..24ab6f5c2 100644 --- a/src/backend/base/langflow/components/experimental/FlowTool.py +++ b/src/backend/base/langflow/components/experimental/FlowTool.py @@ -7,7 +7,7 @@ from langflow.custom import CustomComponent from langflow.field_typing import Tool from langflow.graph.graph.base import Graph from langflow.helpers.flow import get_flow_inputs -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.dotdict import dotdict @@ -17,10 +17,10 @@ class FlowToolComponent(CustomComponent): field_order = ["flow_name", "name", "description", "return_direct"] def get_flow_names(self) -> List[str]: - flow_records = self.list_flows() - return [flow_record.data["name"] for flow_record in flow_records] + flow_data = self.list_flows() + return [flow_record.data["name"] for flow_record in flow_data] - def get_flow(self, flow_name: str) -> Optional[Record]: + def get_flow(self, flow_name: str) -> Optional[Data]: """ Retrieves a flow by its name. @@ -30,8 +30,8 @@ class FlowToolComponent(CustomComponent): Returns: Optional[Text]: The flow record if found, None otherwise. 
""" - flow_records = self.list_flows() - for flow_record in flow_records: + flow_data = self.list_flows() + for flow_record in flow_data: if flow_record.data["name"] == flow_name: return flow_record return None diff --git a/src/backend/base/langflow/components/experimental/ListFlows.py b/src/backend/base/langflow/components/experimental/ListFlows.py index 07b4a4bbc..38fb2b967 100644 --- a/src/backend/base/langflow/components/experimental/ListFlows.py +++ b/src/backend/base/langflow/components/experimental/ListFlows.py @@ -1,7 +1,7 @@ from typing import List from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class ListFlowsComponent(CustomComponent): @@ -15,7 +15,7 @@ class ListFlowsComponent(CustomComponent): def build( self, - ) -> List[Record]: + ) -> List[Data]: flows = self.list_flows() self.status = flows return flows diff --git a/src/backend/base/langflow/components/experimental/Listen.py b/src/backend/base/langflow/components/experimental/Listen.py index be7ddb8e3..03a81d130 100644 --- a/src/backend/base/langflow/components/experimental/Listen.py +++ b/src/backend/base/langflow/components/experimental/Listen.py @@ -1,5 +1,5 @@ from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class ListenComponent(CustomComponent): @@ -15,7 +15,7 @@ class ListenComponent(CustomComponent): }, } - def build(self, name: str) -> Record: + def build(self, name: str) -> Data: state = self.get_state(name) self.status = state return state diff --git a/src/backend/base/langflow/components/experimental/MergeRecords.py b/src/backend/base/langflow/components/experimental/MergeRecords.py index c938b4473..49c5ebcdc 100644 --- a/src/backend/base/langflow/components/experimental/MergeRecords.py +++ b/src/backend/base/langflow/components/experimental/MergeRecords.py @@ -1,36 +1,36 @@ from langflow.custom import CustomComponent -from langflow.schema import Record 
+from langflow.schema import Data class MergeRecordsComponent(CustomComponent): display_name = "Merge Records" - description = "Merges records." + description = "Merges data." beta: bool = True field_config = { - "records": {"display_name": "Records"}, + "data": {"display_name": "Records"}, } - def build(self, records: list[Record]) -> Record: - if not records: - return Record() - if len(records) == 1: - return records[0] - merged_record = Record() - for record in records: + def build(self, data: list[Data]) -> Data: + if not data: + return Data() + if len(data) == 1: + return data[0] + merged_record = Data() + for value in data: if merged_record is None: - merged_record = record + merged_record = value else: - merged_record += record + merged_record += value self.status = merged_record return merged_record if __name__ == "__main__": - records = [ - Record(data={"key1": "value1"}), - Record(data={"key2": "value2"}), + data = [ + Data(data={"key1": "value1"}), + Data(data={"key2": "value2"}), ] component = MergeRecordsComponent() - result = component.build(records) + result = component.build(data) print(result) diff --git a/src/backend/base/langflow/components/experimental/Notify.py b/src/backend/base/langflow/components/experimental/Notify.py index bf4391682..e29213592 100644 --- a/src/backend/base/langflow/components/experimental/Notify.py +++ b/src/backend/base/langflow/components/experimental/Notify.py @@ -1,7 +1,7 @@ from typing import Optional from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class NotifyComponent(CustomComponent): @@ -13,23 +13,23 @@ class NotifyComponent(CustomComponent): def build_config(self): return { "name": {"display_name": "Name", "info": "The name of the notification."}, - "record": {"display_name": "Record", "info": "The record to store."}, + "record": {"display_name": "Data", "info": "The record to store."}, "append": { "display_name": "Append", "info": "If True, the 
record will be appended to the notification.", }, } - def build(self, name: str, record: Optional[Record] = None, append: bool = False) -> Record: - if record and not isinstance(record, Record): + def build(self, name: str, record: Optional[Data] = None, append: bool = False) -> Data: + if record and not isinstance(record, Data): if isinstance(record, str): - record = Record(text=record) + record = Data(text=record) elif isinstance(record, dict): - record = Record(data=record) + record = Data(data=record) else: - record = Record(text=str(record)) + record = Data(text=str(record)) elif not record: - record = Record(text="") + record = Data(text="") if record: if append: self.append_state(name, record) diff --git a/src/backend/base/langflow/components/experimental/Pass.py b/src/backend/base/langflow/components/experimental/Pass.py index 3fdb438a0..d21fe0887 100644 --- a/src/backend/base/langflow/components/experimental/Pass.py +++ b/src/backend/base/langflow/components/experimental/Pass.py @@ -2,7 +2,7 @@ from typing import Union from langflow.custom import CustomComponent from langflow.field_typing import Text -from langflow.schema import Record +from langflow.schema import Data class PassComponent(CustomComponent): @@ -15,16 +15,16 @@ class PassComponent(CustomComponent): "ignored_input": { "display_name": "Ignored Input", "info": "This input is ignored. 
It's used to control the flow in the graph.", - "input_types": ["Text", "Record"], + "input_types": ["Text", "Data"], }, "forwarded_input": { "display_name": "Input", "info": "This input is forwarded by the component.", - "input_types": ["Text", "Record"], + "input_types": ["Text", "Data"], }, } - def build(self, ignored_input: Text, forwarded_input: Text) -> Union[Text, Record]: + def build(self, ignored_input: Text, forwarded_input: Text) -> Union[Text, Data]: # The ignored_input is not used in the logic, it's just there for graph flow control self.status = forwarded_input return forwarded_input diff --git a/src/backend/base/langflow/components/experimental/RunFlow.py b/src/backend/base/langflow/components/experimental/RunFlow.py index d2e7dd285..141cae1eb 100644 --- a/src/backend/base/langflow/components/experimental/RunFlow.py +++ b/src/backend/base/langflow/components/experimental/RunFlow.py @@ -1,10 +1,10 @@ from typing import Any, List, Optional -from langflow.base.flow_processing.utils import build_records_from_run_outputs +from langflow.base.flow_processing.utils import build_data_from_run_outputs from langflow.custom import CustomComponent from langflow.field_typing import NestedDict, Text from langflow.graph.schema import RunOutputs -from langflow.schema import Record, dotdict +from langflow.schema import Data, dotdict class RunFlowComponent(CustomComponent): @@ -13,8 +13,8 @@ class RunFlowComponent(CustomComponent): beta: bool = True def get_flow_names(self) -> List[str]: - flow_records = self.list_flows() - return [flow_record.data["name"] for flow_record in flow_records] + flow_data = self.list_flows() + return [flow_record.data["name"] for flow_record in flow_data] def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None): if field_name == "flow_name": @@ -40,17 +40,17 @@ class RunFlowComponent(CustomComponent): }, } - async def build(self, input_value: Text, flow_name: str, tweaks: NestedDict) -> 
List[Record]: + async def build(self, input_value: Text, flow_name: str, tweaks: NestedDict) -> List[Data]: results: List[Optional[RunOutputs]] = await self.run_flow( inputs={"input_value": input_value}, flow_name=flow_name, tweaks=tweaks ) if isinstance(results, list): - records = [] + data = [] for result in results: if result: - records.extend(build_records_from_run_outputs(result)) + data.extend(build_data_from_run_outputs(result)) else: - records = build_records_from_run_outputs()(results) + data = build_data_from_run_outputs(results) - self.status = records - return records + self.status = data + return data diff --git a/src/backend/base/langflow/components/experimental/SplitText.py b/src/backend/base/langflow/components/experimental/SplitText.py index 7156371c3..3d0b18e5a 100644 --- a/src/backend/base/langflow/components/experimental/SplitText.py +++ b/src/backend/base/langflow/components/experimental/SplitText.py @@ -2,7 +2,7 @@ from typing import Optional from langflow.custom import CustomComponent from langflow.field_typing import Text -from langflow.schema import Record +from langflow.schema import Data from langflow.utils.util import unescape_string @@ -15,7 +15,7 @@ class SplitTextComponent(CustomComponent): "inputs": { "display_name": "Inputs", "info": "Texts to split.", - "input_types": ["Record", "Text"], + "input_types": ["Data", "Text"], }, "separator": { "display_name": "Separator", @@ -32,7 +32,7 @@ class SplitTextComponent(CustomComponent): inputs: list[Text], separator: str = " ", truncate_size: Optional[int] = 0, - ) -> list[Record]: + ) -> list[Data]: separator = unescape_string(separator) outputs = [] @@ -43,7 +43,7 @@ class SplitTextComponent(CustomComponent): chunks = [chunk[:truncate_size] for chunk in chunks] for chunk in chunks: - outputs.append(Record(data={"parent": text, "text": chunk})) + outputs.append(Data(data={"parent": text, "text": chunk})) self.status = outputs return outputs diff --git
a/src/backend/base/langflow/components/experimental/SubFlow.py b/src/backend/base/langflow/components/experimental/SubFlow.py index 86dd336bf..825e09183 100644 --- a/src/backend/base/langflow/components/experimental/SubFlow.py +++ b/src/backend/base/langflow/components/experimental/SubFlow.py @@ -2,30 +2,32 @@ from typing import Any, List, Optional from loguru import logger -from langflow.base.flow_processing.utils import build_records_from_result_data +from langflow.base.flow_processing.utils import build_data_from_result_data from langflow.custom import CustomComponent from langflow.graph.graph.base import Graph from langflow.graph.schema import RunOutputs from langflow.graph.vertex.base import Vertex from langflow.helpers.flow import get_flow_inputs -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.dotdict import dotdict from langflow.template.field.base import Input class SubFlowComponent(CustomComponent): display_name = "Sub Flow" - description = "Dynamically Generates a Component from a Flow. The output is a list of records with keys 'result' and 'message'." + description = ( + "Dynamically Generates a Component from a Flow. The output is a list of data with keys 'result' and 'message'." 
+ ) beta: bool = True field_order = ["flow_name"] def get_flow_names(self) -> List[str]: - flow_records = self.list_flows() - return [flow_record.data["name"] for flow_record in flow_records] + flow_data = self.list_flows() + return [flow_record.data["name"] for flow_record in flow_data] - def get_flow(self, flow_name: str) -> Optional[Record]: - flow_records = self.list_flows() - for flow_record in flow_records: + def get_flow(self, flow_name: str) -> Optional[Data]: + flow_data = self.list_flows() + for flow_record in flow_data: if flow_record.data["name"] == flow_name: return flow_record return None @@ -93,7 +95,7 @@ class SubFlowComponent(CustomComponent): }, } - async def build(self, flow_name: str, get_final_results_only: bool = True, **kwargs) -> List[Record]: + async def build(self, flow_name: str, get_final_results_only: bool = True, **kwargs) -> List[Data]: tweaks = {key: {"input_value": value} for key, value in kwargs.items()} run_outputs: List[Optional[RunOutputs]] = await self.run_flow( tweaks=tweaks, @@ -103,12 +105,12 @@ class SubFlowComponent(CustomComponent): return [] run_output = run_outputs[0] - records = [] + data = [] if run_output is not None: for output in run_output.outputs: if output: - records.extend(build_records_from_result_data(output, get_final_results_only)) + data.extend(build_data_from_result_data(output, get_final_results_only)) - self.status = records - logger.debug(records) - return records + self.status = data + logger.debug(data) + return data diff --git a/src/backend/base/langflow/components/experimental/TextOperator.py b/src/backend/base/langflow/components/experimental/TextOperator.py index 0b9821240..73b472acf 100644 --- a/src/backend/base/langflow/components/experimental/TextOperator.py +++ b/src/backend/base/langflow/components/experimental/TextOperator.py @@ -2,7 +2,7 @@ from typing import Union from langflow.custom import Component from langflow.field_typing import Text -from langflow.schema import Record +from 
langflow.schema import Data from langflow.template import Input, Output @@ -29,17 +29,17 @@ class TextOperatorComponent(Component): ), Input( name="true_output", - type=Union[str, Record], + type=Union[str, Data], display_name="True Output", info="The output to return or display when the comparison is true.", - input_types=["Text", "Record"], + input_types=["Text", "Data"], ), Input( name="false_output", - type=Union[str, Record], + type=Union[str, Data], display_name="False Output", info="The output to return or display when the comparison is false.", - input_types=["Text", "Record"], + input_types=["Text", "Data"], ), ] outputs = [ @@ -47,15 +47,15 @@ class TextOperatorComponent(Component): Output(display_name="False Result", name="false_result", method="result_response"), ] - def true_response(self) -> Union[Text, Record]: + def true_response(self) -> Union[Text, Data]: self.stop("False Result") return self.true_output if self.true_output else self.input_text - def false_response(self) -> Union[Text, Record]: + def false_response(self) -> Union[Text, Data]: self.stop("True Result") return self.false_output if self.false_output else self.input_text - def result_response(self) -> Union[Text, Record]: + def result_response(self) -> Union[Text, Data]: input_text = self.input_text match_text = self.match_text operator = self.operator diff --git a/src/backend/base/langflow/components/helpers/CreateRecord.py b/src/backend/base/langflow/components/helpers/CreateRecord.py index d466f5c26..5226317bf 100644 --- a/src/backend/base/langflow/components/helpers/CreateRecord.py +++ b/src/backend/base/langflow/components/helpers/CreateRecord.py @@ -2,14 +2,14 @@ from typing import Any from langflow.custom import CustomComponent from langflow.field_typing.range_spec import RangeSpec -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.dotdict import dotdict from langflow.template.field.base import Input class 
CreateRecordComponent(CustomComponent): - display_name = "Create Record" - description = "Dynamically create a Record with a specified number of fields." + display_name = "Create Data" + description = "Dynamically create a Data with a specified number of fields." field_order = ["number_of_fields", "text_key"] def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None): @@ -40,7 +40,7 @@ class CreateRecordComponent(CustomComponent): name=key, info=f"Key for field {i}.", field_type="dict", - input_types=["Text", "Record"], + input_types=["Text", "Data"], ) build_config[field.name] = field.to_dict() @@ -67,15 +67,15 @@ class CreateRecordComponent(CustomComponent): number_of_fields: int = 0, text_key: str = "text", **kwargs, - ) -> Record: + ) -> Data: data = {} for value_dict in kwargs.values(): if isinstance(value_dict, dict): - # Check if the value of the value_dict is a Record + # Check if the value of the value_dict is a Data value_dict = { - key: value.get_text() if isinstance(value, Record) else value for key, value in value_dict.items() + key: value.get_text() if isinstance(value, Data) else value for key, value in value_dict.items() } data.update(value_dict) - return_record = Record(data=data, text_key=text_key) + return_record = Data(data=data, text_key=text_key) self.status = return_record return return_record diff --git a/src/backend/base/langflow/components/helpers/CustomComponent.py b/src/backend/base/langflow/components/helpers/CustomComponent.py index 7313323a9..98dcf3934 100644 --- a/src/backend/base/langflow/components/helpers/CustomComponent.py +++ b/src/backend/base/langflow/components/helpers/CustomComponent.py @@ -1,6 +1,6 @@ # from langflow.field_typing import Data from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class Component(CustomComponent): @@ -12,5 +12,5 @@ class Component(CustomComponent): def build_config(self): return {"param": 
{"display_name": "Parameter"}} - def build(self, param: str) -> Record: - return Record(data=param) + def build(self, param: str) -> Data: + return Data(data=param) diff --git a/src/backend/base/langflow/components/helpers/DocumentToRecord.py b/src/backend/base/langflow/components/helpers/DocumentToRecord.py index 5adaf7ab4..6c3df044d 100644 --- a/src/backend/base/langflow/components/helpers/DocumentToRecord.py +++ b/src/backend/base/langflow/components/helpers/DocumentToRecord.py @@ -3,7 +3,7 @@ from typing import List from langchain_core.documents import Document from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class DocumentToRecordComponent(CustomComponent): @@ -14,9 +14,9 @@ class DocumentToRecordComponent(CustomComponent): "documents": {"display_name": "Documents"}, } - def build(self, documents: List[Document]) -> List[Record]: + def build(self, documents: List[Document]) -> List[Data]: if isinstance(documents, Document): documents = [documents] - records = [Record.from_document(document) for document in documents] - self.status = records - return records + data = [Data.from_document(document) for document in documents] + self.status = data + return data diff --git a/src/backend/base/langflow/components/helpers/MemoryComponent.py b/src/backend/base/langflow/components/helpers/MemoryComponent.py index 96e82da1e..e14a404b1 100644 --- a/src/backend/base/langflow/components/helpers/MemoryComponent.py +++ b/src/backend/base/langflow/components/helpers/MemoryComponent.py @@ -36,9 +36,9 @@ class MemoryComponent(BaseMemoryComponent): "advanced": True, }, "record_template": { - "display_name": "Record Template", + "display_name": "Data Template", "multiline": True, - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", "advanced": True, }, } diff --git a/src/backend/base/langflow/components/helpers/MessageHistory.py b/src/backend/base/langflow/components/helpers/MessageHistory.py index 90191e7d0..5933c27aa 100644 --- a/src/backend/base/langflow/components/helpers/MessageHistory.py +++ b/src/backend/base/langflow/components/helpers/MessageHistory.py @@ -2,7 +2,7 @@ from typing import List, Optional from langflow.custom import CustomComponent from langflow.memory import get_messages -from langflow.schema import Record +from langflow.schema import Data class MessageHistoryComponent(CustomComponent): @@ -43,7 +43,7 @@ class MessageHistoryComponent(CustomComponent): session_id: Optional[str] = None, n_messages: int = 100, order: Optional[str] = "Descending", - ) -> List[Record]: + ) -> List[Data]: order = "DESC" if order == "Descending" else "ASC" if sender == "Machine and User": sender = None diff --git a/src/backend/base/langflow/components/helpers/RecordsToText.py b/src/backend/base/langflow/components/helpers/RecordsToText.py index 049c99243..515cc5ef8 100644 --- a/src/backend/base/langflow/components/helpers/RecordsToText.py +++ b/src/backend/base/langflow/components/helpers/RecordsToText.py @@ -1,7 +1,7 @@ from langflow.custom import CustomComponent from langflow.field_typing import Text -from langflow.helpers.record import records_to_text -from langflow.schema import Record +from langflow.helpers.record import data_to_text +from langflow.schema import Data class RecordsToTextComponent(CustomComponent): @@ -10,27 +10,27 @@ class RecordsToTextComponent(CustomComponent): def build_config(self): return { - "records": { + "data": { "display_name": "Records", - "info": "The records to convert to text.", + "info": "The data to convert to text.", }, "template": { "display_name": "Template", - "info": "The template to use for formatting the records. 
It can contain the keys {text}, {data} or any other key in the Record.", + "info": "The template to use for formatting the data. It can contain the keys {text}, {data} or any other key in the Data.", "multiline": True, }, } def build( self, - records: list[Record], + data: list[Data], template: str = "Text: {text}\nData: {data}", ) -> Text: - if not records: + if not data: return "" - if isinstance(records, Record): - records = [records] + if isinstance(data, Data): + data = [data] - result_string = records_to_text(template, records) + result_string = data_to_text(template, data) self.status = result_string return result_string diff --git a/src/backend/base/langflow/components/helpers/UpdateRecord.py b/src/backend/base/langflow/components/helpers/UpdateRecord.py index e3153d6d7..545745801 100644 --- a/src/backend/base/langflow/components/helpers/UpdateRecord.py +++ b/src/backend/base/langflow/components/helpers/UpdateRecord.py @@ -1,15 +1,15 @@ from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class UpdateRecordComponent(CustomComponent): - display_name = "Update Record" - description = "Update Record with text-based key/value pairs, similar to updating a Python dictionary." + display_name = "Update Data" + description = "Update Data with text-based key/value pairs, similar to updating a Python dictionary." def build_config(self): return { "record": { - "display_name": "Record", + "display_name": "Data", "info": "The record to update.", }, "new_data": { @@ -21,18 +21,18 @@ class UpdateRecordComponent(CustomComponent): def build( self, - record: Record, + record: Data, new_data: dict, - ) -> Record: + ) -> Data: """ Updates a record with new data. Args: - record (Record): The record to update. + record (Data): The record to update. new_data (dict): The new data to update the record with. Returns: - Record: The updated record. + Data: The updated record. 
""" record.data.update(new_data) self.status = record diff --git a/src/backend/base/langflow/components/inputs/TextInput.py b/src/backend/base/langflow/components/inputs/TextInput.py index 7fdcbb501..307c35fcf 100644 --- a/src/backend/base/langflow/components/inputs/TextInput.py +++ b/src/backend/base/langflow/components/inputs/TextInput.py @@ -13,15 +13,15 @@ class TextInput(TextComponent): name="input_value", type=str, display_name="Value", - info="Text or Record to be passed as input.", - input_types=["Record", "Text"], + info="Text or Data to be passed as input.", + input_types=["Data", "Text"], ), Input( name="record_template", type=str, - display_name="Record Template", + display_name="Data Template", multiline=True, - info="Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + info="Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.", advanced=True, ), ] diff --git a/src/backend/base/langflow/components/langchain_utilities/SearchApi.py b/src/backend/base/langflow/components/langchain_utilities/SearchApi.py index 3e6721fd6..5dfd55250 100644 --- a/src/backend/base/langflow/components/langchain_utilities/SearchApi.py +++ b/src/backend/base/langflow/components/langchain_utilities/SearchApi.py @@ -3,7 +3,7 @@ from typing import Optional from langchain_community.utilities.searchapi import SearchApiAPIWrapper from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data from langflow.services.database.models.base import orjson_dumps @@ -37,7 +37,7 @@ class SearchApi(CustomComponent): engine: str, api_key: str, params: Optional[dict] = None, - ) -> Record: + ) -> Data: if params is None: params = {} @@ -48,6 +48,6 @@ class SearchApi(CustomComponent): result = orjson_dumps(results, indent_2=False) - record = Record(data=result) + record = Data(data=result) self.status = record return record diff --git 
a/src/backend/base/langflow/components/memories/AstraDBMessageReader.py b/src/backend/base/langflow/components/memories/AstraDBMessageReader.py index a55c99b1d..dfb8ef86a 100644 --- a/src/backend/base/langflow/components/memories/AstraDBMessageReader.py +++ b/src/backend/base/langflow/components/memories/AstraDBMessageReader.py @@ -1,7 +1,7 @@ from typing import Optional, cast from langflow.base.memory.memory import BaseMemoryComponent -from langflow.schema import Record +from langflow.schema import Data class AstraDBMessageReaderComponent(BaseMemoryComponent): @@ -38,7 +38,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent): }, } - def get_messages(self, **kwargs) -> list[Record]: + def get_messages(self, **kwargs) -> list[Data]: """ Retrieves messages from the AstraDBChatMessageHistory memory. @@ -46,7 +46,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent): memory (AstraDBChatMessageHistory): The AstraDBChatMessageHistory instance to retrieve messages from. Returns: - list[Record]: A list of Record objects representing the search results. + list[Data]: A list of Data objects representing the search results. 
""" try: from langchain_astradb.chat_message_histories import AstraDBChatMessageHistory @@ -62,7 +62,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent): # Get messages from the memory messages = memory.messages - results = [Record.from_lc_message(message) for message in messages] + results = [Data.from_lc_message(message) for message in messages] return list(results) @@ -73,7 +73,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent): token: str, api_endpoint: str, namespace: Optional[str] = None, - ) -> list[Record]: + ) -> list[Data]: try: from langchain_astradb.chat_message_histories import AstraDBChatMessageHistory except ImportError: @@ -90,7 +90,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent): namespace=namespace, ) - records = self.get_messages(memory=memory) - self.status = records + data = self.get_messages(memory=memory) + self.status = data - return records + return data diff --git a/src/backend/base/langflow/components/memories/AstraDBMessageWriter.py b/src/backend/base/langflow/components/memories/AstraDBMessageWriter.py index 2ec80d603..497104c9d 100644 --- a/src/backend/base/langflow/components/memories/AstraDBMessageWriter.py +++ b/src/backend/base/langflow/components/memories/AstraDBMessageWriter.py @@ -3,7 +3,7 @@ from typing import Optional from langchain_core.messages import BaseMessage from langflow.base.memory.memory import BaseMemoryComponent -from langflow.schema import Record +from langflow.schema import Data class AstraDBMessageWriterComponent(BaseMemoryComponent): @@ -13,8 +13,8 @@ class AstraDBMessageWriterComponent(BaseMemoryComponent): def build_config(self): return { "input_value": { - "display_name": "Input Record", - "info": "Record to write to Astra DB.", + "display_name": "Input Data", + "info": "Data to write to Astra DB.", }, "session_id": { "display_name": "Session ID", @@ -96,13 +96,13 @@ class AstraDBMessageWriterComponent(BaseMemoryComponent): def build( self, - input_value: Record, + 
input_value: Data, session_id: str, collection_name: str, token: str, api_endpoint: str, namespace: Optional[str] = None, - ) -> Record: + ) -> Data: try: from langchain_astradb.chat_message_histories import AstraDBChatMessageHistory except ImportError: diff --git a/src/backend/base/langflow/components/memories/CassandraMessageReader.py b/src/backend/base/langflow/components/memories/CassandraMessageReader.py index 3fd11d772..a8bd1c365 100644 --- a/src/backend/base/langflow/components/memories/CassandraMessageReader.py +++ b/src/backend/base/langflow/components/memories/CassandraMessageReader.py @@ -3,7 +3,7 @@ from typing import Optional, cast from langchain_community.chat_message_histories import CassandraChatMessageHistory from langflow.base.memory.memory import BaseMemoryComponent -from langflow.schema.record import Record +from langflow.schema.data import Data class CassandraMessageReaderComponent(BaseMemoryComponent): @@ -38,7 +38,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent): }, } - def get_messages(self, **kwargs) -> list[Record]: + def get_messages(self, **kwargs) -> list[Data]: """ Retrieves messages from the CassandraChatMessageHistory memory. @@ -46,7 +46,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent): memory (CassandraChatMessageHistory): The CassandraChatMessageHistory instance to retrieve messages from. Returns: - list[Record]: A list of Record objects representing the search results. + list[Data]: A list of Data objects representing the search results. 
""" memory: CassandraChatMessageHistory = cast(CassandraChatMessageHistory, kwargs.get("memory")) if not memory: @@ -54,7 +54,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent): # Get messages from the memory messages = memory.messages - results = [Record.from_lc_message(message) for message in messages] + results = [Data.from_lc_message(message) for message in messages] return list(results) @@ -65,7 +65,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent): token: str, database_id: str, keyspace: Optional[str] = None, - ) -> list[Record]: + ) -> list[Data]: try: import cassio except ImportError: @@ -80,7 +80,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent): keyspace=keyspace, ) - records = self.get_messages(memory=memory) - self.status = records + data = self.get_messages(memory=memory) + self.status = data - return records + return data diff --git a/src/backend/base/langflow/components/memories/CassandraMessageWriter.py b/src/backend/base/langflow/components/memories/CassandraMessageWriter.py index c8e3831a5..15da27274 100644 --- a/src/backend/base/langflow/components/memories/CassandraMessageWriter.py +++ b/src/backend/base/langflow/components/memories/CassandraMessageWriter.py @@ -4,7 +4,7 @@ from langchain_community.chat_message_histories import CassandraChatMessageHisto from langchain_core.messages import BaseMessage from langflow.base.memory.memory import BaseMemoryComponent -from langflow.schema.record import Record +from langflow.schema.data import Data class CassandraMessageWriterComponent(BaseMemoryComponent): @@ -14,8 +14,8 @@ class CassandraMessageWriterComponent(BaseMemoryComponent): def build_config(self): return { "input_value": { - "display_name": "Input Record", - "info": "Record to write to Cassandra.", + "display_name": "Input Data", + "info": "Data to write to Cassandra.", }, "session_id": { "display_name": "Session ID", @@ -93,14 +93,14 @@ class CassandraMessageWriterComponent(BaseMemoryComponent): def 
build( self, - input_value: Record, + input_value: Data, session_id: str, table_name: str, token: str, database_id: str, keyspace: Optional[str] = None, ttl_seconds: Optional[int] = None, - ) -> Record: + ) -> Data: try: import cassio except ImportError: diff --git a/src/backend/base/langflow/components/memories/ZepMessageReader.py b/src/backend/base/langflow/components/memories/ZepMessageReader.py index feef017a6..89a16587b 100644 --- a/src/backend/base/langflow/components/memories/ZepMessageReader.py +++ b/src/backend/base/langflow/components/memories/ZepMessageReader.py @@ -4,7 +4,7 @@ from langchain_community.chat_message_histories.zep import SearchScope, SearchTy from langflow.base.memory.memory import BaseMemoryComponent from langflow.field_typing import Text -from langflow.schema import Record +from langflow.schema import Data class ZepMessageReaderComponent(BaseMemoryComponent): @@ -60,7 +60,7 @@ class ZepMessageReaderComponent(BaseMemoryComponent): }, } - def get_messages(self, **kwargs) -> list[Record]: + def get_messages(self, **kwargs) -> list[Data]: """ Retrieves messages from the ZepChatMessageHistory memory. @@ -75,7 +75,7 @@ class ZepMessageReaderComponent(BaseMemoryComponent): limit (int, optional): The maximum number of search results to return. Defaults to None. Returns: - list[Record]: A list of Record objects representing the search results. + list[Data]: A list of Data objects representing the search results. 
""" memory: ZepChatMessageHistory = cast(ZepChatMessageHistory, kwargs.get("memory")) if not memory: @@ -103,10 +103,10 @@ class ZepMessageReaderComponent(BaseMemoryComponent): result_dict["metadata"] = result.metadata result_dict["score"] = result.score result_dicts.append(result_dict) - results = [Record(data=result_dict) for result_dict in result_dicts] + results = [Data(data=result_dict) for result_dict in result_dicts] else: messages = memory.messages - results = [Record.from_lc_message(message) for message in messages] + results = [Data.from_lc_message(message) for message in messages] return results def build( @@ -119,7 +119,7 @@ class ZepMessageReaderComponent(BaseMemoryComponent): search_scope: str = SearchScope.messages, search_type: str = SearchType.similarity, limit: Optional[int] = None, - ) -> list[Record]: + ) -> list[Data]: try: # Monkeypatch API_BASE_PATH to # avoid 404 @@ -139,12 +139,12 @@ class ZepMessageReaderComponent(BaseMemoryComponent): zep_client = ZepClient(api_url=url, api_key=api_key) memory = ZepChatMessageHistory(session_id=session_id, zep_client=zep_client) - records = self.get_messages( + data = self.get_messages( memory=memory, query=query, search_scope=search_scope, search_type=search_type, limit=limit, ) - self.status = records - return records + self.status = data + return data diff --git a/src/backend/base/langflow/components/memories/ZepMessageWriter.py b/src/backend/base/langflow/components/memories/ZepMessageWriter.py index c3d55a721..cc343488e 100644 --- a/src/backend/base/langflow/components/memories/ZepMessageWriter.py +++ b/src/backend/base/langflow/components/memories/ZepMessageWriter.py @@ -2,7 +2,7 @@ from typing import TYPE_CHECKING, Optional from langflow.base.memory.memory import BaseMemoryComponent from langflow.field_typing import Text -from langflow.schema import Record +from langflow.schema import Data if TYPE_CHECKING: from zep_python.langchain import ZepChatMessageHistory @@ -35,8 +35,8 @@ class 
ZepMessageWriterComponent(BaseMemoryComponent): "advanced": True, }, "input_value": { - "display_name": "Input Record", - "info": "Record to write to Zep.", + "display_name": "Input Data", + "info": "Data to write to Zep.", }, "api_base_path": { "display_name": "API Base Path", @@ -78,12 +78,12 @@ class ZepMessageWriterComponent(BaseMemoryComponent): def build( self, - input_value: Record, + input_value: Data, session_id: Text, api_base_path: str = "api/v1", url: Optional[Text] = None, api_key: Optional[Text] = None, - ) -> Record: + ) -> Data: try: # Monkeypatch API_BASE_PATH to # avoid 404 diff --git a/src/backend/base/langflow/components/models/AmazonBedrockModel.py b/src/backend/base/langflow/components/models/AmazonBedrockModel.py index 99229deb2..296f5d1f9 100644 --- a/src/backend/base/langflow/components/models/AmazonBedrockModel.py +++ b/src/backend/base/langflow/components/models/AmazonBedrockModel.py @@ -58,7 +58,7 @@ class AmazonBedrockComponent(LCModelComponent): "advanced": True, }, "cache": {"display_name": "Cache"}, - "input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]}, + "input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]}, "system_message": { "display_name": "System Message", "info": "System message to pass to the model.", diff --git a/src/backend/base/langflow/components/models/AnthropicModel.py b/src/backend/base/langflow/components/models/AnthropicModel.py index bac7708d4..4a7e1a330 100644 --- a/src/backend/base/langflow/components/models/AnthropicModel.py +++ b/src/backend/base/langflow/components/models/AnthropicModel.py @@ -63,7 +63,7 @@ class AnthropicLLM(LCModelComponent): "info": "Endpoint of the Anthropic API. 
Defaults to 'https://api.anthropic.com' if not specified.", }, "code": {"show": False}, - "input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]}, + "input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]}, "stream": { "display_name": "Stream", "advanced": True, diff --git a/src/backend/base/langflow/components/models/AzureOpenAIModel.py b/src/backend/base/langflow/components/models/AzureOpenAIModel.py index 24848ff66..ec4e375b1 100644 --- a/src/backend/base/langflow/components/models/AzureOpenAIModel.py +++ b/src/backend/base/langflow/components/models/AzureOpenAIModel.py @@ -5,7 +5,7 @@ from pydantic.v1 import SecretStr from langflow.base.constants import STREAM_INFO_TEXT from langflow.base.models.model import LCModelComponent -from langflow.field_typing import Text, BaseLanguageModel +from langflow.field_typing import BaseLanguageModel, Text from langflow.template import Input, Output @@ -63,7 +63,7 @@ class AzureChatOpenAIComponent(LCModelComponent): advanced=True, info="The maximum number of tokens to generate. 
Set to 0 for unlimited tokens.", ), - Input(name="input_value", type=str, display_name="Input", input_types=["Text", "Record", "Prompt"]), + Input(name="input_value", type=str, display_name="Input", input_types=["Text", "Data", "Prompt"]), Input(name="stream", type=bool, display_name="Stream", info=STREAM_INFO_TEXT, advanced=True), Input( name="system_message", diff --git a/src/backend/base/langflow/components/models/BaiduQianfanChatModel.py b/src/backend/base/langflow/components/models/BaiduQianfanChatModel.py index aaae3112f..927d85668 100644 --- a/src/backend/base/langflow/components/models/BaiduQianfanChatModel.py +++ b/src/backend/base/langflow/components/models/BaiduQianfanChatModel.py @@ -81,7 +81,7 @@ class QianfanChatEndpointComponent(LCModelComponent): "info": "Endpoint of the Qianfan LLM, required if custom model used.", }, "code": {"show": False}, - "input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]}, + "input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]}, "stream": { "display_name": "Stream", "info": STREAM_INFO_TEXT, diff --git a/src/backend/base/langflow/components/models/ChatLiteLLMModel.py b/src/backend/base/langflow/components/models/ChatLiteLLMModel.py index aa3cf6976..d8ebed14c 100644 --- a/src/backend/base/langflow/components/models/ChatLiteLLMModel.py +++ b/src/backend/base/langflow/components/models/ChatLiteLLMModel.py @@ -111,7 +111,7 @@ class ChatLiteLLMModelComponent(LCModelComponent): "required": False, "default": False, }, - "input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]}, + "input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]}, "stream": { "display_name": "Stream", "info": STREAM_INFO_TEXT, diff --git a/src/backend/base/langflow/components/models/CohereModel.py b/src/backend/base/langflow/components/models/CohereModel.py index b5ecbab9f..3b785ff83 100644 --- 
a/src/backend/base/langflow/components/models/CohereModel.py +++ b/src/backend/base/langflow/components/models/CohereModel.py @@ -43,7 +43,7 @@ class CohereComponent(LCModelComponent): "type": "float", "show": True, }, - "input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]}, + "input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]}, "stream": { "display_name": "Stream", "info": STREAM_INFO_TEXT, diff --git a/src/backend/base/langflow/components/models/GroqModel.py b/src/backend/base/langflow/components/models/GroqModel.py index 022aece47..825dccc5b 100644 --- a/src/backend/base/langflow/components/models/GroqModel.py +++ b/src/backend/base/langflow/components/models/GroqModel.py @@ -57,7 +57,7 @@ class GroqModelComponent(LCModelComponent): info="The name of the model to use. Supported examples: gemini-pro", options=MODEL_NAMES, ), - Input(name="input_value", field_type=str, display_name="Input", input_types=["Text", "Record", "Prompt"]), + Input(name="input_value", field_type=str, display_name="Input", input_types=["Text", "Data", "Prompt"]), Input(name="stream", field_type=bool, display_name="Stream", advanced=True, info=STREAM_INFO_TEXT), Input( name="system_message", diff --git a/src/backend/base/langflow/components/models/HuggingFaceModel.py b/src/backend/base/langflow/components/models/HuggingFaceModel.py index 949598b2d..fa3414c9f 100644 --- a/src/backend/base/langflow/components/models/HuggingFaceModel.py +++ b/src/backend/base/langflow/components/models/HuggingFaceModel.py @@ -37,7 +37,7 @@ class HuggingFaceEndpointsComponent(LCModelComponent): "advanced": True, }, "code": {"show": False}, - "input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]}, + "input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]}, "stream": { "display_name": "Stream", "info": STREAM_INFO_TEXT, diff --git a/src/backend/base/langflow/components/models/MistralModel.py 
b/src/backend/base/langflow/components/models/MistralModel.py index 75937e70d..bfd92405e 100644 --- a/src/backend/base/langflow/components/models/MistralModel.py +++ b/src/backend/base/langflow/components/models/MistralModel.py @@ -27,7 +27,7 @@ class MistralAIModelComponent(LCModelComponent): def build_config(self): return { - "input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]}, + "input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]}, "max_tokens": { "display_name": "Max Tokens", "advanced": True, diff --git a/src/backend/base/langflow/components/models/OllamaModel.py b/src/backend/base/langflow/components/models/OllamaModel.py index 2c7d3bae7..8e999f83d 100644 --- a/src/backend/base/langflow/components/models/OllamaModel.py +++ b/src/backend/base/langflow/components/models/OllamaModel.py @@ -120,7 +120,7 @@ class ChatOllamaComponent(LCModelComponent): info="Controls the creativity of model responses.", value=0.8, ), - Input(name="input_value", type=str, display_name="Input", input_types=["Text", "Record", "Prompt"]), + Input(name="input_value", type=str, display_name="Input", input_types=["Text", "Data", "Prompt"]), Input(name="stream", type=bool, display_name="Stream", info=STREAM_INFO_TEXT, value=False), Input( name="system_message", diff --git a/src/backend/base/langflow/components/models/OpenAIModel.py b/src/backend/base/langflow/components/models/OpenAIModel.py index 43b7e774d..e9130d520 100644 --- a/src/backend/base/langflow/components/models/OpenAIModel.py +++ b/src/backend/base/langflow/components/models/OpenAIModel.py @@ -16,7 +16,7 @@ class OpenAIModelComponent(LCModelComponent): icon = "OpenAI" inputs = [ - StrInput(name="input_value", display_name="Input", input_types=["Text", "Record", "Prompt"]), + StrInput(name="input_value", display_name="Input", input_types=["Text", "Data", "Prompt"]), IntInput( name="max_tokens", display_name="Max Tokens", diff --git 
a/src/backend/base/langflow/components/models/VertexAiModel.py b/src/backend/base/langflow/components/models/VertexAiModel.py index 33bbbbc46..aed218c08 100644 --- a/src/backend/base/langflow/components/models/VertexAiModel.py +++ b/src/backend/base/langflow/components/models/VertexAiModel.py @@ -73,7 +73,7 @@ class ChatVertexAIComponent(LCModelComponent): "value": False, "advanced": True, }, - "input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]}, + "input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]}, "stream": { "display_name": "Stream", "info": STREAM_INFO_TEXT, diff --git a/src/backend/base/langflow/components/outputs/ChatOutput.py b/src/backend/base/langflow/components/outputs/ChatOutput.py index 3b7c74927..5a63fc422 100644 --- a/src/backend/base/langflow/components/outputs/ChatOutput.py +++ b/src/backend/base/langflow/components/outputs/ChatOutput.py @@ -30,10 +30,10 @@ class ChatOutput(ChatComponent): StrInput(name="session_id", display_name="Session ID", info="Session ID for the message.", advanced=True), BoolInput( name="record_template", - display_name="Record Template", + display_name="Data Template", value="{text}", advanced=True, - info="Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + info="Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", ), ] outputs = [ diff --git a/src/backend/base/langflow/components/outputs/RecordsOutput.py b/src/backend/base/langflow/components/outputs/RecordsOutput.py index a8a288864..c5c4630bf 100644 --- a/src/backend/base/langflow/components/outputs/RecordsOutput.py +++ b/src/backend/base/langflow/components/outputs/RecordsOutput.py @@ -1,5 +1,5 @@ from langflow.custom import Component -from langflow.schema import Record +from langflow.schema import Data from langflow.template import Input, Output @@ -8,12 +8,12 @@ class RecordsOutput(Component): description = "Display Records as a Table" inputs = [ - Input(name="input_value", type=Record, display_name="Record Input"), + Input(name="input_value", type=Data, display_name="Data Input"), ] outputs = [ - Output(display_name="Record", name="record", method="record_response"), + Output(display_name="Data", name="record", method="record_response"), ] - def record_response(self) -> Record: + def record_response(self) -> Data: self.status = self.input_value return self.input_value diff --git a/src/backend/base/langflow/components/outputs/TextOutput.py b/src/backend/base/langflow/components/outputs/TextOutput.py index d4615b418..e6fe65bad 100644 --- a/src/backend/base/langflow/components/outputs/TextOutput.py +++ b/src/backend/base/langflow/components/outputs/TextOutput.py @@ -13,15 +13,15 @@ class TextOutput(TextComponent): name="input_value", type=str, display_name="Value", - info="Text or Record to be passed as output.", - input_types=["Record", "Text"], + info="Text or Data to be passed as output.", + input_types=["Data", "Text"], ), Input( name="record_template", type=str, - display_name="Record Template", + display_name="Data Template", multiline=True, - info="Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + info="Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", advanced=True, ), ] diff --git a/src/backend/base/langflow/components/retrievers/SelfQueryRetriever.py b/src/backend/base/langflow/components/retrievers/SelfQueryRetriever.py index 3e6d6f696..cbe001de3 100644 --- a/src/backend/base/langflow/components/retrievers/SelfQueryRetriever.py +++ b/src/backend/base/langflow/components/retrievers/SelfQueryRetriever.py @@ -5,7 +5,7 @@ from langchain_core.vectorstores import VectorStore from langflow.custom import CustomComponent from langflow.field_typing import BaseLanguageModel, Text -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.message import Message @@ -43,11 +43,11 @@ class SelfQueryRetrieverComponent(CustomComponent): self, query: Message, vectorstore: VectorStore, - attribute_infos: list[Record], + attribute_infos: list[Data], document_content_description: Text, llm: BaseLanguageModel, - ) -> Record: - metadata_field_infos = [AttributeInfo(**record.data) for record in attribute_infos] + ) -> Data: + metadata_field_infos = [AttributeInfo(**value.data) for value in attribute_infos] self_query_retriever = SelfQueryRetriever.from_llm( llm=llm, vectorstore=vectorstore, @@ -63,6 +63,6 @@ class SelfQueryRetrieverComponent(CustomComponent): else: raise ValueError(f"Query type {type(query)} not supported.") documents = self_query_retriever.invoke(input=input_text) - records = [Record.from_document(document) for document in documents] - self.status = records - return records + data = [Data.from_document(document) for document in documents] + self.status = data + return data diff --git a/src/backend/base/langflow/components/textsplitters/CharacterTextSplitter.py b/src/backend/base/langflow/components/textsplitters/CharacterTextSplitter.py index 9f60d7c88..c0f00b078 100644 --- a/src/backend/base/langflow/components/textsplitters/CharacterTextSplitter.py +++ 
b/src/backend/base/langflow/components/textsplitters/CharacterTextSplitter.py @@ -3,7 +3,7 @@ from typing import List from langchain_text_splitters import CharacterTextSplitter from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data from langflow.utils.util import unescape_string @@ -13,7 +13,7 @@ class CharacterTextSplitterComponent(CustomComponent): def build_config(self): return { - "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]}, + "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]}, "chunk_overlap": {"display_name": "Chunk Overlap", "default": 200}, "chunk_size": {"display_name": "Chunk Size", "default": 1000}, "separator": {"display_name": "Separator", "default": "\n"}, @@ -21,16 +21,16 @@ class CharacterTextSplitterComponent(CustomComponent): def build( self, - inputs: List[Record], + inputs: List[Data], chunk_overlap: int = 200, chunk_size: int = 1000, separator: str = "\n", - ) -> List[Record]: + ) -> List[Data]: # separator may come escaped from the frontend separator = unescape_string(separator) documents = [] for _input in inputs: - if isinstance(_input, Record): + if isinstance(_input, Data): documents.append(_input.to_lc_document()) else: documents.append(_input) @@ -39,6 +39,6 @@ class CharacterTextSplitterComponent(CustomComponent): chunk_size=chunk_size, separator=separator, ).split_documents(documents) - records = self.to_records(docs) - self.status = records - return records + data = self.to_data(docs) + self.status = data + return data diff --git a/src/backend/base/langflow/components/textsplitters/LanguageRecursiveTextSplitter.py b/src/backend/base/langflow/components/textsplitters/LanguageRecursiveTextSplitter.py index a43fdcd72..4c074e861 100644 --- a/src/backend/base/langflow/components/textsplitters/LanguageRecursiveTextSplitter.py +++ b/src/backend/base/langflow/components/textsplitters/LanguageRecursiveTextSplitter.py @@ -3,7 +3,7 @@ 
from typing import List, Optional from langchain_text_splitters import Language, RecursiveCharacterTextSplitter from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class LanguageRecursiveTextSplitterComponent(CustomComponent): @@ -14,7 +14,7 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent): def build_config(self): options = [x.value for x in Language] return { - "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]}, + "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]}, "separator_type": { "display_name": "Separator Type", "info": "The type of separator to use.", @@ -44,11 +44,11 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent): def build( self, - inputs: List[Record], + inputs: List[Data], chunk_size: Optional[int] = 1000, chunk_overlap: Optional[int] = 200, separator_type: str = "Python", - ) -> list[Record]: + ) -> list[Data]: """ Split text into chunks of a specified length. 
@@ -75,10 +75,10 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent): ) documents = [] for _input in inputs: - if isinstance(_input, Record): + if isinstance(_input, Data): documents.append(_input.to_lc_document()) else: documents.append(_input) docs = splitter.split_documents(documents) - records = self.to_records(docs) - return records + data = self.to_data(docs) + return data diff --git a/src/backend/base/langflow/components/textsplitters/RecursiveCharacterTextSplitter.py b/src/backend/base/langflow/components/textsplitters/RecursiveCharacterTextSplitter.py index 77fcfa62a..abd42aa53 100644 --- a/src/backend/base/langflow/components/textsplitters/RecursiveCharacterTextSplitter.py +++ b/src/backend/base/langflow/components/textsplitters/RecursiveCharacterTextSplitter.py @@ -4,8 +4,8 @@ from langchain_core.documents import Document from langchain_text_splitters import RecursiveCharacterTextSplitter from langflow.custom import CustomComponent -from langflow.schema import Record -from langflow.utils.util import build_loader_repr_from_records, unescape_string +from langflow.schema import Data +from langflow.utils.util import build_loader_repr_from_data, unescape_string class RecursiveCharacterTextSplitterComponent(CustomComponent): @@ -18,7 +18,7 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent): "inputs": { "display_name": "Input", "info": "The texts to split.", - "input_types": ["Document", "Record"], + "input_types": ["Document", "Data"], }, "separators": { "display_name": "Separators", @@ -46,7 +46,7 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent): separators: Optional[list[str]] = None, chunk_size: Optional[int] = 1000, chunk_overlap: Optional[int] = 200, - ) -> list[Record]: + ) -> list[Data]: """ Split text into chunks of a specified length. 
@@ -79,11 +79,11 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent): ) documents = [] for _input in inputs: - if isinstance(_input, Record): + if isinstance(_input, Data): documents.append(_input.to_lc_document()) else: documents.append(_input) docs = splitter.split_documents(documents) - records = self.to_records(docs) - self.repr_value = build_loader_repr_from_records(records) - return records + data = self.to_data(docs) + self.repr_value = build_loader_repr_from_data(data) + return data diff --git a/src/backend/base/langflow/components/tools/SearchApi.py b/src/backend/base/langflow/components/tools/SearchApi.py index 3e6721fd6..5dfd55250 100644 --- a/src/backend/base/langflow/components/tools/SearchApi.py +++ b/src/backend/base/langflow/components/tools/SearchApi.py @@ -3,7 +3,7 @@ from typing import Optional from langchain_community.utilities.searchapi import SearchApiAPIWrapper from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data from langflow.services.database.models.base import orjson_dumps @@ -37,7 +37,7 @@ class SearchApi(CustomComponent): engine: str, api_key: str, params: Optional[dict] = None, - ) -> Record: + ) -> Data: if params is None: params = {} @@ -48,6 +48,6 @@ class SearchApi(CustomComponent): result = orjson_dumps(results, indent_2=False) - record = Record(data=result) + record = Data(data=result) self.status = record return record diff --git a/src/backend/base/langflow/components/vectorsearch/AstraDBSearch.py b/src/backend/base/langflow/components/vectorsearch/AstraDBSearch.py index 83ed42daf..dfa4311da 100644 --- a/src/backend/base/langflow/components/vectorsearch/AstraDBSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/AstraDBSearch.py @@ -3,7 +3,7 @@ from typing import List, Optional from langflow.components.vectorstores.AstraDB import AstraDBVectorStoreComponent from langflow.components.vectorstores.base.model import LCVectorStoreComponent from 
langflow.field_typing import Embeddings, Text -from langflow.schema import Record +from langflow.schema import Data class AstraDBSearchComponent(LCVectorStoreComponent): @@ -48,7 +48,7 @@ class AstraDBSearchComponent(LCVectorStoreComponent): }, "batch_size": { "display_name": "Batch Size", - "info": "Optional number of records to process in a single batch.", + "info": "Optional number of Data objects to process in a single batch.", "advanced": True, }, "bulk_insert_batch_concurrency": { @@ -58,7 +58,7 @@ class AstraDBSearchComponent(LCVectorStoreComponent): }, "bulk_insert_overwrite_concurrency": { "display_name": "Bulk Insert Overwrite Concurrency", - "info": "Optional concurrency level for bulk insert operations that overwrite existing records.", + "info": "Optional concurrency level for bulk insert operations that overwrite existing data.", "advanced": True, }, "bulk_delete_concurrency": { @@ -119,7 +119,7 @@ class AstraDBSearchComponent(LCVectorStoreComponent): metadata_indexing_include: Optional[List[str]] = None, metadata_indexing_exclude: Optional[List[str]] = None, collection_indexing_policy: Optional[dict] = None, - ) -> List[Record]: + ) -> List[Data]: vector_store = AstraDBVectorStoreComponent().build( embedding=embedding, collection_name=collection_name, diff --git a/src/backend/base/langflow/components/vectorsearch/CassandraSearch.py b/src/backend/base/langflow/components/vectorsearch/CassandraSearch.py index 8ee558276..b656b99a8 100644 --- a/src/backend/base/langflow/components/vectorsearch/CassandraSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/CassandraSearch.py @@ -1,11 +1,12 @@ from typing import Any, List, Optional, Tuple -from langflow.components.vectorstores.Cassandra import CassandraVectorStoreComponent -from langflow.components.vectorstores.base.model import LCVectorStoreComponent -from langflow.field_typing import Embeddings, Text -from langflow.schema import Record from langchain_community.utilities.cassandra import SetupMode
+from langflow.components.vectorstores.base.model import LCVectorStoreComponent +from langflow.components.vectorstores.Cassandra import CassandraVectorStoreComponent +from langflow.field_typing import Embeddings, Text +from langflow.schema import Data + class CassandraSearchComponent(LCVectorStoreComponent): display_name = "Cassandra Search" @@ -72,7 +73,7 @@ class CassandraSearchComponent(LCVectorStoreComponent): keyspace: Optional[str] = None, body_index_options: Optional[List[Tuple[str, Any]]] = None, setup_mode: SetupMode = SetupMode.SYNC, - ) -> List[Record]: + ) -> List[Data]: vector_store = CassandraVectorStoreComponent().build( embedding=embedding, table_name=table_name, diff --git a/src/backend/base/langflow/components/vectorsearch/ChromaSearch.py b/src/backend/base/langflow/components/vectorsearch/ChromaSearch.py index 228e100e4..6ea230a1d 100644 --- a/src/backend/base/langflow/components/vectorsearch/ChromaSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/ChromaSearch.py @@ -6,7 +6,7 @@ from langchain_chroma import Chroma from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.field_typing import Embeddings, Text -from langflow.schema import Record +from langflow.schema import Data class ChromaSearchComponent(LCVectorStoreComponent): @@ -69,7 +69,7 @@ class ChromaSearchComponent(LCVectorStoreComponent): chroma_server_host: Optional[str] = None, chroma_server_http_port: Optional[int] = None, chroma_server_grpc_port: Optional[int] = None, - ) -> List[Record]: + ) -> List[Data]: """ Builds the Vector Store or BaseRetriever object. @@ -87,7 +87,7 @@ class ChromaSearchComponent(LCVectorStoreComponent): - chroma_server_grpc_port (int, optional): The gRPC port for the Chroma server. Defaults to None. Returns: - - List[Record]: The list of records. + - List[Data]: The list of data. 
""" # Chroma settings diff --git a/src/backend/base/langflow/components/vectorsearch/CouchbaseSearch.py b/src/backend/base/langflow/components/vectorsearch/CouchbaseSearch.py index 2aa23c490..9a3e9b93c 100644 --- a/src/backend/base/langflow/components/vectorsearch/CouchbaseSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/CouchbaseSearch.py @@ -3,7 +3,7 @@ from typing import List from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.components.vectorstores.Couchbase import CouchbaseComponent from langflow.field_typing import Embeddings, Text -from langflow.schema import Record +from langflow.schema import Data class CouchbaseSearchComponent(LCVectorStoreComponent): @@ -51,7 +51,7 @@ class CouchbaseSearchComponent(LCVectorStoreComponent): couchbase_connection_string: str = "", couchbase_username: str = "", couchbase_password: str = "", - ) -> List[Record]: + ) -> List[Data]: vector_store = CouchbaseComponent().build( couchbase_connection_string=couchbase_connection_string, couchbase_username=couchbase_username, diff --git a/src/backend/base/langflow/components/vectorsearch/FAISSSearch.py b/src/backend/base/langflow/components/vectorsearch/FAISSSearch.py index d68f455cc..681c112dd 100644 --- a/src/backend/base/langflow/components/vectorsearch/FAISSSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/FAISSSearch.py @@ -4,7 +4,7 @@ from langchain_community.vectorstores.faiss import FAISS from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.field_typing import Embeddings, Text -from langflow.schema import Record +from langflow.schema import Data class FAISSSearchComponent(LCVectorStoreComponent): @@ -35,7 +35,7 @@ class FAISSSearchComponent(LCVectorStoreComponent): folder_path: str, number_of_results: int = 4, index_name: str = "langflow_index", - ) -> List[Record]: + ) -> List[Data]: if not folder_path: raise ValueError("Folder path is required to save the FAISS 
index.") path = self.resolve_path(folder_path) diff --git a/src/backend/base/langflow/components/vectorsearch/MongoDBAtlasVectorSearch.py b/src/backend/base/langflow/components/vectorsearch/MongoDBAtlasVectorSearch.py index 0ecde1688..50183f959 100644 --- a/src/backend/base/langflow/components/vectorsearch/MongoDBAtlasVectorSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/MongoDBAtlasVectorSearch.py @@ -3,7 +3,7 @@ from typing import List, Optional from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.components.vectorstores.MongoDBAtlasVector import MongoDBAtlasComponent from langflow.field_typing import Embeddings, NestedDict, Text -from langflow.schema import Record +from langflow.schema import Data class MongoDBAtlasSearchComponent(LCVectorStoreComponent): @@ -41,7 +41,7 @@ class MongoDBAtlasSearchComponent(LCVectorStoreComponent): index_name: str = "", mongodb_atlas_cluster_uri: str = "", search_kwargs: Optional[NestedDict] = None, - ) -> List[Record]: + ) -> List[Data]: search_kwargs = search_kwargs or {} vector_store = MongoDBAtlasComponent().build( mongodb_atlas_cluster_uri=mongodb_atlas_cluster_uri, diff --git a/src/backend/base/langflow/components/vectorsearch/PineconeSearch.py b/src/backend/base/langflow/components/vectorsearch/PineconeSearch.py index e995f86f8..63bc04414 100644 --- a/src/backend/base/langflow/components/vectorsearch/PineconeSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/PineconeSearch.py @@ -6,7 +6,7 @@ from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.components.vectorstores.Pinecone import PineconeComponent from langflow.field_typing import Embeddings, Text from langflow.field_typing.constants import NestedDict -from langflow.schema import Record +from langflow.schema import Data class PineconeSearchComponent(PineconeComponent, LCVectorStoreComponent): @@ -70,7 +70,7 @@ class PineconeSearchComponent(PineconeComponent, 
LCVectorStoreComponent): namespace: Optional[str] = "default", search_type: str = "similarity", search_kwargs: Optional[NestedDict] = None, - ) -> List[Record]: # type: ignore[override] + ) -> List[Data]: # type: ignore[override] vector_store = super().build( embedding=embedding, distance_strategy=distance_strategy, diff --git a/src/backend/base/langflow/components/vectorsearch/QdrantSearch.py b/src/backend/base/langflow/components/vectorsearch/QdrantSearch.py index a64343e17..e8311b31b 100644 --- a/src/backend/base/langflow/components/vectorsearch/QdrantSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/QdrantSearch.py @@ -3,7 +3,7 @@ from typing import List, Optional from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.components.vectorstores.Qdrant import QdrantComponent from langflow.field_typing import Embeddings, NestedDict, Text -from langflow.schema import Record +from langflow.schema import Data class QdrantSearchComponent(QdrantComponent, LCVectorStoreComponent): @@ -70,7 +70,7 @@ class QdrantSearchComponent(QdrantComponent, LCVectorStoreComponent): search_kwargs: Optional[NestedDict] = None, timeout: Optional[int] = None, url: Optional[str] = None, - ) -> List[Record]: # type: ignore[override] + ) -> List[Data]: # type: ignore[override] vector_store = super().build( embedding=embedding, collection_name=collection_name, diff --git a/src/backend/base/langflow/components/vectorsearch/RedisSearch.py b/src/backend/base/langflow/components/vectorsearch/RedisSearch.py index 75aba7f8a..eb51a914e 100644 --- a/src/backend/base/langflow/components/vectorsearch/RedisSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/RedisSearch.py @@ -5,7 +5,7 @@ from langchain_core.embeddings import Embeddings from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.components.vectorstores.Redis import RedisComponent from langflow.field_typing import Text -from langflow.schema 
import Record +from langflow.schema import Data class RedisSearchComponent(RedisComponent, LCVectorStoreComponent): @@ -55,7 +55,7 @@ class RedisSearchComponent(RedisComponent, LCVectorStoreComponent): redis_index_name: str, number_of_results: int = 4, schema: Optional[str] = None, - ) -> List[Record]: + ) -> List[Data]: """ Builds the Vector Store or BaseRetriever object. diff --git a/src/backend/base/langflow/components/vectorsearch/SupabaseVectorStoreSearch.py b/src/backend/base/langflow/components/vectorsearch/SupabaseVectorStoreSearch.py index aef1c13b7..4617d3e44 100644 --- a/src/backend/base/langflow/components/vectorsearch/SupabaseVectorStoreSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/SupabaseVectorStoreSearch.py @@ -5,7 +5,7 @@ from supabase.client import Client, create_client from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.field_typing import Embeddings, Text -from langflow.schema import Record +from langflow.schema import Data class SupabaseSearchComponent(LCVectorStoreComponent): @@ -43,7 +43,7 @@ class SupabaseSearchComponent(LCVectorStoreComponent): supabase_service_key: str = "", supabase_url: str = "", table_name: str = "", - ) -> List[Record]: + ) -> List[Data]: supabase: Client = create_client(supabase_url, supabase_key=supabase_service_key) vector_store = SupabaseVectorStore( client=supabase, diff --git a/src/backend/base/langflow/components/vectorsearch/UpstashSearch.py b/src/backend/base/langflow/components/vectorsearch/UpstashSearch.py index 506896e2b..93490b83c 100644 --- a/src/backend/base/langflow/components/vectorsearch/UpstashSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/UpstashSearch.py @@ -5,7 +5,7 @@ from langchain_core.embeddings import Embeddings from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.components.vectorstores.Upstash import UpstashVectorStoreComponent from langflow.field_typing import Text -from 
langflow.schema import Record +from langflow.schema import Data class UpstashSearchComponent(UpstashVectorStoreComponent, LCVectorStoreComponent): @@ -29,7 +29,7 @@ class UpstashSearchComponent(UpstashVectorStoreComponent, LCVectorStoreComponent "options": ["Similarity", "MMR"], }, "input_value": {"display_name": "Input"}, - "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]}, + "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]}, "embedding": { "display_name": "Embedding", "input_types": ["Embeddings"], @@ -64,7 +64,7 @@ class UpstashSearchComponent(UpstashVectorStoreComponent, LCVectorStoreComponent index_token: Optional[str] = None, embedding: Optional[Embeddings] = None, number_of_results: int = 4, - ) -> List[Record]: + ) -> List[Data]: vector_store = super().build( embedding=embedding, text_key=text_key, diff --git a/src/backend/base/langflow/components/vectorsearch/VectaraSearch.py b/src/backend/base/langflow/components/vectorsearch/VectaraSearch.py index 459054f67..595f3b3b9 100644 --- a/src/backend/base/langflow/components/vectorsearch/VectaraSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/VectaraSearch.py @@ -5,7 +5,7 @@ from langchain_community.vectorstores.vectara import Vectara from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.components.vectorstores.Vectara import VectaraComponent from langflow.field_typing import Text -from langflow.schema import Record +from langflow.schema import Data class VectaraSearchComponent(VectaraComponent, LCVectorStoreComponent): @@ -49,7 +49,7 @@ class VectaraSearchComponent(VectaraComponent, LCVectorStoreComponent): vectara_corpus_id: str, vectara_api_key: str, number_of_results: int = 4, - ) -> List[Record]: + ) -> List[Data]: source = "Langflow" vector_store = Vectara( vectara_customer_id=vectara_customer_id, diff --git a/src/backend/base/langflow/components/vectorsearch/WeaviateSearch.py 
b/src/backend/base/langflow/components/vectorsearch/WeaviateSearch.py index b70dfa41d..f29363e57 100644 --- a/src/backend/base/langflow/components/vectorsearch/WeaviateSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/WeaviateSearch.py @@ -5,7 +5,7 @@ from langchain_core.embeddings import Embeddings from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.components.vectorstores.Weaviate import WeaviateVectorStoreComponent from langflow.field_typing import Text -from langflow.schema import Record +from langflow.schema import Data class WeaviateSearchVectorStore(WeaviateVectorStoreComponent, LCVectorStoreComponent): @@ -68,7 +68,7 @@ class WeaviateSearchVectorStore(WeaviateVectorStoreComponent, LCVectorStoreCompo text_key: str = "text", embedding: Optional[Embeddings] = None, attributes: Optional[list] = None, - ) -> List[Record]: + ) -> List[Data]: vector_store = super().build( url=url, api_key=api_key, diff --git a/src/backend/base/langflow/components/vectorsearch/pgvectorSearch.py b/src/backend/base/langflow/components/vectorsearch/pgvectorSearch.py index 304439ff4..6adc888a6 100644 --- a/src/backend/base/langflow/components/vectorsearch/pgvectorSearch.py +++ b/src/backend/base/langflow/components/vectorsearch/pgvectorSearch.py @@ -5,7 +5,7 @@ from langchain_core.embeddings import Embeddings from langflow.components.vectorstores.base.model import LCVectorStoreComponent from langflow.components.vectorstores.pgvector import PGVectorComponent from langflow.field_typing import Text -from langflow.schema import Record +from langflow.schema import Data class PGVectorSearchComponent(PGVectorComponent, LCVectorStoreComponent): @@ -48,7 +48,7 @@ class PGVectorSearchComponent(PGVectorComponent, LCVectorStoreComponent): pg_server_url: str, collection_name: str, number_of_results: int = 4, - ) -> List[Record]: + ) -> List[Data]: """ Builds the Vector Store or BaseRetriever object. 
diff --git a/src/backend/base/langflow/components/vectorstores/AstraDB.py b/src/backend/base/langflow/components/vectorstores/AstraDB.py index cd4a06aea..1a97297c8 100644 --- a/src/backend/base/langflow/components/vectorstores/AstraDB.py +++ b/src/backend/base/langflow/components/vectorstores/AstraDB.py @@ -1,9 +1,10 @@ from typing import List, Optional, Union +from langchain_core.retrievers import BaseRetriever + from langflow.custom import CustomComponent from langflow.field_typing import Embeddings, VectorStore -from langflow.schema import Record -from langchain_core.retrievers import BaseRetriever +from langflow.schema import Data class AstraDBVectorStoreComponent(CustomComponent): @@ -16,7 +17,7 @@ class AstraDBVectorStoreComponent(CustomComponent): return { "inputs": { "display_name": "Inputs", - "info": "Optional list of records to be processed and stored in the vector store.", + "info": "Optional list of Data objects to be processed and stored in the vector store.", }, "embedding": {"display_name": "Embedding", "info": "Embedding to use"}, "collection_name": { @@ -44,7 +45,7 @@ class AstraDBVectorStoreComponent(CustomComponent): }, "batch_size": { "display_name": "Batch Size", - "info": "Optional number of records to process in a single batch.", + "info": "Optional number of Data objects to process in a single batch.", "advanced": True, }, "bulk_insert_batch_concurrency": { @@ -54,7 +55,7 @@ class AstraDBVectorStoreComponent(CustomComponent): }, "bulk_insert_overwrite_concurrency": { "display_name": "Bulk Insert Overwrite Concurrency", - "info": "Optional concurrency level for bulk insert operations that overwrite existing records.", + "info": "Optional concurrency level for bulk insert operations that overwrite existing data.", "advanced": True, }, "bulk_delete_concurrency": { @@ -96,7 +97,7 @@ class AstraDBVectorStoreComponent(CustomComponent): token: str, api_endpoint: str, collection_name: str, - inputs: Optional[List[Record]] = None, + inputs: Optional[List[Data]] =
None, namespace: Optional[str] = None, metric: Optional[str] = None, batch_size: Optional[int] = None, diff --git a/src/backend/base/langflow/components/vectorstores/Cassandra.py b/src/backend/base/langflow/components/vectorstores/Cassandra.py index 34c21ccd0..b5fb76dc1 100644 --- a/src/backend/base/langflow/components/vectorstores/Cassandra.py +++ b/src/backend/base/langflow/components/vectorstores/Cassandra.py @@ -1,10 +1,11 @@ from typing import Any, List, Optional, Tuple -from langchain_community.vectorstores import Cassandra + from langchain_community.utilities.cassandra import SetupMode +from langchain_community.vectorstores import Cassandra from langflow.custom import CustomComponent from langflow.field_typing import Embeddings, VectorStore -from langflow.schema import Record +from langflow.schema import Data class CassandraVectorStoreComponent(CustomComponent): @@ -17,7 +18,7 @@ class CassandraVectorStoreComponent(CustomComponent): return { "inputs": { "display_name": "Inputs", - "info": "Optional list of records to be processed and stored in the vector store.", + "info": "Optional list of Data objects to be processed and stored in the vector store.", "embedding": {"display_name": "Embedding", "info": "Embedding to use"}, "token": { @@ -45,7 +46,7 @@ class CassandraVectorStoreComponent(CustomComponent): }, "batch_size": { "display_name": "Batch Size", - "info": "Optional number of records to process in a single batch.", + "info": "Optional number of Data objects to process in a single batch.", "advanced": True, }, "body_index_options": { @@ -66,7 +67,7 @@ class CassandraVectorStoreComponent(CustomComponent): embedding: Embeddings, token: str, database_id: str, - inputs: Optional[List[Record]] = None, + inputs: Optional[List[Data]] = None, keyspace: Optional[str] = None, table_name: str = "", ttl_seconds: Optional[int] = None, diff --git a/src/backend/base/langflow/components/vectorstores/Chroma.py b/src/backend/base/langflow/components/vectorstores/Chroma.py index
6001b119c..8aac051ca 100644 --- a/src/backend/base/langflow/components/vectorstores/Chroma.py +++ b/src/backend/base/langflow/components/vectorstores/Chroma.py @@ -8,9 +8,9 @@ from langchain_core.embeddings import Embeddings from langchain_core.retrievers import BaseRetriever from langchain_core.vectorstores import VectorStore -from langflow.base.vectorstores.utils import chroma_collection_to_records +from langflow.base.vectorstores.utils import chroma_collection_to_data from langflow.custom import CustomComponent -from langflow.schema import Record +from langflow.schema import Data class ChromaComponent(CustomComponent): @@ -34,7 +34,7 @@ class ChromaComponent(CustomComponent): "collection_name": {"display_name": "Collection Name", "value": "langflow"}, "index_directory": {"display_name": "Persist Directory"}, "code": {"advanced": True, "display_name": "Code"}, - "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]}, + "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]}, "embedding": {"display_name": "Embedding"}, "chroma_server_cors_allow_origins": { "display_name": "Server CORS Allow Origins", @@ -63,7 +63,7 @@ class ChromaComponent(CustomComponent): embedding: Embeddings, chroma_server_ssl_enabled: bool, index_directory: Optional[str] = None, - inputs: Optional[List[Record]] = None, + inputs: Optional[List[Data]] = None, chroma_server_cors_allow_origins: List[str] = [], chroma_server_host: Optional[str] = None, chroma_server_http_port: Optional[int] = None, @@ -78,7 +78,7 @@ class ChromaComponent(CustomComponent): - embedding (Embeddings): The embeddings to use for the Vector Store. - chroma_server_ssl_enabled (bool): Whether to enable SSL for the Chroma server. - index_directory (Optional[str]): The directory to persist the Vector Store to. - - inputs (Optional[List[Record]]): The input records to use for the Vector Store. + - inputs (Optional[List[Data]]): The input data to use for the Vector Store. 
       - chroma_server_cors_allow_origins (List[str]): The CORS allow origins for the Chroma server.
       - chroma_server_host (Optional[str]): The host for the Chroma server.
       - chroma_server_http_port (Optional[int]): The HTTP port for the Chroma server.
@@ -113,23 +113,23 @@ class ChromaComponent(CustomComponent):
             collection_name=collection_name,
         )
         if allow_duplicates:
-            stored_records = []
+            stored_data = []
         else:
-            stored_records = chroma_collection_to_records(chroma.get())
+            stored_data = chroma_collection_to_data(chroma.get())
         _stored_documents_without_id = []
-        for record in deepcopy(stored_records):
-            del record.id
-            _stored_documents_without_id.append(record)
+        for value in deepcopy(stored_data):
+            del value.id
+            _stored_documents_without_id.append(value)
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 if _input not in _stored_documents_without_id:
                     documents.append(_input.to_lc_document())
             else:
-                raise ValueError("Inputs must be a Record objects.")
+                raise ValueError("Inputs must be Data objects.")
         if documents and embedding is not None:
             chroma.add_documents(documents)
-        self.status = stored_records
+        self.status = stored_data
         return chroma
diff --git a/src/backend/base/langflow/components/vectorstores/Couchbase.py b/src/backend/base/langflow/components/vectorstores/Couchbase.py
index ffc17f1b6..fe09a3f3b 100644
--- a/src/backend/base/langflow/components/vectorstores/Couchbase.py
+++ b/src/backend/base/langflow/components/vectorstores/Couchbase.py
@@ -5,7 +5,7 @@ from langchain_core.retrievers import BaseRetriever
 
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings, VectorStore
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class CouchbaseComponent(CustomComponent):
@@ -25,7 +25,7 @@ class CouchbaseComponent(CustomComponent):
     def build_config(self):
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "couchbase_connection_string": {"display_name": "Couchbase Cluster connection string", "required": True},
             "couchbase_username": {"display_name": "Couchbase username", "required": True},
@@ -39,7 +39,7 @@ class CouchbaseComponent(CustomComponent):
     def build(
         self,
         embedding: Embeddings,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         bucket_name: str = "",
         scope_name: str = "",
         collection_name: str = "",
@@ -68,7 +68,7 @@ class CouchbaseComponent(CustomComponent):
             raise ValueError(f"Failed to connect to Couchbase: {e}")
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/FAISS.py b/src/backend/base/langflow/components/vectorstores/FAISS.py
index 3efd5b722..0dd59a576 100644
--- a/src/backend/base/langflow/components/vectorstores/FAISS.py
+++ b/src/backend/base/langflow/components/vectorstores/FAISS.py
@@ -6,7 +6,7 @@ from langchain_core.vectorstores import VectorStore
 
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class FAISSComponent(CustomComponent):
@@ -16,7 +16,7 @@
     def build_config(self):
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "folder_path": {
                 "display_name": "Folder Path",
@@ -28,13 +28,13 @@
     def build(
         self,
         embedding: Embeddings,
-        inputs: List[Record],
+        inputs: List[Data],
         folder_path: str,
         index_name: str = "langflow_index",
     ) -> Union[VectorStore, FAISS, BaseRetriever]:
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/MongoDBAtlasVector.py b/src/backend/base/langflow/components/vectorstores/MongoDBAtlasVector.py
index 61c4933e9..c69931e25 100644
--- a/src/backend/base/langflow/components/vectorstores/MongoDBAtlasVector.py
+++ b/src/backend/base/langflow/components/vectorstores/MongoDBAtlasVector.py
@@ -4,7 +4,7 @@ from langchain_community.vectorstores.mongodb_atlas import MongoDBAtlasVectorSea
 
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class MongoDBAtlasComponent(CustomComponent):
@@ -14,7 +14,7 @@
     def build_config(self):
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "collection_name": {"display_name": "Collection Name"},
             "db_name": {"display_name": "Database Name"},
@@ -25,7 +25,7 @@
     def build(
         self,
         embedding: Embeddings,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         collection_name: str = "",
         db_name: str = "",
         index_name: str = "",
@@ -42,7 +42,7 @@
             raise ValueError(f"Failed to connect to MongoDB Atlas: {e}")
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/Pinecone.py b/src/backend/base/langflow/components/vectorstores/Pinecone.py
index 135dd7501..1fa0b937f 100644
--- a/src/backend/base/langflow/components/vectorstores/Pinecone.py
+++ b/src/backend/base/langflow/components/vectorstores/Pinecone.py
@@ -8,7 +8,7 @@ from langchain_pinecone.vectorstores import PineconeVectorStore
 
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class PineconeComponent(CustomComponent):
@@ -21,7 +21,7 @@
         distance_options = [e.value.title().replace("_", " ") for e in DistanceStrategy]
         distance_value = distance_options[0]
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "index_name": {"display_name": "Index Name"},
             "namespace": {"display_name": "Namespace"},
@@ -110,7 +110,7 @@
         self,
         embedding: Embeddings,
         distance_strategy: str,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         text_key: str = "text",
         pool_threads: int = 4,
         index_name: Optional[str] = None,
@@ -124,7 +124,7 @@
             raise ValueError("Index Name is required.")
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/Qdrant.py b/src/backend/base/langflow/components/vectorstores/Qdrant.py
index 6c1bdbcb6..7b63cebb6 100644
--- a/src/backend/base/langflow/components/vectorstores/Qdrant.py
+++ b/src/backend/base/langflow/components/vectorstores/Qdrant.py
@@ -6,7 +6,7 @@ from langchain_core.vectorstores import VectorStore
 
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class QdrantComponent(CustomComponent):
@@ -16,7 +16,7 @@
     def build_config(self):
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "api_key": {"display_name": "API Key", "password": True, "advanced": True},
             "collection_name": {"display_name": "Collection Name"},
@@ -45,7 +45,7 @@
         self,
         embedding: Embeddings,
         collection_name: str,
-        inputs: Optional[Record] = None,
+        inputs: Optional[Data] = None,
         api_key: Optional[str] = None,
         content_payload_key: str = "page_content",
         distance_func: str = "Cosine",
@@ -63,7 +63,7 @@
     ) -> Union[VectorStore, Qdrant, BaseRetriever]:
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/Redis.py b/src/backend/base/langflow/components/vectorstores/Redis.py
index c35ec018e..d519b3633 100644
--- a/src/backend/base/langflow/components/vectorstores/Redis.py
+++ b/src/backend/base/langflow/components/vectorstores/Redis.py
@@ -6,7 +6,7 @@ from langchain_core.retrievers import BaseRetriever
 from langchain_core.vectorstores import VectorStore
 
 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class RedisComponent(CustomComponent):
@@ -28,7 +28,7 @@
         return {
             "index_name": {"display_name": "Index Name", "value": "your_index"},
             "code": {"show": False, "display_name": "Code"},
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "schema": {"display_name": "Schema", "file_types": [".yaml"]},
             "redis_server_url": {
@@ -44,7 +44,7 @@
         redis_server_url: str,
         redis_index_name: str,
         schema: Optional[str] = None,
-        inputs: Optional[Record] = None,
+        inputs: Optional[Data] = None,
     ) -> Union[VectorStore, BaseRetriever]:
         """
         Builds the Vector Store or BaseRetriever object.
@@ -60,7 +60,7 @@
         """
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/SupabaseVectorStore.py b/src/backend/base/langflow/components/vectorstores/SupabaseVectorStore.py
index e7c847f2b..a9ca62452 100644
--- a/src/backend/base/langflow/components/vectorstores/SupabaseVectorStore.py
+++ b/src/backend/base/langflow/components/vectorstores/SupabaseVectorStore.py
@@ -7,7 +7,7 @@ from supabase.client import Client, create_client
 
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class SupabaseComponent(CustomComponent):
@@ -16,7 +16,7 @@
     def build_config(self):
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "query_name": {"display_name": "Query Name"},
             "supabase_service_key": {"display_name": "Supabase Service Key"},
@@ -27,7 +27,7 @@
     def build(
         self,
         embedding: Embeddings,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         query_name: str = "",
         supabase_service_key: str = "",
         supabase_url: str = "",
@@ -36,7 +36,7 @@
         supabase: Client = create_client(supabase_url, supabase_key=supabase_service_key)
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/Upstash.py b/src/backend/base/langflow/components/vectorstores/Upstash.py
index 2695abecc..6720d9b9e 100644
--- a/src/backend/base/langflow/components/vectorstores/Upstash.py
+++ b/src/backend/base/langflow/components/vectorstores/Upstash.py
@@ -6,7 +6,7 @@ from langchain_core.retrievers import BaseRetriever
 from langchain_core.vectorstores import VectorStore
 
 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class UpstashVectorStoreComponent(CustomComponent):
@@ -25,7 +25,7 @@
             - dict: A dictionary containing the configuration options for the component.
         """
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {
                 "display_name": "Embedding",
                 "input_types": ["Embeddings"],
@@ -48,7 +48,7 @@
     def build(
         self,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         text_key: str = "text",
         index_url: Optional[str] = None,
         index_token: Optional[str] = None,
@@ -56,7 +56,7 @@
     ) -> Union[VectorStore, BaseRetriever]:
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/Vectara.py b/src/backend/base/langflow/components/vectorstores/Vectara.py
index 5a51b5a1b..f5d5253fa 100644
--- a/src/backend/base/langflow/components/vectorstores/Vectara.py
+++ b/src/backend/base/langflow/components/vectorstores/Vectara.py
@@ -9,7 +9,7 @@ from langchain_core.vectorstores import VectorStore
 
 from langflow.custom import CustomComponent
 from langflow.field_typing import BaseRetriever
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class VectaraComponent(CustomComponent):
@@ -30,7 +30,7 @@
             },
             "inputs": {
                 "display_name": "Input",
-                "input_types": ["Document", "Record"],
+                "input_types": ["Document", "Data"],
                 "info": "If provided, will be upserted to corpus (optional)",
             },
             "files_url": {
@@ -45,13 +45,13 @@
         vectara_corpus_id: str,
         vectara_api_key: str,
         files_url: Optional[List[str]] = None,
-        inputs: Optional[Record] = None,
+        inputs: Optional[Data] = None,
     ) -> Union[VectorStore, BaseRetriever]:
         source = "Langflow"
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/Weaviate.py b/src/backend/base/langflow/components/vectorstores/Weaviate.py
index fafa2f390..c77ccb2a8 100644
--- a/src/backend/base/langflow/components/vectorstores/Weaviate.py
+++ b/src/backend/base/langflow/components/vectorstores/Weaviate.py
@@ -8,7 +8,7 @@ from langchain_core.retrievers import BaseRetriever
 from langchain_core.vectorstores import VectorStore
 
 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class WeaviateVectorStoreComponent(CustomComponent):
@@ -32,7 +32,7 @@
                 "advanced": True,
                 "value": "text",
             },
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "attributes": {
                 "display_name": "Attributes",
@@ -57,7 +57,7 @@ class WeaviateVectorStoreComponent(CustomComponent):
         api_key: Optional[str] = None,
         text_key: str = "text",
         embedding: Optional[Embeddings] = None,
-        inputs: Optional[Record] = None,
+        inputs: Optional[Data] = None,
        attributes: Optional[list] = None,
     ) -> Union[VectorStore, BaseRetriever]:
         if api_key:
@@ -84,7 +84,7 @@ class WeaviateVectorStoreComponent(CustomComponent):
             raise ValueError("Index name is required")
         documents: list[Document] = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             elif isinstance(_input, Document):
                 documents.append(_input)
diff --git a/src/backend/base/langflow/components/vectorstores/base/model.py b/src/backend/base/langflow/components/vectorstores/base/model.py
index 18a37c9cf..1ea5e330b 100644
--- a/src/backend/base/langflow/components/vectorstores/base/model.py
+++ b/src/backend/base/langflow/components/vectorstores/base/model.py
@@ -6,8 +6,8 @@ from langchain_core.vectorstores import VectorStore
 
 from langflow.custom import CustomComponent
 from langflow.field_typing import Text
-from langflow.helpers.record import docs_to_records
-from langflow.schema import Record
+from langflow.helpers.record import docs_to_data
+from langflow.schema import Data
 
 
 class LCVectorStoreComponent(CustomComponent):
@@ -21,9 +21,9 @@
         vector_store: Union[VectorStore, BaseRetriever],
         k=10,
         **kwargs,
-    ) -> List[Record]:
+    ) -> List[Data]:
         """
-        Search for records in the vector store based on the input value and search type.
+        Search for data in the vector store based on the input value and search type.
 
         Args:
             input_value (Text): The input value to search for.
@@ -31,7 +31,7 @@
             vector_store (VectorStore): The vector store to search in.
 
         Returns:
-            List[Record]: A list of records matching the search criteria.
+            List[Data]: A list of data matching the search criteria.
 
         Raises:
             ValueError: If invalid inputs are provided.
@@ -42,6 +42,6 @@
             docs = vector_store.search(query=input_value, search_type=search_type.lower(), k=k, **kwargs)
         else:
             raise ValueError("Invalid inputs provided.")
-        records = docs_to_records(docs)
-        self.status = records
-        return records
+        data = docs_to_data(docs)
+        self.status = data
+        return data
diff --git a/src/backend/base/langflow/components/vectorstores/pgvector.py b/src/backend/base/langflow/components/vectorstores/pgvector.py
index 3ea7b6eb6..36bb6f505 100644
--- a/src/backend/base/langflow/components/vectorstores/pgvector.py
+++ b/src/backend/base/langflow/components/vectorstores/pgvector.py
@@ -6,7 +6,7 @@ from langchain_core.retrievers import BaseRetriever
 from langchain_core.vectorstores import VectorStore
 
 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class PGVectorComponent(CustomComponent):
@@ -27,7 +27,7 @@
         """
         return {
             "code": {"show": False},
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "pg_server_url": {
                 "display_name": "PostgreSQL Server Connection String",
@@ -41,7 +41,7 @@
         embedding: Embeddings,
         pg_server_url: str,
         collection_name: str,
-        inputs: Optional[Record] = None,
+        inputs: Optional[Data] = None,
     ) -> Union[VectorStore, BaseRetriever]:
         """
         Builds the Vector Store or BaseRetriever object.
@@ -58,7 +58,7 @@ class PGVectorComponent(CustomComponent):
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
diff --git a/src/backend/base/langflow/custom/custom_component/component.py b/src/backend/base/langflow/custom/custom_component/component.py
index 6d2fe36e5..5d0af85c3 100644
--- a/src/backend/base/langflow/custom/custom_component/component.py
+++ b/src/backend/base/langflow/custom/custom_component/component.py
@@ -18,7 +18,7 @@ from loguru import logger
 from pydantic import BaseModel
 
 from langflow.schema.artifact import get_artifact_type, post_process_raw
-from langflow.schema.record import Record
+from langflow.schema.data import Data
 from langflow.template.field.base import UNDEFINED, Input, Output
 
 from .custom_component import CustomComponent
@@ -91,7 +91,7 @@
                 _results[output.name] = result
                 output.value = result
                 custom_repr = self.custom_repr()
-                if custom_repr is None and isinstance(result, (dict, Record, str)):
+                if custom_repr is None and isinstance(result, (dict, Data, str)):
                     custom_repr = result
                 if not isinstance(custom_repr, str):
                     custom_repr = str(custom_repr)
@@ -120,7 +120,7 @@
                 logger.error(f"Error while dumping build_result: {e}")
 
         custom_repr = str(self._results)
-        if custom_repr is None and isinstance(self._results, (dict, Record, str)):
+        if custom_repr is None and isinstance(self._results, (dict, Data, str)):
             custom_repr = self._results
         if not isinstance(custom_repr, str):
             custom_repr = str(custom_repr)
diff --git a/src/backend/base/langflow/custom/custom_component/custom_component.py b/src/backend/base/langflow/custom/custom_component/custom_component.py
index 548ad1dd7..e63bc7088 100644
--- a/src/backend/base/langflow/custom/custom_component/custom_component.py
+++ b/src/backend/base/langflow/custom/custom_component/custom_component.py
@@ -9,7 +9,7 @@ from pydantic import BaseModel
 
 from langflow.custom.custom_component.base_component import BaseComponent
 from langflow.helpers.flow import list_flows, load_flow, run_flow
-from langflow.schema import Record
+from langflow.schema import Data
 from langflow.schema.artifact import get_artifact_type
 from langflow.schema.dotdict import dotdict
 from langflow.schema.message import Message
@@ -28,7 +28,7 @@ if TYPE_CHECKING:
     from langflow.services.storage.service import StorageService
 
 
-LoggableType = Union[str, dict, list, int, float, bool, None, Record, Message]
+LoggableType = Union[str, dict, list, int, float, bool, None, Data, Message]
 
 
 class CustomComponent(BaseComponent):
@@ -80,7 +80,7 @@ class CustomComponent(BaseComponent):
     user_id: Optional[Union[UUID, str]] = None
     status: Optional[Any] = None
     """The status of the component. This is displayed on the frontend. Defaults to None."""
-    _flows_records: Optional[List[Record]] = None
+    _flows_data: Optional[List[Data]] = None
     _logs: Optional[List[Log]] = []
 
     def update_state(self, name: str, value: Any):
@@ -166,7 +166,7 @@ class CustomComponent(BaseComponent):
             return yaml.dump(self.repr_value)
         if isinstance(self.repr_value, str):
             return self.repr_value
-        if isinstance(self.repr_value, BaseModel) and not isinstance(self.repr_value, Record):
+        if isinstance(self.repr_value, BaseModel) and not isinstance(self.repr_value, Data):
             return str(self.repr_value)
         return self.repr_value
@@ -198,9 +198,9 @@ class CustomComponent(BaseComponent):
         """
         return self.get_code_tree(self.code or "")
 
-    def to_records(self, data: Any, keys: Optional[List[str]] = None, silent_errors: bool = False) -> List[Record]:
+    def to_data(self, data: Any, keys: Optional[List[str]] = None, silent_errors: bool = False) -> List[Data]:
         """
-        Converts input data into a list of Record objects.
+        Converts input data into a list of Data objects.
 
         Args:
             data (Any): The input data to be converted. It can be a single item or a sequence of items.
@@ -211,7 +211,7 @@
                 Defaults to None, in which case the default keys "text" and "data" are used.
 
         Returns:
-            List[Record]: A list of Record objects.
+            List[Data]: A list of Data objects.
 
         Raises:
             ValueError: If the input data is not of a valid type or if the specified keys are not found in the data.
@@ -219,7 +219,7 @@
         """
         if not keys:
             keys = []
-        records = []
+        data_objects = []
         if not isinstance(data, Sequence):
             data = [data]
         for item in data:
@@ -245,28 +245,28 @@
             else:
                 raise ValueError(f"Invalid data type: {type(item)}")
 
-            records.append(Record(data=data_dict))
+            data_objects.append(Data(data=data_dict))
 
-        return records
+        return data_objects
 
-    def create_references_from_records(self, records: List[Record], include_data: bool = False) -> str:
+    def create_references_from_data(self, data: List[Data], include_data: bool = False) -> str:
         """
-        Create references from a list of records.
+        Create references from a list of data.
 
         Args:
-            records (List[dict]): A list of records, where each record is a dictionary.
+            data (List[Data]): A list of Data objects.
             include_data (bool, optional): Whether to include data in the references. Defaults to False.
 
         Returns:
             str: A string containing the references in markdown format.
         """
-        if not records:
+        if not data:
             return ""
         markdown_string = "---\n"
-        for record in records:
-            markdown_string += f"- Text: {record.get_text()}"
+        for value in data:
+            markdown_string += f"- Text: {value.get_text()}"
             if include_data:
-                markdown_string += f" Data: {record.data}"
+                markdown_string += f" Data: {value.data}"
             markdown_string += "\n"
         return markdown_string
@@ -454,7 +454,7 @@
     ) -> Any:
         return await run_flow(inputs=inputs, flow_id=flow_id, flow_name=flow_name, tweaks=tweaks, user_id=self._user_id)
 
-    def list_flows(self) -> List[Record]:
+    def list_flows(self) -> List[Data]:
         if not self._user_id:
             raise ValueError("Session is invalid")
         try:
diff --git a/src/backend/base/langflow/field_typing/prompt.py b/src/backend/base/langflow/field_typing/prompt.py
index 9b73076fe..d6f7c8e40 100644
--- a/src/backend/base/langflow/field_typing/prompt.py
+++ b/src/backend/base/langflow/field_typing/prompt.py
@@ -4,10 +4,10 @@ from langchain_core.prompts import BaseChatPromptTemplate, ChatPromptTemplate, P
 
 from langflow.base.prompts.utils import dict_values_to_string
 from langflow.schema.message import Message
-from langflow.schema.record import Record
+from langflow.schema.data import Data
 
 
-class Prompt(Record):
+class Prompt(Data):
     def load_lc_prompt(self):
         if "prompt" not in self:
             raise ValueError("Prompt is required.")
diff --git a/src/backend/base/langflow/graph/graph/base.py b/src/backend/base/langflow/graph/graph/base.py
index 86dd2a08c..2ba9ae0d4 100644
--- a/src/backend/base/langflow/graph/graph/base.py
+++ b/src/backend/base/langflow/graph/graph/base.py
@@ -15,7 +15,7 @@ from langflow.graph.graph.utils import process_flow
 from langflow.graph.schema import InterfaceComponentTypes, RunOutputs
 from langflow.graph.vertex.base import Vertex
 from langflow.graph.vertex.types import InterfaceVertex, StateVertex
-from langflow.schema import Record
+from langflow.schema import Data
 from langflow.schema.schema import INPUT_FIELD_NAME, InputType
 from langflow.services.cache.utils import CacheMiss
 from langflow.services.chat.service import ChatService
@@ -81,7 +81,7 @@ class Graph:
         self.define_vertices_lists()
 
         self.state_manager = GraphStateManager()
 
-    def get_state(self, name: str) -> Optional[Record]:
+    def get_state(self, name: str) -> Optional[Data]:
         """
         Returns the state of the graph with the given name.
@@ -89,17 +89,17 @@
             name (str): The name of the state.
 
         Returns:
-            Optional[Record]: The state record, or None if the state does not exist.
+            Optional[Data]: The state record, or None if the state does not exist.
         """
         return self.state_manager.get_state(name, run_id=self._run_id)
 
-    def update_state(self, name: str, record: Union[str, Record], caller: Optional[str] = None) -> None:
+    def update_state(self, name: str, record: Union[str, Data], caller: Optional[str] = None) -> None:
         """
         Updates the state of the graph with the given name.
 
         Args:
             name (str): The name of the state.
-            record (Union[str, Record]): The new state record.
+            record (Union[str, Data]): The new state record.
             caller (Optional[str], optional): The ID of the vertex that is updating the state. Defaults to None.
         """
         if caller:
@@ -154,13 +154,13 @@
         """
         self.activated_vertices = []
 
-    def append_state(self, name: str, record: Union[str, Record], caller: Optional[str] = None) -> None:
+    def append_state(self, name: str, record: Union[str, Data], caller: Optional[str] = None) -> None:
         """
         Appends the state of the graph with the given name.
 
         Args:
             name (str): The name of the state.
-            record (Union[str, Record]): The state record to append.
+            record (Union[str, Data]): The state record to append.
             caller (Optional[str], optional): The ID of the vertex that is updating the state. Defaults to None.
         """
         if caller:
diff --git a/src/backend/base/langflow/graph/vertex/base.py b/src/backend/base/langflow/graph/vertex/base.py
index cf825202b..bff37199e 100644
--- a/src/backend/base/langflow/graph/vertex/base.py
+++ b/src/backend/base/langflow/graph/vertex/base.py
@@ -592,7 +592,7 @@ class Vertex:
         for vertex in vertices:
             result = await vertex.get_result(self)
             # Weird check to see if the params[key] is a list
-            # because sometimes it is a Record and breaks the code
+            # because sometimes it is a Data and breaks the code
             if not isinstance(self.params[key], list):
                 self.params[key] = [self.params[key]]
diff --git a/src/backend/base/langflow/graph/vertex/types.py b/src/backend/base/langflow/graph/vertex/types.py
index 674247af5..6dbf9f2db 100644
--- a/src/backend/base/langflow/graph/vertex/types.py
+++ b/src/backend/base/langflow/graph/vertex/types.py
@@ -8,7 +8,7 @@ from loguru import logger
 
 from langflow.graph.schema import CHAT_COMPONENTS, RECORDS_COMPONENTS, InterfaceComponentTypes, ResultData
 from langflow.graph.utils import UnbuiltObject, serialize_field
 from langflow.graph.vertex.base import Vertex
-from langflow.schema import Record
+from langflow.schema import Data
 from langflow.schema.artifact import ArtifactType
 from langflow.schema.schema import INPUT_FIELD_NAME, build_logs_from_artifacts
 from langflow.services.monitor.utils import log_transaction, log_vertex_build
@@ -167,8 +167,8 @@
             # dump as a yaml string
             if isinstance(self.artifacts, dict):
                 _artifacts = [self.artifacts]
-            elif hasattr(self.artifacts, "records"):
-                _artifacts = self.artifacts.records
+            elif hasattr(self.artifacts, "data"):
+                _artifacts = self.artifacts.data
             else:
                 _artifacts = self.artifacts
             artifacts = []
@@ -191,7 +191,7 @@
        object using the `from_message` method. If `_built_object` is not an instance
        of `UnbuiltObject`, it checks the type of `_built_object` and performs specific
        operations accordingly.
        If `_built_object` is a dictionary, it converts it into a
-       code block. If `_built_object` is an instance of `Record`, it assigns the `text`
+       code block. If `_built_object` is an instance of `Data`, it assigns the `text`
        attribute to the `message` variable. If `message` is an instance of `AsyncIterator`
        or `Iterator`, it builds a stream URL and sets `message` to an empty string. If
        `_built_object` is not a string, it converts it to a string. If `message` is a
@@ -229,7 +229,7 @@
             # Turn the dict into a pleasing to
             # read JSON inside a code block
             message = dict_to_codeblock(text_output)
-        elif isinstance(text_output, Record):
+        elif isinstance(text_output, Data):
             message = text_output.text
         elif isinstance(message, (AsyncIterator, Iterator)):
             stream_url = self.build_stream_url()
@@ -263,11 +263,11 @@
         """
         Process the record component of the vertex.
 
-        If the built object is an instance of `Record`, it calls the `model_dump` method
+        If the built object is an instance of `Data`, it calls the `model_dump` method
         and assigns the result to the `artifacts` attribute.
 
         If the built object is a list, it iterates over each element and checks if it is
-        an instance of `Record`. If it is, it calls the `model_dump` method and appends
+        an instance of `Data`. If it is, it calls the `model_dump` method and appends
         the result to the `artifacts` list. If it is not, it raises a `ValueError` if the
         `ignore_errors` parameter is set to `False`, or logs an error message if it is
         set to `True`.
@@ -276,22 +276,22 @@
             The built object.
 
         Raises:
-            ValueError: If an element in the list is not an instance of `Record` and
+            ValueError: If an element in the list is not an instance of `Data` and
                 `ignore_errors` is set to `False`.
""" - if isinstance(self._built_object, Record): + if isinstance(self._built_object, Data): artifacts = [self._built_object.data] elif isinstance(self._built_object, list): artifacts = [] ignore_errors = self.params.get("ignore_errors", False) - for record in self._built_object: - if isinstance(record, Record): - artifacts.append(record.data) + for value in self._built_object: + if isinstance(value, Data): + artifacts.append(value.data) elif ignore_errors: - logger.error(f"Record expected, but got {record} of type {type(record)}") + logger.error(f"Data expected, but got {value} of type {type(value)}") else: - raise ValueError(f"Record expected, but got {record} of type {type(record)}") - self.artifacts = RecordOutputResponse(records=artifacts) + raise ValueError(f"Data expected, but got {value} of type {type(value)}") + self.artifacts = RecordOutputResponse(data=artifacts) return self._built_object async def _run(self, *args, **kwargs): @@ -302,7 +302,7 @@ class InterfaceVertex(ComponentVertex): message = self._process_record_component() if isinstance(self._built_object, (AsyncIterator, Iterator)): if self.params.get("return_record", False): - self._built_object = Record(text=message, data=self.artifacts) + self._built_object = Data(text=message, data=self.artifacts) else: self._built_object = message self._built_result = self._built_object @@ -336,7 +336,7 @@ class InterfaceVertex(ComponentVertex): type=ArtifactType.OBJECT.value, ).model_dump() self.params[INPUT_FIELD_NAME] = complete_message - self._built_object = Record(text=complete_message, data=self.artifacts) + self._built_object = Data(text=complete_message, data=self.artifacts) self._built_result = complete_message # Update artifacts with the message # and remove the stream_url diff --git a/src/backend/base/langflow/helpers/__init__.py b/src/backend/base/langflow/helpers/__init__.py index 38b460af2..cf3c63bb0 100644 --- a/src/backend/base/langflow/helpers/__init__.py +++ 
b/src/backend/base/langflow/helpers/__init__.py @@ -1,3 +1,3 @@ -from .record import docs_to_records, records_to_text, messages_to_text +from .record import data_to_text, docs_to_data, messages_to_text -__all__ = ["docs_to_records", "records_to_text", "messages_to_text"] +__all__ = ["docs_to_data", "data_to_text", "messages_to_text"] diff --git a/src/backend/base/langflow/helpers/flow.py b/src/backend/base/langflow/helpers/flow.py index 61674942a..39add66b2 100644 --- a/src/backend/base/langflow/helpers/flow.py +++ b/src/backend/base/langflow/helpers/flow.py @@ -6,7 +6,7 @@ from pydantic.v1 import BaseModel, Field, create_model from sqlmodel import Session, select from langflow.graph.schema import RunOutputs -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.schema import INPUT_FIELD_NAME from langflow.services.database.models.flow import Flow from langflow.services.deps import get_session, get_settings_service, session_scope @@ -22,7 +22,7 @@ INPUT_TYPE_MAP = { } -def list_flows(*, user_id: Optional[str] = None) -> List[Record]: +def list_flows(*, user_id: Optional[str] = None) -> List[Data]: if not user_id: raise ValueError("Session is invalid") try: @@ -31,8 +31,8 @@ def list_flows(*, user_id: Optional[str] = None) -> List[Record]: select(Flow).where(Flow.user_id == user_id).where(Flow.is_component == False) # noqa ).all() - flows_records = [flow.to_record() for flow in flows] - return flows_records + flows_data = [flow.to_record() for flow in flows] + return flows_data except Exception as e: raise ValueError(f"Error listing flows: {e}") @@ -142,7 +142,7 @@ async def flow_function({func_args}): tweaks = {{ {arg_mappings} }} from langflow.helpers.flow import run_flow from langchain_core.tools import ToolException - from langflow.base.flow_processing.utils import build_records_from_result_data, format_flow_output_records + from langflow.base.flow_processing.utils import build_data_from_result_data, format_flow_output_data 
try: run_outputs = await run_flow( tweaks={{key: {{'input_value': value}} for key, value in tweaks.items()}}, @@ -153,12 +153,12 @@ return [] run_output = run_outputs[0] - records = [] + data = [] if run_output is not None: for output in run_output.outputs: if output: - records.extend(build_records_from_result_data(output, get_final_results_only=True)) - return format_flow_output_records(records) + data.extend(build_data_from_result_data(output, get_final_results_only=True)) + return format_flow_output_data(data) except Exception as e: - raise ToolException(f'Error running flow: ' + e) + raise ToolException('Error running flow: ' + str(e)) """ @@ -170,13 +170,13 @@ def build_function_and_schema( - flow_record: Record, graph: "Graph", user_id: str | UUID | None + flow_record: Data, graph: "Graph", user_id: str | UUID | None ) -> Tuple[Callable[..., Awaitable[Any]], Type[BaseModel]]: """ Builds a dynamic function and schema for a given flow. Args: - flow_record (Record): The flow record containing information about the flow. + flow_record (Data): The flow record containing information about the flow. graph (Graph): The graph representing the flow. Returns: @@ -197,7 +197,7 @@ def get_flow_inputs(graph: "Graph") -> List["Vertex"]: graph (Graph): The graph object representing the flow. Returns: - List[Record]: A list of input records, where each record contains the ID, name, and description of the input vertex. + List[Data]: A list of input data, where each item contains the ID, name, and description of the input vertex.
""" inputs = [] for vertex in graph.vertices: diff --git a/src/backend/base/langflow/helpers/record.py b/src/backend/base/langflow/helpers/record.py index 88d0bcd13..7acb6e4e0 100644 --- a/src/backend/base/langflow/helpers/record.py +++ b/src/backend/base/langflow/helpers/record.py @@ -2,11 +2,11 @@ from typing import Union from langchain_core.documents import Document -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.message import Message -def docs_to_records(documents: list[Document]) -> list[Record]: +def docs_to_data(documents: list[Document]) -> list[Data]: """ Converts a list of Documents to a list of Records. @@ -14,33 +14,33 @@ def docs_to_records(documents: list[Document]) -> list[Record]: documents (list[Document]): The list of Documents to convert. Returns: - list[Record]: The converted list of Records. + list[Data]: The converted list of Records. """ - return [Record.from_document(document) for document in documents] + return [Data.from_document(document) for document in documents] -def records_to_text(template: str, records: Union[Record, list[Record]]) -> str: +def data_to_text(template: str, data: Union[Data, list[Data]]) -> str: """ Converts a list of Records to a list of texts. Args: - records (list[Record]): The list of Records to convert. + data (list[Data]): The list of Records to convert. Returns: list[str]: The converted list of texts. 
""" - if isinstance(records, (Record)): - records = [records] + if isinstance(data, (Data)): + data = [data] # Check if there are any format strings in the template - _records = [] - for record in records: + _data = [] + for value in data: # If it is not a record, create one with the key "text" - if not isinstance(record, Record): - record = Record(text=record) - _records.append(record) + if not isinstance(value, Data): + value = Data(text=value) + _data.append(value) - formated_records = [template.format(data=record.data, **record.data) for record in _records] - return "\n".join(formated_records) + formated_data = [template.format(data=value.data, **value.data) for value in _data] + return "\n".join(formated_data) def messages_to_text(template: str, messages: Union[Message, list[Message]]) -> str: diff --git a/src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting (Hello, world!).json b/src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting (Hello, world!).json index 39fa12976..b7ad96ea8 100644 --- a/src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting (Hello, world!).json +++ b/src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting (Hello, world!).json @@ -47,7 +47,7 @@ "id": "OpenAIModel-k39HS", "inputTypes": [ "Text", - "Record", + "Data", "Prompt" ], "type": "str" @@ -60,7 +60,7 @@ "stroke": "#555" }, "target": "OpenAIModel-k39HS", - "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-k39HSœ, œinputTypesœ: [œTextœ, œRecordœ, œPromptœ], œtypeœ: œstrœ}" + "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-k39HSœ, œinputTypesœ: [œTextœ, œDataœ, œPromptœ], œtypeœ: œstrœ}" }, { "className": "stroke-gray-900 stroke-connection", @@ -321,7 +321,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom 
langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Record\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" + "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import 
BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Data\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" }, "input_value": { "advanced": false, @@ -332,7 +332,7 @@ "info": "", "input_types": [ "Text", - "Record", + "Data", "Prompt" ], "list": false, @@ -618,7 +618,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import 
Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\"),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Record Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Record to Text. 
If left empty, it will be dynamically set to the Record's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\", advanced=True),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -645,11 +645,11 @@ }, "record_template": { "advanced": true, - "display_name": "Record Template", + "display_name": "Data Template", "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", "input_types": [ "Text" ], @@ -692,7 +692,7 @@ "value": "Machine" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -781,17 +781,6 @@ "icon": "ChatInput", "output_types": [], "outputs": [ - { - "cache": true, - "display_name": "Text", - "method": "text_response", - "name": "text", - "selected": "Text", - "types": [ - "Text" - ], - "value": "__UNDEFINED__" - }, { "cache": true, "display_name": "Message", @@ -822,7 +811,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n input_types=[],\n info=\"Message to be passed as input.\",\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"User\",\n info=\"Type of sender.\",\n advanced=True,\n ),\n StrInput(name=\"sender_name\", type=str, display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"User\"),\n StrInput(\n name=\"session_id\", type=str, display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n 
text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, (Message, str)):\n self.store_message(message)\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.inputs import DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n input_types=[],\n info=\"Message to be passed as input.\",\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"User\",\n info=\"Type of sender.\",\n advanced=True,\n ),\n StrInput(\n name=\"sender_name\",\n type=str,\n display_name=\"Sender Name\",\n info=\"Name of the sender.\",\n value=\"User\",\n advanced=True,\n ),\n StrInput(\n name=\"session_id\", type=str, display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True\n ),\n ]\n outputs = [\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, (Message, str)):\n self.store_message(message)\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -871,7 +860,7 @@ "value": "User" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], diff --git a/src/backend/base/langflow/initial_setup/starter_projects/Langflow Blog Writter.json b/src/backend/base/langflow/initial_setup/starter_projects/Langflow Blog 
Writter.json index ddf9d38b1..6720aee0d 100644 --- a/src/backend/base/langflow/initial_setup/starter_projects/Langflow Blog Writter.json +++ b/src/backend/base/langflow/initial_setup/starter_projects/Langflow Blog Writter.json @@ -8,9 +8,7 @@ "dataType": "URL", "id": "URL-HYPkR", "name": "record", - "output_types": [ - "Record" - ] + "output_types": [] }, "targetHandle": { "fieldName": "reference_2", @@ -27,7 +25,7 @@ "id": "reactflow__edge-URL-HYPkR{œbaseClassesœ:[œRecordœ],œdataTypeœ:œURLœ,œidœ:œURL-HYPkRœ}-Prompt-Rse03{œfieldNameœ:œreference_2œ,œidœ:œPrompt-Rse03œ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œRecordœ,œTextœ],œtypeœ:œstrœ}", "selected": false, "source": "URL-HYPkR", - "sourceHandle": "{œdataTypeœ: œURLœ, œidœ: œURL-HYPkRœ, œoutput_typesœ: [œRecordœ], œnameœ: œrecordœ}", + "sourceHandle": "{œdataTypeœ: œURLœ, œidœ: œURL-HYPkRœ, œoutput_typesœ: [], œnameœ: œrecordœ}", "style": { "stroke": "#555" }, @@ -71,9 +69,7 @@ "dataType": "URL", "id": "URL-2cX90", "name": "record", - "output_types": [ - "Record" - ] + "output_types": [] }, "targetHandle": { "fieldName": "reference_1", @@ -89,7 +85,7 @@ }, "id": "reactflow__edge-URL-2cX90{œbaseClassesœ:[œRecordœ],œdataTypeœ:œURLœ,œidœ:œURL-2cX90œ}-Prompt-Rse03{œfieldNameœ:œreference_1œ,œidœ:œPrompt-Rse03œ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œRecordœ,œTextœ],œtypeœ:œstrœ}", "source": "URL-2cX90", - "sourceHandle": "{œdataTypeœ: œURLœ, œidœ: œURL-2cX90œ, œoutput_typesœ: [œRecordœ], œnameœ: œrecordœ}", + "sourceHandle": "{œdataTypeœ: œURLœ, œidœ: œURL-2cX90œ, œoutput_typesœ: [], œnameœ: œrecordœ}", "style": { "stroke": "#555" }, @@ -144,7 +140,7 @@ "id": "OpenAIModel-gi29P", "inputTypes": [ "Text", - "Record", + "Data", "Prompt" ], "type": "str" @@ -158,7 +154,7 @@ "stroke": "#555" }, "target": "OpenAIModel-gi29P", - "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-gi29Pœ, œinputTypesœ: [œTextœ, œRecordœ, œPromptœ], œtypeœ: œstrœ}" + "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: 
œOpenAIModel-gi29Pœ, œinputTypesœ: [œTextœ, œDataœ, œPromptœ], œtypeœ: œstrœ}" } ], "nodes": [ @@ -377,17 +373,17 @@ "frozen": false, "icon": "layout-template", "output_types": [ - "Record" + "Data" ], "outputs": [ { "cache": true, - "display_name": "Record", + "display_name": "Data", "method": null, - "name": "record", - "selected": "Record", + "name": "data", + "selected": "Data", "types": [ - "Record" + "Data" ], "value": "__UNDEFINED__" } @@ -410,7 +406,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from typing import Any, Dict\n\nfrom langchain_community.document_loaders.web_base import WebBaseLoader\n\nfrom langflow.custom import CustomComponent\nfrom langflow.schema import Record\n\n\nclass URLComponent(CustomComponent):\n display_name = \"URL\"\n description = \"Fetch content from one or more URLs.\"\n icon = \"layout-template\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"urls\": {\"display_name\": \"URL\"},\n }\n\n def build(\n self,\n urls: list[str],\n ) -> list[Record]:\n loader = WebBaseLoader(web_paths=[url for url in urls if url])\n docs = loader.load()\n records = self.to_records(docs)\n self.status = records\n return records\n" + "value": "from typing import Any, Dict\n\nfrom langchain_community.document_loaders.web_base import WebBaseLoader\n\nfrom langflow.custom import CustomComponent\nfrom langflow.schema import Data\n\n\nclass URLComponent(CustomComponent):\n display_name = \"URL\"\n description = \"Fetch content from one or more URLs.\"\n icon = \"layout-template\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"urls\": {\"display_name\": \"URL\"},\n }\n\n def build(\n self,\n urls: list[str],\n ) -> list[Data]:\n loader = WebBaseLoader(web_paths=[url for url in urls if url])\n docs = loader.load()\n data = self.to_data(docs)\n self.status = data\n return data\n" }, "urls": { "advanced": false, @@ -524,7 +520,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from 
langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\"),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Record Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Record to Text. 
If left empty, it will be dynamically set to the Record's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\", advanced=True),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -551,11 +547,11 @@ }, "record_template": { "advanced": true, - "display_name": "Record Template", + "display_name": "Data Template", "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", "input_types": [ "Text" ], @@ -598,7 +594,7 @@ "value": "" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -736,7 +732,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Record\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" + "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import 
BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Data\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" }, "input_value": { "advanced": false, @@ -747,7 +743,7 @@ "info": "", "input_types": [ "Text", - "Record", + "Data", "Prompt" ], "list": false, @@ -983,17 +979,17 @@ "frozen": false, "icon": "layout-template", "output_types": [ - "Record" + "Data" ], "outputs": [ { "cache": true, - "display_name": "Record", + 
"display_name": "Data", "method": null, - "name": "record", - "selected": "Record", + "name": "data", + "selected": "Data", "types": [ - "Record" + "Data" ], "value": "__UNDEFINED__" } @@ -1016,7 +1012,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from typing import Any, Dict\n\nfrom langchain_community.document_loaders.web_base import WebBaseLoader\n\nfrom langflow.custom import CustomComponent\nfrom langflow.schema import Record\n\n\nclass URLComponent(CustomComponent):\n display_name = \"URL\"\n description = \"Fetch content from one or more URLs.\"\n icon = \"layout-template\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"urls\": {\"display_name\": \"URL\"},\n }\n\n def build(\n self,\n urls: list[str],\n ) -> list[Record]:\n loader = WebBaseLoader(web_paths=[url for url in urls if url])\n docs = loader.load()\n records = self.to_records(docs)\n self.status = records\n return records\n" + "value": "from typing import Any, Dict\n\nfrom langchain_community.document_loaders.web_base import WebBaseLoader\n\nfrom langflow.custom import CustomComponent\nfrom langflow.schema import Data\n\n\nclass URLComponent(CustomComponent):\n display_name = \"URL\"\n description = \"Fetch content from one or more URLs.\"\n icon = \"layout-template\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"urls\": {\"display_name\": \"URL\"},\n }\n\n def build(\n self,\n urls: list[str],\n ) -> list[Data]:\n loader = WebBaseLoader(web_paths=[url for url in urls if url])\n docs = loader.load()\n data = self.to_data(docs)\n self.status = data\n return data\n" }, "urls": { "advanced": false, diff --git a/src/backend/base/langflow/initial_setup/starter_projects/Langflow Document QA.json b/src/backend/base/langflow/initial_setup/starter_projects/Langflow Document QA.json index a1724ab82..ecf0451ea 100644 --- a/src/backend/base/langflow/initial_setup/starter_projects/Langflow Document QA.json +++ 
b/src/backend/base/langflow/initial_setup/starter_projects/Langflow Document QA.json @@ -81,7 +81,7 @@ "id": "OpenAIModel-Bt067", "inputTypes": [ "Text", - "Record", + "Data", "Prompt" ], "type": "str" @@ -94,7 +94,7 @@ "stroke": "#555" }, "target": "OpenAIModel-Bt067", - "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-Bt067œ, œinputTypesœ: [œTextœ, œRecordœ, œPromptœ], œtypeœ: œstrœ}" + "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-Bt067œ, œinputTypesœ: [œTextœ, œDataœ, œPromptœ], œtypeœ: œstrœ}" }, { "className": "stroke-gray-900 stroke-connection", @@ -443,17 +443,6 @@ "icon": "ChatInput", "output_types": [], "outputs": [ - { - "cache": true, - "display_name": "Text", - "method": "text_response", - "name": "text", - "selected": "Text", - "types": [ - "Text" - ], - "value": "__UNDEFINED__" - }, { "cache": true, "display_name": "Message", @@ -484,7 +473,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n input_types=[],\n info=\"Message to be passed as input.\",\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"User\",\n info=\"Type of sender.\",\n advanced=True,\n ),\n StrInput(name=\"sender_name\", type=str, display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"User\"),\n StrInput(\n name=\"session_id\", type=str, display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", 
method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, (Message, str)):\n self.store_message(message)\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.inputs import DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n input_types=[],\n info=\"Message to be passed as input.\",\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"User\",\n info=\"Type of sender.\",\n advanced=True,\n ),\n StrInput(\n name=\"sender_name\",\n type=str,\n display_name=\"Sender Name\",\n info=\"Name of the sender.\",\n value=\"User\",\n advanced=True,\n ),\n StrInput(\n name=\"session_id\", type=str, display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True\n ),\n ]\n outputs = [\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, (Message, str)):\n self.store_message(message)\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ 
-533,7 +522,7 @@ "value": "" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -663,7 +652,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\"),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Record Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Record to Text. 
If left empty, it will be dynamically set to the Record's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\", advanced=True),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -690,11 +679,11 @@ }, "record_template": { "advanced": true, - "display_name": "Record Template", + "display_name": "Data Template", "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", "input_types": [ "Text" ], @@ -737,7 +726,7 @@ "value": "" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -880,7 +869,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Record\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" + "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import 
BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Data\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" }, "input_value": { "advanced": false, @@ -891,7 +880,7 @@ "info": "", "input_types": [ "Text", - "Record", + "Data", "Prompt" ], "list": false, diff --git a/src/backend/base/langflow/initial_setup/starter_projects/Langflow Memory Conversation.json 
b/src/backend/base/langflow/initial_setup/starter_projects/Langflow Memory Conversation.json index f80dec5ba..fb7bd84b4 100644 --- a/src/backend/base/langflow/initial_setup/starter_projects/Langflow Memory Conversation.json +++ b/src/backend/base/langflow/initial_setup/starter_projects/Langflow Memory Conversation.json @@ -83,7 +83,7 @@ "id": "OpenAIModel-9RykF", "inputTypes": [ "Text", - "Record", + "Data", "Prompt" ], "type": "str" @@ -96,7 +96,7 @@ "stroke": "#555" }, "target": "OpenAIModel-9RykF", - "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-9RykFœ, œinputTypesœ: [œTextœ, œRecordœ, œPromptœ], œtypeœ: œstrœ}" + "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-9RykFœ, œinputTypesœ: [œTextœ, œDataœ, œPromptœ], œtypeœ: œstrœ}" }, { "className": "stroke-gray-900 stroke-connection", @@ -187,17 +187,6 @@ "icon": "ChatInput", "output_types": [], "outputs": [ - { - "cache": true, - "display_name": "Text", - "method": "text_response", - "name": "text", - "selected": "Text", - "types": [ - "Text" - ], - "value": "__UNDEFINED__" - }, { "cache": true, "display_name": "Message", @@ -228,7 +217,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n input_types=[],\n info=\"Message to be passed as input.\",\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"User\",\n info=\"Type of sender.\",\n advanced=True,\n ),\n StrInput(name=\"sender_name\", type=str, display_name=\"Sender Name\", info=\"Name of 
the sender.\", value=\"User\"),\n StrInput(\n name=\"session_id\", type=str, display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, (Message, str)):\n self.store_message(message)\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.inputs import DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n input_types=[],\n info=\"Message to be passed as input.\",\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"User\",\n info=\"Type of sender.\",\n advanced=True,\n ),\n StrInput(\n name=\"sender_name\",\n type=str,\n display_name=\"Sender Name\",\n info=\"Name of the sender.\",\n value=\"User\",\n advanced=True,\n ),\n StrInput(\n name=\"session_id\", type=str, display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True\n ),\n ]\n outputs = [\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n 
sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, (Message, str)):\n self.store_message(message)\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -277,7 +266,7 @@ "value": "" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -407,7 +396,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\"),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Record Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Record to Text. 
If left empty, it will be dynamically set to the Record's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\", advanced=True),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -434,11 +423,11 @@ }, "record_template": { "advanced": true, - "display_name": "Record Template", + "display_name": "Data Template", "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", "input_types": [ "Text" ], @@ -481,7 +470,7 @@ "value": "" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -604,7 +593,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from typing import Optional\n\nfrom langflow.base.memory.memory import BaseMemoryComponent\nfrom langflow.field_typing import Text\nfrom langflow.helpers.record import messages_to_text\nfrom langflow.memory import get_messages\nfrom langflow.schema.message import Message\n\n\nclass MemoryComponent(BaseMemoryComponent):\n display_name = \"Chat Memory\"\n description = \"Retrieves stored chat messages given a specific Session ID.\"\n beta: bool = True\n icon = \"history\"\n\n def build_config(self):\n return {\n \"sender\": {\n \"options\": [\"Machine\", \"User\", \"Machine and User\"],\n \"display_name\": \"Sender Type\",\n },\n \"sender_name\": {\"display_name\": \"Sender Name\", \"advanced\": True},\n \"n_messages\": {\n \"display_name\": \"Number of Messages\",\n \"info\": \"Number of messages to retrieve.\",\n },\n \"session_id\": {\n \"display_name\": \"Session ID\",\n \"info\": \"Session ID of the chat history.\",\n \"input_types\": [\"Text\"],\n },\n \"order\": {\n \"options\": [\"Ascending\", \"Descending\"],\n \"display_name\": \"Order\",\n \"info\": \"Order of the messages.\",\n \"advanced\": True,\n },\n \"record_template\": {\n \"display_name\": \"Record Template\",\n \"multiline\": True,\n \"info\": \"Template to convert Record to Text. 
If left empty, it will be dynamically set to the Record's text key.\",\n \"advanced\": True,\n },\n }\n\n def get_messages(self, **kwargs) -> list[Message]:\n # Validate kwargs by checking if it contains the correct keys\n if \"sender\" not in kwargs:\n kwargs[\"sender\"] = None\n if \"sender_name\" not in kwargs:\n kwargs[\"sender_name\"] = None\n if \"session_id\" not in kwargs:\n kwargs[\"session_id\"] = None\n if \"limit\" not in kwargs:\n kwargs[\"limit\"] = 5\n if \"order\" not in kwargs:\n kwargs[\"order\"] = \"Descending\"\n\n kwargs[\"order\"] = \"DESC\" if kwargs[\"order\"] == \"Descending\" else \"ASC\"\n if kwargs[\"sender\"] == \"Machine and User\":\n kwargs[\"sender\"] = None\n return get_messages(**kwargs)\n\n def build(\n self,\n sender: Optional[str] = \"Machine and User\",\n sender_name: Optional[str] = None,\n session_id: Optional[str] = None,\n n_messages: int = 5,\n order: Optional[str] = \"Descending\",\n record_template: Optional[str] = \"{sender_name}: {text}\",\n ) -> Text:\n messages = self.get_messages(\n sender=sender,\n sender_name=sender_name,\n session_id=session_id,\n limit=n_messages,\n order=order,\n )\n messages_str = messages_to_text(template=record_template or \"\", messages=messages)\n self.status = messages_str\n return messages_str\n" + "value": "from typing import Optional\n\nfrom langflow.base.memory.memory import BaseMemoryComponent\nfrom langflow.field_typing import Text\nfrom langflow.helpers.record import messages_to_text\nfrom langflow.memory import get_messages\nfrom langflow.schema.message import Message\n\n\nclass MemoryComponent(BaseMemoryComponent):\n display_name = \"Chat Memory\"\n description = \"Retrieves stored chat messages given a specific Session ID.\"\n beta: bool = True\n icon = \"history\"\n\n def build_config(self):\n return {\n \"sender\": {\n \"options\": [\"Machine\", \"User\", \"Machine and User\"],\n \"display_name\": \"Sender Type\",\n },\n \"sender_name\": {\"display_name\": \"Sender Name\", 
\"advanced\": True},\n \"n_messages\": {\n \"display_name\": \"Number of Messages\",\n \"info\": \"Number of messages to retrieve.\",\n },\n \"session_id\": {\n \"display_name\": \"Session ID\",\n \"info\": \"Session ID of the chat history.\",\n \"input_types\": [\"Text\"],\n },\n \"order\": {\n \"options\": [\"Ascending\", \"Descending\"],\n \"display_name\": \"Order\",\n \"info\": \"Order of the messages.\",\n \"advanced\": True,\n },\n \"record_template\": {\n \"display_name\": \"Data Template\",\n \"multiline\": True,\n \"info\": \"Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.\",\n \"advanced\": True,\n },\n }\n\n def get_messages(self, **kwargs) -> list[Message]:\n # Validate kwargs by checking if it contains the correct keys\n if \"sender\" not in kwargs:\n kwargs[\"sender\"] = None\n if \"sender_name\" not in kwargs:\n kwargs[\"sender_name\"] = None\n if \"session_id\" not in kwargs:\n kwargs[\"session_id\"] = None\n if \"limit\" not in kwargs:\n kwargs[\"limit\"] = 5\n if \"order\" not in kwargs:\n kwargs[\"order\"] = \"Descending\"\n\n kwargs[\"order\"] = \"DESC\" if kwargs[\"order\"] == \"Descending\" else \"ASC\"\n if kwargs[\"sender\"] == \"Machine and User\":\n kwargs[\"sender\"] = None\n return get_messages(**kwargs)\n\n def build(\n self,\n sender: Optional[str] = \"Machine and User\",\n sender_name: Optional[str] = None,\n session_id: Optional[str] = None,\n n_messages: int = 5,\n order: Optional[str] = \"Descending\",\n record_template: Optional[str] = \"{sender_name}: {text}\",\n ) -> Text:\n messages = self.get_messages(\n sender=sender,\n sender_name=sender_name,\n session_id=session_id,\n limit=n_messages,\n order=order,\n )\n messages_str = messages_to_text(template=record_template or \"\", messages=messages)\n self.status = messages_str\n return messages_str\n" }, "n_messages": { "advanced": false, @@ -653,11 +642,11 @@ }, "record_template": { "advanced": true, - "display_name": "Record 
Template", + "display_name": "Data Template", "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.", "input_types": [ "Text" ], @@ -1012,7 +1001,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Record\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" + "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import 
BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Data\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" }, "input_value": { "advanced": false, @@ -1023,7 +1012,7 @@ "info": "", "input_types": [ "Text", - "Record", + "Data", "Prompt" ], "list": false, diff --git a/src/backend/base/langflow/initial_setup/starter_projects/Langflow Prompt Chaining.json 
b/src/backend/base/langflow/initial_setup/starter_projects/Langflow Prompt Chaining.json index 4876fe2d9..76699b87f 100644 --- a/src/backend/base/langflow/initial_setup/starter_projects/Langflow Prompt Chaining.json +++ b/src/backend/base/langflow/initial_setup/starter_projects/Langflow Prompt Chaining.json @@ -79,7 +79,7 @@ "id": "OpenAIModel-uYXZJ", "inputTypes": [ "Text", - "Record", + "Data", "Prompt" ], "type": "str" @@ -92,7 +92,7 @@ "stroke": "#555" }, "target": "OpenAIModel-uYXZJ", - "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-uYXZJœ, œinputTypesœ: [œTextœ, œRecordœ, œPromptœ], œtypeœ: œstrœ}" + "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-uYXZJœ, œinputTypesœ: [œTextœ, œDataœ, œPromptœ], œtypeœ: œstrœ}" }, { "className": "stroke-gray-900 stroke-connection", @@ -202,7 +202,7 @@ "id": "OpenAIModel-XawYB", "inputTypes": [ "Text", - "Record", + "Data", "Prompt" ], "type": "str" @@ -215,7 +215,7 @@ "stroke": "#555" }, "target": "OpenAIModel-XawYB", - "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-XawYBœ, œinputTypesœ: [œTextœ, œRecordœ, œPromptœ], œtypeœ: œstrœ}" + "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-XawYBœ, œinputTypesœ: [œTextœ, œDataœ, œPromptœ], œtypeœ: œstrœ}" }, { "className": "stroke-gray-900 stroke-connection", @@ -598,7 +598,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n 
DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\"),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Record Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n 
options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\", advanced=True),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -625,11 +625,11 @@ }, "record_template": { "advanced": true, - "display_name": "Record Template", + "display_name": "Data Template", "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", "input_types": [ "Text" ], @@ -672,7 +672,7 @@ "value": "" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -799,7 +799,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\"),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Record Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Record to Text. 
If left empty, it will be dynamically set to the Record's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\", advanced=True),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -826,11 +826,11 @@ }, "record_template": { "advanced": true, - "display_name": "Record Template", + "display_name": "Data Template", "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", "input_types": [ "Text" ], @@ -873,7 +873,7 @@ "value": "" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -983,7 +983,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.text import TextComponent\nfrom langflow.field_typing import Text\nfrom langflow.template import Input, Output\n\n\nclass TextInput(TextComponent):\n display_name = \"Text Input\"\n description = \"Get text inputs from the Playground.\"\n icon = \"type\"\n\n inputs = [\n Input(\n name=\"input_value\",\n type=str,\n display_name=\"Value\",\n info=\"Text or Record to be passed as input.\",\n input_types=[\"Record\", \"Text\"],\n ),\n Input(\n name=\"record_template\",\n type=str,\n display_name=\"Record Template\",\n multiline=True,\n info=\"Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n ]\n\n def text_response(self) -> Text:\n return self.build(input_value=self.input_value, record_template=self.record_template)\n" + "value": "from langflow.base.io.text import TextComponent\nfrom langflow.field_typing import Text\nfrom langflow.template import Input, Output\n\n\nclass TextInput(TextComponent):\n display_name = \"Text Input\"\n description = \"Get text inputs from the Playground.\"\n icon = \"type\"\n\n inputs = [\n Input(\n name=\"input_value\",\n type=str,\n display_name=\"Value\",\n info=\"Text or Data to be passed as input.\",\n input_types=[\"Data\", \"Text\"],\n ),\n Input(\n name=\"record_template\",\n type=str,\n display_name=\"Data Template\",\n multiline=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n ]\n\n def text_response(self) -> Text:\n return self.build(input_value=self.input_value, record_template=self.record_template)\n" }, "input_value": { "advanced": false, @@ -991,9 +991,9 @@ "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Text or Record to be passed as input.", + "info": "Text or Data to be passed as input.", "input_types": [ - "Record", + "Data", "Text" ], "list": false, @@ -1010,11 +1010,11 @@ }, "record_template": { "advanced": true, - "display_name": "Record Template", + "display_name": "Data Template", "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.", "input_types": [ "Text" ], @@ -1238,7 +1238,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Record\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum 
number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n 
model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" + "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Data\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" }, "input_value": { "advanced": false, @@ -1249,7 +1249,7 @@ "info": "", "input_types": [ "Text", - "Record", + "Data", "Prompt" ], "list": false, @@ -1655,7 +1655,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import 
SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Record\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" + "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import 
BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Data\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" }, "input_value": { "advanced": false, @@ -1666,7 +1666,7 @@ "info": "", "input_types": [ "Text", - "Record", + "Data", "Prompt" ], "list": false, diff --git a/src/backend/base/langflow/initial_setup/starter_projects/VectorStore-RAG-Flows.json 
b/src/backend/base/langflow/initial_setup/starter_projects/VectorStore-RAG-Flows.json index 132869f72..d5ab19839 100644 --- a/src/backend/base/langflow/initial_setup/starter_projects/VectorStore-RAG-Flows.json +++ b/src/backend/base/langflow/initial_setup/starter_projects/VectorStore-RAG-Flows.json @@ -83,7 +83,7 @@ "id": "OpenAIModel-EjXlN", "inputTypes": [ "Text", - "Record", + "Data", "Prompt" ], "type": "str" @@ -97,7 +97,7 @@ "stroke": "#555" }, "target": "OpenAIModel-EjXlN", - "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-EjXlNœ, œinputTypesœ: [œTextœ, œRecordœ, œPromptœ], œtypeœ: œstrœ}" + "targetHandle": "{œfieldNameœ: œinput_valueœ, œidœ: œOpenAIModel-EjXlNœ, œinputTypesœ: [œTextœ, œDataœ, œPromptœ], œtypeœ: œstrœ}" }, { "className": "stroke-gray-900 stroke-connection", @@ -137,16 +137,14 @@ "dataType": "File", "id": "File-t0a6a", "name": "record", - "output_types": [ - "Record" - ] + "output_types": [] }, "targetHandle": { "fieldName": "inputs", "id": "RecursiveCharacterTextSplitter-tR9QM", "inputTypes": [ "Document", - "Record" + "Data" ], "type": "Document" } @@ -154,12 +152,12 @@ "id": "reactflow__edge-File-t0a6a{œbaseClassesœ:[œRecordœ],œdataTypeœ:œFileœ,œidœ:œFile-t0a6aœ}-RecursiveCharacterTextSplitter-tR9QM{œfieldNameœ:œinputsœ,œidœ:œRecursiveCharacterTextSplitter-tR9QMœ,œinputTypesœ:[œDocumentœ,œRecordœ],œtypeœ:œDocumentœ}", "selected": false, "source": "File-t0a6a", - "sourceHandle": "{œdataTypeœ: œFileœ, œidœ: œFile-t0a6aœ, œoutput_typesœ: [œRecordœ], œnameœ: œrecordœ}", + "sourceHandle": "{œdataTypeœ: œFileœ, œidœ: œFile-t0a6aœ, œoutput_typesœ: [], œnameœ: œrecordœ}", "style": { "stroke": "#555" }, "target": "RecursiveCharacterTextSplitter-tR9QM", - "targetHandle": "{œfieldNameœ: œinputsœ, œidœ: œRecursiveCharacterTextSplitter-tR9QMœ, œinputTypesœ: [œDocumentœ, œRecordœ], œtypeœ: œDocumentœ}" + "targetHandle": "{œfieldNameœ: œinputsœ, œidœ: œRecursiveCharacterTextSplitter-tR9QMœ, œinputTypesœ: [œDocumentœ, œDataœ], œtypeœ: 
œDocumentœ}" }, { "className": "stroke-gray-900 stroke-connection", @@ -224,9 +222,7 @@ "dataType": "RecursiveCharacterTextSplitter", "id": "RecursiveCharacterTextSplitter-tR9QM", "name": "record", - "output_types": [ - "Record" - ] + "output_types": [] }, "targetHandle": { "fieldName": "inputs", @@ -238,7 +234,7 @@ "id": "reactflow__edge-RecursiveCharacterTextSplitter-tR9QM{œbaseClassesœ:[œRecordœ],œdataTypeœ:œRecursiveCharacterTextSplitterœ,œidœ:œRecursiveCharacterTextSplitter-tR9QMœ}-AstraDB-eUCSS{œfieldNameœ:œinputsœ,œidœ:œAstraDB-eUCSSœ,œinputTypesœ:null,œtypeœ:œRecordœ}", "selected": false, "source": "RecursiveCharacterTextSplitter-tR9QM", - "sourceHandle": "{œdataTypeœ: œRecursiveCharacterTextSplitterœ, œidœ: œRecursiveCharacterTextSplitter-tR9QMœ, œoutput_typesœ: [œRecordœ], œnameœ: œrecordœ}", + "sourceHandle": "{œdataTypeœ: œRecursiveCharacterTextSplitterœ, œidœ: œRecursiveCharacterTextSplitter-tR9QMœ, œoutput_typesœ: [], œnameœ: œrecordœ}", "style": { "stroke": "#555" }, @@ -280,9 +276,7 @@ "dataType": "AstraDBSearch", "id": "AstraDBSearch-41nRz", "name": "record", - "output_types": [ - "Record" - ] + "output_types": [] }, "targetHandle": { "fieldName": "input_value", @@ -296,7 +290,7 @@ }, "id": "reactflow__edge-AstraDBSearch-41nRz{œbaseClassesœ:[œRecordœ],œdataTypeœ:œAstraDBSearchœ,œidœ:œAstraDBSearch-41nRzœ}-TextOutput-BDknO{œfieldNameœ:œinput_valueœ,œidœ:œTextOutput-BDknOœ,œinputTypesœ:[œRecordœ,œTextœ],œtypeœ:œstrœ}", "source": "AstraDBSearch-41nRz", - "sourceHandle": "{œdataTypeœ: œAstraDBSearchœ, œidœ: œAstraDBSearch-41nRzœ, œoutput_typesœ: [œRecordœ], œnameœ: œrecordœ}", + "sourceHandle": "{œdataTypeœ: œAstraDBSearchœ, œidœ: œAstraDBSearch-41nRzœ, œoutput_typesœ: [], œnameœ: œrecordœ}", "style": { "stroke": "#555" }, @@ -332,17 +326,6 @@ "icon": "ChatInput", "output_types": [], "outputs": [ - { - "cache": true, - "display_name": "Text", - "method": "text_response", - "name": "text", - "selected": "Text", - "types": [ - "Text" - ], - "value": 
"__UNDEFINED__" - }, { "cache": true, "display_name": "Message", @@ -373,7 +356,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n input_types=[],\n info=\"Message to be passed as input.\",\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"User\",\n info=\"Type of sender.\",\n advanced=True,\n ),\n StrInput(name=\"sender_name\", type=str, display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"User\"),\n StrInput(\n name=\"session_id\", type=str, display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, (Message, str)):\n self.store_message(message)\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.inputs import DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatInput(ChatComponent):\n display_name = 
\"Chat Input\"\n description = \"Get chat inputs from the Playground.\"\n icon = \"ChatInput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n input_types=[],\n info=\"Message to be passed as input.\",\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"User\",\n info=\"Type of sender.\",\n advanced=True,\n ),\n StrInput(\n name=\"sender_name\",\n type=str,\n display_name=\"Sender Name\",\n info=\"Name of the sender.\",\n value=\"User\",\n advanced=True,\n ),\n StrInput(\n name=\"session_id\", type=str, display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True\n ),\n ]\n outputs = [\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, (Message, str)):\n self.store_message(message)\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -422,7 +405,7 @@ "value": "" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -1203,7 +1186,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description 
= \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Record\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs 
= self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" + "value": "from langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.constants import STREAM_INFO_TEXT\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import MODEL_NAMES\nfrom langflow.field_typing import BaseLanguageModel, Text\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, FloatInput, SecretStrInput, StrInput\nfrom langflow.inputs.inputs import IntInput\nfrom langflow.template import Output\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n\n inputs = [\n StrInput(name=\"input_value\", display_name=\"Input\", input_types=[\"Text\", \"Data\", \"Prompt\"]),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n ),\n DictInput(name=\"model_kwargs\", display_name=\"Model Kwargs\", advanced=True),\n DropdownInput(\n name=\"model_name\", display_name=\"Model Name\", advanced=False, options=MODEL_NAMES, value=MODEL_NAMES[0]\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\\n\\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"openai_api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n ),\n FloatInput(name=\"temperature\", display_name=\"Temperature\", value=0.1),\n BoolInput(name=\"stream\", display_name=\"Stream\", info=STREAM_INFO_TEXT, advanced=True),\n StrInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"System message to pass to the model.\",\n advanced=True,\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text_output\", method=\"text_response\"),\n Output(display_name=\"Language Model\", name=\"model_output\", method=\"build_model\"),\n ]\n\n def text_response(self) -> Text:\n input_value = self.input_value\n stream = self.stream\n system_message = self.system_message\n output = self.build_model()\n result = self.get_chat_result(output, stream, input_value, system_message)\n self.status = result\n return result\n\n def build_model(self) -> BaseLanguageModel:\n openai_api_key = self.openai_api_key\n temperature = self.temperature\n model_name = self.model_name\n max_tokens = self.max_tokens\n model_kwargs = self.model_kwargs\n openai_api_base = self.openai_api_base or \"https://api.openai.com/v1\"\n\n if openai_api_key:\n api_key = SecretStr(openai_api_key)\n else:\n api_key = None\n\n output = ChatOpenAI(\n max_tokens=max_tokens or None,\n model_kwargs=model_kwargs or {},\n model=model_name,\n base_url=openai_api_base,\n api_key=api_key,\n temperature=temperature,\n )\n return output\n" }, "input_value": { "advanced": false, @@ -1214,7 +1197,7 @@ "info": "", "input_types": [ "Text", - "Record", + "Data", "Prompt" ], "list": false, @@ -1669,7 +1652,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing 
import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\"),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Record Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Record to Text. 
If left empty, it will be dynamically set to the Record's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" + "value": "from langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.inputs import BoolInput, DropdownInput, StrInput\nfrom langflow.schema.message import Message\nfrom langflow.template import Output\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n icon = \"ChatOutput\"\n\n inputs = [\n StrInput(\n name=\"input_value\",\n display_name=\"Text\",\n multiline=True,\n info=\"Message to be passed as output.\",\n input_types=[\"Text\", \"Message\"],\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[\"Machine\", \"User\"],\n value=\"Machine\",\n advanced=True,\n info=\"Type of sender.\",\n ),\n StrInput(name=\"sender_name\", display_name=\"Sender Name\", info=\"Name of the sender.\", value=\"AI\", advanced=True),\n StrInput(name=\"session_id\", display_name=\"Session ID\", info=\"Session ID for the message.\", advanced=True),\n BoolInput(\n name=\"record_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n ]\n outputs = [\n Output(display_name=\"Text\", name=\"text\", method=\"text_response\"),\n Output(display_name=\"Message\", name=\"message\", method=\"message_response\"),\n ]\n\n def text_response(self) -> Text:\n result = self.input_value\n if self.session_id:\n self.message_response()\n self.status = result\n return result\n\n def message_response(self) -> Message:\n message = Message(\n text=self.input_value,\n sender=self.sender,\n sender_name=self.sender_name,\n session_id=self.session_id,\n )\n if self.session_id and isinstance(message, Message) and isinstance(message.text, str):\n self.store_message(message)\n self.message.value = message\n\n self.status = message\n return message\n" }, "input_value": { "advanced": false, @@ -1696,11 +1679,11 @@ }, "record_template": { "advanced": true, - "display_name": "Record Template", + "display_name": "Data Template", "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.", + "info": "Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.", "input_types": [ "Text" ], @@ -1743,7 +1726,7 @@ "value": "" }, "sender_name": { - "advanced": false, + "advanced": true, "display_name": "Sender Name", "dynamic": false, "fileTypes": [], @@ -1825,17 +1808,17 @@ "frozen": false, "icon": "file-text", "output_types": [ - "Record" + "Data" ], "outputs": [ { "cache": true, - "display_name": "Record", + "display_name": "Data", "method": null, - "name": "record", - "selected": "Record", + "name": "data", + "selected": "Data", "types": [ - "Record" + "Data" ], "value": "__UNDEFINED__" } @@ -1858,7 +1841,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from pathlib import Path\nfrom typing import Any, Dict\n\nfrom langflow.base.data.utils import TEXT_FILE_TYPES, parse_text_file_to_record\nfrom langflow.custom import CustomComponent\nfrom langflow.schema import Record\n\n\nclass FileComponent(CustomComponent):\n display_name = \"File\"\n description = \"A generic file loader.\"\n icon = \"file-text\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"path\": {\n \"display_name\": \"Path\",\n \"field_type\": \"file\",\n \"file_types\": TEXT_FILE_TYPES,\n \"info\": f\"Supported file types: {', '.join(TEXT_FILE_TYPES)}\",\n },\n \"silent_errors\": {\n \"display_name\": \"Silent Errors\",\n \"advanced\": True,\n \"info\": \"If true, errors will not raise an exception.\",\n },\n }\n\n def load_file(self, path: str, silent_errors: bool = False) -> Record:\n resolved_path = self.resolve_path(path)\n path_obj = Path(resolved_path)\n extension = path_obj.suffix[1:].lower()\n if extension == \"doc\":\n raise ValueError(\"doc files are not supported. 
Please save as .docx\")\n if extension not in TEXT_FILE_TYPES:\n raise ValueError(f\"Unsupported file type: {extension}\")\n record = parse_text_file_to_record(resolved_path, silent_errors)\n self.status = record if record else \"No data\"\n return record or Record()\n\n def build(\n self,\n path: str,\n silent_errors: bool = False,\n ) -> Record:\n record = self.load_file(path, silent_errors)\n self.status = record\n return record\n" + "value": "from pathlib import Path\nfrom typing import Any, Dict\n\nfrom langflow.base.data.utils import TEXT_FILE_TYPES, parse_text_file_to_record\nfrom langflow.custom import CustomComponent\nfrom langflow.schema import Data\n\n\nclass FileComponent(CustomComponent):\n display_name = \"File\"\n description = \"A generic file loader.\"\n icon = \"file-text\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"path\": {\n \"display_name\": \"Path\",\n \"field_type\": \"file\",\n \"file_types\": TEXT_FILE_TYPES,\n \"info\": f\"Supported file types: {', '.join(TEXT_FILE_TYPES)}\",\n },\n \"silent_errors\": {\n \"display_name\": \"Silent Errors\",\n \"advanced\": True,\n \"info\": \"If true, errors will not raise an exception.\",\n },\n }\n\n def load_file(self, path: str, silent_errors: bool = False) -> Data:\n resolved_path = self.resolve_path(path)\n path_obj = Path(resolved_path)\n extension = path_obj.suffix[1:].lower()\n if extension == \"doc\":\n raise ValueError(\"doc files are not supported. 
Please save as .docx\")\n if extension not in TEXT_FILE_TYPES:\n raise ValueError(f\"Unsupported file type: {extension}\")\n record = parse_text_file_to_record(resolved_path, silent_errors)\n self.status = record if record else \"No data\"\n return record or Data()\n\n def build(\n self,\n path: str,\n silent_errors: bool = False,\n ) -> Data:\n record = self.load_file(path, silent_errors)\n self.status = record\n return record\n" }, "path": { "advanced": false, @@ -1957,17 +1940,17 @@ "field_order": [], "frozen": false, "output_types": [ - "Record" + "Data" ], "outputs": [ { "cache": true, - "display_name": "Record", + "display_name": "Data", "method": null, - "name": "record", - "selected": "Record", + "name": "data", + "selected": "Data", "types": [ - "Record" + "Data" ], "value": "__UNDEFINED__" } @@ -2028,7 +2011,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from typing import Optional\n\nfrom langchain_core.documents import Document\nfrom langchain_text_splitters import RecursiveCharacterTextSplitter\n\nfrom langflow.custom import CustomComponent\nfrom langflow.schema import Record\nfrom langflow.utils.util import build_loader_repr_from_records, unescape_string\n\n\nclass RecursiveCharacterTextSplitterComponent(CustomComponent):\n display_name: str = \"Recursive Character Text Splitter\"\n description: str = \"Split text into chunks of a specified length.\"\n documentation: str = \"https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter\"\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Input\",\n \"info\": \"The texts to split.\",\n \"input_types\": [\"Document\", \"Record\"],\n },\n \"separators\": {\n \"display_name\": \"Separators\",\n \"info\": 'The characters to split on.\\nIf left empty defaults to [\"\\\\n\\\\n\", \"\\\\n\", \" \", \"\"].',\n \"is_list\": True,\n },\n \"chunk_size\": {\n \"display_name\": \"Chunk Size\",\n \"info\": \"The maximum length of each chunk.\",\n 
\"field_type\": \"int\",\n \"value\": 1000,\n },\n \"chunk_overlap\": {\n \"display_name\": \"Chunk Overlap\",\n \"info\": \"The amount of overlap between chunks.\",\n \"field_type\": \"int\",\n \"value\": 200,\n },\n \"code\": {\"show\": False},\n }\n\n def build(\n self,\n inputs: list[Document],\n separators: Optional[list[str]] = None,\n chunk_size: Optional[int] = 1000,\n chunk_overlap: Optional[int] = 200,\n ) -> list[Record]:\n \"\"\"\n Split text into chunks of a specified length.\n\n Args:\n separators (list[str]): The characters to split on.\n chunk_size (int): The maximum length of each chunk.\n chunk_overlap (int): The amount of overlap between chunks.\n length_function (function): The function to use to calculate the length of the text.\n\n Returns:\n list[str]: The chunks of text.\n \"\"\"\n\n if separators == \"\":\n separators = None\n elif separators:\n # check if the separators list has escaped characters\n # if there are escaped characters, unescape them\n separators = [unescape_string(x) for x in separators]\n\n # Make sure chunk_size and chunk_overlap are ints\n if isinstance(chunk_size, str):\n chunk_size = int(chunk_size)\n if isinstance(chunk_overlap, str):\n chunk_overlap = int(chunk_overlap)\n splitter = RecursiveCharacterTextSplitter(\n separators=separators,\n chunk_size=chunk_size,\n chunk_overlap=chunk_overlap,\n )\n documents = []\n for _input in inputs:\n if isinstance(_input, Record):\n documents.append(_input.to_lc_document())\n else:\n documents.append(_input)\n docs = splitter.split_documents(documents)\n records = self.to_records(docs)\n self.repr_value = build_loader_repr_from_records(records)\n return records\n" + "value": "from typing import Optional\n\nfrom langchain_core.documents import Document\nfrom langchain_text_splitters import RecursiveCharacterTextSplitter\n\nfrom langflow.custom import CustomComponent\nfrom langflow.schema import Data\nfrom langflow.utils.util import build_loader_repr_from_data, 
unescape_string\n\n\nclass RecursiveCharacterTextSplitterComponent(CustomComponent):\n display_name: str = \"Recursive Character Text Splitter\"\n description: str = \"Split text into chunks of a specified length.\"\n documentation: str = \"https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter\"\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Input\",\n \"info\": \"The texts to split.\",\n \"input_types\": [\"Document\", \"Data\"],\n },\n \"separators\": {\n \"display_name\": \"Separators\",\n \"info\": 'The characters to split on.\\nIf left empty defaults to [\"\\\\n\\\\n\", \"\\\\n\", \" \", \"\"].',\n \"is_list\": True,\n },\n \"chunk_size\": {\n \"display_name\": \"Chunk Size\",\n \"info\": \"The maximum length of each chunk.\",\n \"field_type\": \"int\",\n \"value\": 1000,\n },\n \"chunk_overlap\": {\n \"display_name\": \"Chunk Overlap\",\n \"info\": \"The amount of overlap between chunks.\",\n \"field_type\": \"int\",\n \"value\": 200,\n },\n \"code\": {\"show\": False},\n }\n\n def build(\n self,\n inputs: list[Document],\n separators: Optional[list[str]] = None,\n chunk_size: Optional[int] = 1000,\n chunk_overlap: Optional[int] = 200,\n ) -> list[Data]:\n \"\"\"\n Split text into chunks of a specified length.\n\n Args:\n separators (list[str]): The characters to split on.\n chunk_size (int): The maximum length of each chunk.\n chunk_overlap (int): The amount of overlap between chunks.\n length_function (function): The function to use to calculate the length of the text.\n\n Returns:\n list[str]: The chunks of text.\n \"\"\"\n\n if separators == \"\":\n separators = None\n elif separators:\n # check if the separators list has escaped characters\n # if there are escaped characters, unescape them\n separators = [unescape_string(x) for x in separators]\n\n # Make sure chunk_size and chunk_overlap are ints\n if isinstance(chunk_size, str):\n chunk_size = int(chunk_size)\n if isinstance(chunk_overlap, 
str):\n chunk_overlap = int(chunk_overlap)\n splitter = RecursiveCharacterTextSplitter(\n separators=separators,\n chunk_size=chunk_size,\n chunk_overlap=chunk_overlap,\n )\n documents = []\n for _input in inputs:\n if isinstance(_input, Data):\n documents.append(_input.to_lc_document())\n else:\n documents.append(_input)\n docs = splitter.split_documents(documents)\n data = self.to_data(docs)\n self.repr_value = build_loader_repr_from_data(data)\n return data\n" }, "inputs": { "advanced": false, @@ -2039,7 +2022,7 @@ "info": "The texts to split.", "input_types": [ "Document", - "Record" + "Data" ], "list": true, "load_from_db": false, @@ -2137,17 +2120,17 @@ "frozen": false, "icon": "AstraDB", "output_types": [ - "Record" + "Data" ], "outputs": [ { "cache": true, - "display_name": "Record", + "display_name": "Data", "method": null, - "name": "record", - "selected": "Record", + "name": "data", + "selected": "Data", "types": [ - "Record" + "Data" ], "value": "__UNDEFINED__" } @@ -2182,7 +2165,7 @@ "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Optional number of records to process in a single batch.", + "info": "Optional number of data to process in a single batch.", "list": false, "load_from_db": false, "multiline": false, @@ -2236,7 +2219,7 @@ "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Optional concurrency level for bulk insert operations that overwrite existing records.", + "info": "Optional concurrency level for bulk insert operations that overwrite existing data.", "list": false, "load_from_db": false, "multiline": false, @@ -2264,7 +2247,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from typing import List, Optional\n\nfrom langflow.components.vectorstores.AstraDB import AstraDBVectorStoreComponent\nfrom langflow.components.vectorstores.base.model import LCVectorStoreComponent\nfrom langflow.field_typing import Embeddings, Text\nfrom langflow.schema import Record\n\n\nclass 
AstraDBSearchComponent(LCVectorStoreComponent):\n display_name = \"Astra DB Search\"\n description = \"Searches an existing Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"input_value\", \"embedding\"]\n\n def build_config(self):\n return {\n \"search_type\": {\n \"display_name\": \"Search Type\",\n \"options\": [\"Similarity\", \"MMR\"],\n },\n \"input_value\": {\n \"display_name\": \"Input Value\",\n \"info\": \"Input value to search\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": \"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n \"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of records to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": True,\n },\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing records.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete 
Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n \"info\": \"Configuration mode for setting up the vector store, with options like “Sync”, “Async”, or “Off”.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n \"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n \"number_of_results\": {\n \"display_name\": \"Number of Results\",\n \"info\": \"Number of results to return.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n collection_name: str,\n input_value: Text,\n token: str,\n api_endpoint: str,\n search_type: str = \"Similarity\",\n number_of_results: int = 4,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n bulk_insert_overwrite_concurrency: Optional[int] = None,\n bulk_delete_concurrency: Optional[int] = None,\n setup_mode: str = \"Sync\",\n pre_delete_collection: bool = False,\n metadata_indexing_include: Optional[List[str]] = None,\n metadata_indexing_exclude: Optional[List[str]] = None,\n collection_indexing_policy: Optional[dict] = None,\n ) 
-> List[Record]:\n vector_store = AstraDBVectorStoreComponent().build(\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n try:\n return self.search_with_vector_store(input_value, search_type, vector_store, k=number_of_results)\n except KeyError as e:\n if \"content\" in str(e):\n raise ValueError(\n \"You should ingest data through Langflow (or LangChain) to query it in Langflow. Your collection does not contain a field name 'content'.\"\n )\n else:\n raise e\n" + "value": "from typing import List, Optional\n\nfrom langflow.components.vectorstores.AstraDB import AstraDBVectorStoreComponent\nfrom langflow.components.vectorstores.base.model import LCVectorStoreComponent\nfrom langflow.field_typing import Embeddings, Text\nfrom langflow.schema import Data\n\n\nclass AstraDBSearchComponent(LCVectorStoreComponent):\n display_name = \"Astra DB Search\"\n description = \"Searches an existing Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"input_value\", \"embedding\"]\n\n def build_config(self):\n return {\n \"search_type\": {\n \"display_name\": \"Search Type\",\n \"options\": [\"Similarity\", \"MMR\"],\n },\n \"input_value\": {\n \"display_name\": \"Input Value\",\n \"info\": \"Input value to search\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": 
\"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n \"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of data to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": True,\n },\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing data.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n \"info\": \"Configuration mode for setting up the vector store, with options like “Sync”, “Async”, or “Off”.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of 
metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n \"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n \"number_of_results\": {\n \"display_name\": \"Number of Results\",\n \"info\": \"Number of results to return.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n collection_name: str,\n input_value: Text,\n token: str,\n api_endpoint: str,\n search_type: str = \"Similarity\",\n number_of_results: int = 4,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n bulk_insert_overwrite_concurrency: Optional[int] = None,\n bulk_delete_concurrency: Optional[int] = None,\n setup_mode: str = \"Sync\",\n pre_delete_collection: bool = False,\n metadata_indexing_include: Optional[List[str]] = None,\n metadata_indexing_exclude: Optional[List[str]] = None,\n collection_indexing_policy: Optional[dict] = None,\n ) -> List[Data]:\n vector_store = AstraDBVectorStoreComponent().build(\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n try:\n return 
self.search_with_vector_store(input_value, search_type, vector_store, k=number_of_results)\n except KeyError as e:\n if \"content\" in str(e):\n raise ValueError(\n \"You should ingest data through Langflow (or LangChain) to query it in Langflow. Your collection does not contain a field name 'content'.\"\n )\n else:\n raise e\n" }, "collection_indexing_policy": { "advanced": true, @@ -2658,7 +2641,7 @@ "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Optional number of records to process in a single batch.", + "info": "Optional number of data to process in a single batch.", "list": false, "load_from_db": false, "multiline": false, @@ -2712,7 +2695,7 @@ "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Optional concurrency level for bulk insert operations that overwrite existing records.", + "info": "Optional concurrency level for bulk insert operations that overwrite existing data.", "list": false, "load_from_db": false, "multiline": false, @@ -2740,7 +2723,7 @@ "show": true, "title_case": false, "type": "code", - "value": "from typing import List, Optional, Union\n\nfrom langflow.custom import CustomComponent\nfrom langflow.field_typing import Embeddings, VectorStore\nfrom langflow.schema import Record\nfrom langchain_core.retrievers import BaseRetriever\n\n\nclass AstraDBVectorStoreComponent(CustomComponent):\n display_name = \"Astra DB\"\n description = \"Builds or loads an Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"inputs\", \"embedding\"]\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Inputs\",\n \"info\": \"Optional list of records to be processed and stored in the vector store.\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": \"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n 
\"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of records to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": True,\n },\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing records.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n \"info\": \"Configuration mode for setting up the vector store, with options like “Sync”, “Async”, or “Off”.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n 
\"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n token: str,\n api_endpoint: str,\n collection_name: str,\n inputs: Optional[List[Record]] = None,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n bulk_insert_overwrite_concurrency: Optional[int] = None,\n bulk_delete_concurrency: Optional[int] = None,\n setup_mode: str = \"Sync\",\n pre_delete_collection: bool = False,\n metadata_indexing_include: Optional[List[str]] = None,\n metadata_indexing_exclude: Optional[List[str]] = None,\n collection_indexing_policy: Optional[dict] = None,\n ) -> Union[VectorStore, BaseRetriever]:\n try:\n from langchain_astradb import AstraDBVectorStore\n from langchain_astradb.utils.astradb import SetupMode\n except ImportError:\n raise ImportError(\n \"Could not import langchain Astra DB integration package. 
\"\n \"Please install it with `pip install langchain-astradb`.\"\n )\n\n try:\n setup_mode_value = SetupMode[setup_mode.upper()]\n except KeyError:\n raise ValueError(f\"Invalid setup mode: {setup_mode}\")\n if inputs:\n documents = [_input.to_lc_document() for _input in inputs]\n\n vector_store = AstraDBVectorStore.from_documents(\n documents=documents,\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n else:\n vector_store = AstraDBVectorStore(\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n\n return vector_store\n return vector_store\n" + "value": "from typing import List, Optional, Union\n\nfrom langchain_core.retrievers import BaseRetriever\n\nfrom langflow.custom import CustomComponent\nfrom langflow.field_typing import Embeddings, VectorStore\nfrom langflow.schema import Data\n\n\nclass AstraDBVectorStoreComponent(CustomComponent):\n display_name = \"Astra DB\"\n 
description = \"Builds or loads an Astra DB Vector Store.\"\n icon = \"AstraDB\"\n field_order = [\"token\", \"api_endpoint\", \"collection_name\", \"inputs\", \"embedding\"]\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Inputs\",\n \"info\": \"Optional list of data to be processed and stored in the vector store.\",\n },\n \"embedding\": {\"display_name\": \"Embedding\", \"info\": \"Embedding to use\"},\n \"collection_name\": {\n \"display_name\": \"Collection Name\",\n \"info\": \"The name of the collection within Astra DB where the vectors will be stored.\",\n },\n \"token\": {\n \"display_name\": \"Token\",\n \"info\": \"Authentication token for accessing Astra DB.\",\n \"password\": True,\n },\n \"api_endpoint\": {\n \"display_name\": \"API Endpoint\",\n \"info\": \"API endpoint URL for the Astra DB service.\",\n },\n \"namespace\": {\n \"display_name\": \"Namespace\",\n \"info\": \"Optional namespace within Astra DB to use for the collection.\",\n \"advanced\": True,\n },\n \"metric\": {\n \"display_name\": \"Metric\",\n \"info\": \"Optional distance metric for vector comparisons in the vector store.\",\n \"advanced\": True,\n },\n \"batch_size\": {\n \"display_name\": \"Batch Size\",\n \"info\": \"Optional number of data to process in a single batch.\",\n \"advanced\": True,\n },\n \"bulk_insert_batch_concurrency\": {\n \"display_name\": \"Bulk Insert Batch Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations.\",\n \"advanced\": True,\n },\n \"bulk_insert_overwrite_concurrency\": {\n \"display_name\": \"Bulk Insert Overwrite Concurrency\",\n \"info\": \"Optional concurrency level for bulk insert operations that overwrite existing data.\",\n \"advanced\": True,\n },\n \"bulk_delete_concurrency\": {\n \"display_name\": \"Bulk Delete Concurrency\",\n \"info\": \"Optional concurrency level for bulk delete operations.\",\n \"advanced\": True,\n },\n \"setup_mode\": {\n \"display_name\": \"Setup Mode\",\n 
\"info\": \"Configuration mode for setting up the vector store, with options like “Sync”, “Async”, or “Off”.\",\n \"options\": [\"Sync\", \"Async\", \"Off\"],\n \"advanced\": True,\n },\n \"pre_delete_collection\": {\n \"display_name\": \"Pre Delete Collection\",\n \"info\": \"Boolean flag to determine whether to delete the collection before creating a new one.\",\n \"advanced\": True,\n },\n \"metadata_indexing_include\": {\n \"display_name\": \"Metadata Indexing Include\",\n \"info\": \"Optional list of metadata fields to include in the indexing.\",\n \"advanced\": True,\n },\n \"metadata_indexing_exclude\": {\n \"display_name\": \"Metadata Indexing Exclude\",\n \"info\": \"Optional list of metadata fields to exclude from the indexing.\",\n \"advanced\": True,\n },\n \"collection_indexing_policy\": {\n \"display_name\": \"Collection Indexing Policy\",\n \"info\": \"Optional dictionary defining the indexing policy for the collection.\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n embedding: Embeddings,\n token: str,\n api_endpoint: str,\n collection_name: str,\n inputs: Optional[List[Data]] = None,\n namespace: Optional[str] = None,\n metric: Optional[str] = None,\n batch_size: Optional[int] = None,\n bulk_insert_batch_concurrency: Optional[int] = None,\n bulk_insert_overwrite_concurrency: Optional[int] = None,\n bulk_delete_concurrency: Optional[int] = None,\n setup_mode: str = \"Sync\",\n pre_delete_collection: bool = False,\n metadata_indexing_include: Optional[List[str]] = None,\n metadata_indexing_exclude: Optional[List[str]] = None,\n collection_indexing_policy: Optional[dict] = None,\n ) -> Union[VectorStore, BaseRetriever]:\n try:\n from langchain_astradb import AstraDBVectorStore\n from langchain_astradb.utils.astradb import SetupMode\n except ImportError:\n raise ImportError(\n \"Could not import langchain Astra DB integration package. 
\"\n \"Please install it with `pip install langchain-astradb`.\"\n )\n\n try:\n setup_mode_value = SetupMode[setup_mode.upper()]\n except KeyError:\n raise ValueError(f\"Invalid setup mode: {setup_mode}\")\n if inputs:\n documents = [_input.to_lc_document() for _input in inputs]\n\n vector_store = AstraDBVectorStore.from_documents(\n documents=documents,\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n else:\n vector_store = AstraDBVectorStore(\n embedding=embedding,\n collection_name=collection_name,\n token=token,\n api_endpoint=api_endpoint,\n namespace=namespace,\n metric=metric,\n batch_size=batch_size,\n bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,\n bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,\n bulk_delete_concurrency=bulk_delete_concurrency,\n setup_mode=setup_mode_value,\n pre_delete_collection=pre_delete_collection,\n metadata_indexing_include=metadata_indexing_include,\n metadata_indexing_exclude=metadata_indexing_exclude,\n collection_indexing_policy=collection_indexing_policy,\n )\n\n return vector_store\n return vector_store\n" }, "collection_indexing_policy": { "advanced": true, @@ -2806,7 +2789,7 @@ "dynamic": false, "fileTypes": [], "file_path": "", - "info": "Optional list of records to be processed and stored in the vector store.", + "info": "Optional list of data to be processed and stored in the vector store.", "list": true, "load_from_db": false, "multiline": 
false, diff --git a/src/backend/base/langflow/interface/initialize/loading.py b/src/backend/base/langflow/interface/initialize/loading.py index d3d52d8b2..a8901c2c7 100644 --- a/src/backend/base/langflow/interface/initialize/loading.py +++ b/src/backend/base/langflow/interface/initialize/loading.py @@ -7,7 +7,7 @@ import orjson from loguru import logger from langflow.custom.eval import eval_custom_component_code -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.artifact import get_artifact_type, post_process_raw if TYPE_CHECKING: @@ -147,7 +147,7 @@ async def build_custom_component(params: dict, custom_component: "CustomComponen # Call the build method directly if it's sync build_result = custom_component.build(**params) custom_repr = custom_component.custom_repr() - if custom_repr is None and isinstance(build_result, (dict, Record, str)): + if custom_repr is None and isinstance(build_result, (dict, Data, str)): custom_repr = build_result if not isinstance(custom_repr, str): custom_repr = str(custom_repr) diff --git a/src/backend/base/langflow/memory.py b/src/backend/base/langflow/memory.py index e812f449c..6b2ceca50 100644 --- a/src/backend/base/langflow/memory.py +++ b/src/backend/base/langflow/memory.py @@ -27,7 +27,7 @@ def get_messages( limit (Optional[int]): The maximum number of messages to retrieve. Returns: - List[Record]: A list of Record objects representing the retrieved messages. + List[Data]: A list of Data objects representing the retrieved messages. """ monitor_service = get_monitor_service() messages_df = monitor_service.get_messages( @@ -113,7 +113,7 @@ def store_message( flow_id (Optional[str]): The flow ID associated with the message. When running from the CustomComponent you can access this using `self.graph.flow_id`. Returns: - List[Message]: A list of records containing the stored message. + List[Message]: A list of Message objects containing the stored message. 
Raises: ValueError: If any of the required parameters (session_id, sender, sender_name) is not provided. diff --git a/src/backend/base/langflow/schema/__init__.py b/src/backend/base/langflow/schema/__init__.py index 9f7e3b384..ae65fd05a 100644 --- a/src/backend/base/langflow/schema/__init__.py +++ b/src/backend/base/langflow/schema/__init__.py @@ -1,4 +1,4 @@ from .dotdict import dotdict -from .record import Record +from .data import Data -__all__ = ["Record", "dotdict"] +__all__ = ["Data", "dotdict"] diff --git a/src/backend/base/langflow/schema/artifact.py b/src/backend/base/langflow/schema/artifact.py index 1cf0cad18..45a68281d 100644 --- a/src/backend/base/langflow/schema/artifact.py +++ b/src/backend/base/langflow/schema/artifact.py @@ -1,7 +1,7 @@ from enum import Enum from typing import Generator -from langflow.schema import Record +from langflow.schema import Data from langflow.schema.message import Message @@ -18,7 +18,7 @@ class ArtifactType(str, Enum): def get_artifact_type(value, build_result=None) -> str: result = ArtifactType.UNKNOWN match value: - case Record(): + case Data(): result = ArtifactType.RECORD case str(): diff --git a/src/backend/base/langflow/schema/record.py b/src/backend/base/langflow/schema/data.py similarity index 84% rename from src/backend/base/langflow/schema/record.py rename to src/backend/base/langflow/schema/data.py index 67d9b5da8..ef3d37987 100644 --- a/src/backend/base/langflow/schema/record.py +++ b/src/backend/base/langflow/schema/data.py @@ -9,7 +9,7 @@ from langchain_core.prompts.image import ImagePromptTemplate from pydantic import BaseModel, model_serializer, model_validator -class Record(BaseModel): +class Data(BaseModel): """ Represents a record with text and optional data. 
@@ -49,44 +49,44 @@ class Record(BaseModel): return self.data.get(self.text_key, self.default_value) @classmethod - def from_document(cls, document: Document) -> "Record": + def from_document(cls, document: Document) -> "Data": """ - Converts a Document to a Record. + Converts a Document to a Data. Args: document (Document): The Document to convert. Returns: - Record: The converted Record. + Data: The converted Data. """ data = document.metadata data["text"] = document.page_content return cls(data=data, text_key="text") @classmethod - def from_lc_message(cls, message: BaseMessage) -> "Record": + def from_lc_message(cls, message: BaseMessage) -> "Data": """ - Converts a BaseMessage to a Record. + Converts a BaseMessage to a Data. Args: message (BaseMessage): The BaseMessage to convert. Returns: - Record: The converted Record. + Data: The converted Data. """ data: dict = {"text": message.content} data["metadata"] = cast(dict, message.to_json()) return cls(data=data, text_key="text") - def __add__(self, other: "Record") -> "Record": + def __add__(self, other: "Data") -> "Data": """ - Combines the data of two records by attempting to add values for overlapping keys + Combines the data of two Data objects by attempting to add values for overlapping keys for all types that support the addition operation. Falls back to the value from 'other' record when addition is not supported. """ combined_data = self.data.copy() for key, value in other.data.items(): - # If the key exists in both records and both values support the addition operation + # If the key exists in both Data objects and both values support the addition operation if key in combined_data: try: combined_data[key] += value @@ -97,11 +97,11 @@ class Record(BaseModel): # If the key is not in the first record, simply add it combined_data[key] = value - return Record(data=combined_data) + return Data(data=combined_data) def to_lc_document(self) -> Document: """ - Converts the Record to a Document. + Converts the Data to a Document. 
Returns: Document: The converted Document. @@ -113,18 +113,18 @@ class Record(BaseModel): self, ) -> HumanMessage | SystemMessage: """ - Converts the Record to a BaseMessage. + Converts the Data to a BaseMessage. Returns: BaseMessage: The converted BaseMessage. """ - # The idea of this function is to be a helper to convert a Record to a BaseMessage + # The idea of this function is to be a helper to convert a Data to a BaseMessage # It will use the "sender" key to determine if the message is Human or AI # If the key is not present, it will default to AI # But first we check if all required keys are present in the data dictionary # they are: "text", "sender" if not all(key in self.data for key in ["text", "sender"]): - raise ValueError(f"Missing required keys ('text', 'sender') in Record: {self.data}") + raise ValueError(f"Missing required keys ('text', 'sender') in Data: {self.data}") sender = self.data.get("sender", "Machine") text = self.data.get("text", "") files = self.data.get("files", []) @@ -181,17 +181,17 @@ class Record(BaseModel): def __deepcopy__(self, memo): """ - Custom deepcopy implementation to handle copying of the Record object. + Custom deepcopy implementation to handle copying of the Data object. 
""" - # Create a new Record object with a deep copy of the data dictionary - return Record(data=copy.deepcopy(self.data, memo), text_key=self.text_key, default_value=self.default_value) + # Create a new Data object with a deep copy of the data dictionary + return Data(data=copy.deepcopy(self.data, memo), text_key=self.text_key, default_value=self.default_value) - # check which attributes the Record has by checking the keys in the data dictionary + # check which attributes the Data has by checking the keys in the data dictionary def __dir__(self): return super().__dir__() + list(self.data.keys()) def __str__(self) -> str: - # return a JSON string representation of the Record atributes + # return a JSON string representation of the Data atributes try: data = {k: v.to_json() if hasattr(v, "to_json") else v for k, v in self.data.items()} return json.dumps(data, indent=4) @@ -202,4 +202,4 @@ class Record(BaseModel): return key in self.data def __eq__(self, other): - return isinstance(other, Record) and self.data == other.data + return isinstance(other, Data) and self.data == other.data diff --git a/src/backend/base/langflow/schema/message.py b/src/backend/base/langflow/schema/message.py index 865d684bf..33ccbb62e 100644 --- a/src/backend/base/langflow/schema/message.py +++ b/src/backend/base/langflow/schema/message.py @@ -7,7 +7,7 @@ from langchain_core.prompts.image import ImagePromptTemplate from pydantic import BaseModel, BeforeValidator, ConfigDict, Field, field_serializer from langflow.schema.image import Image, get_file_paths, is_image_file -from langflow.schema.record import Record +from langflow.schema.data import Data def _timestamp_to_str(timestamp: datetime) -> str: @@ -40,12 +40,12 @@ class Message(BaseModel): self, ) -> BaseMessage: """ - Converts the Record to a BaseMessage. + Converts the Data to a BaseMessage. Returns: BaseMessage: The converted BaseMessage. 
""" - # The idea of this function is to be a helper to convert a Record to a BaseMessage + # The idea of this function is to be a helper to convert a Data to a BaseMessage # It will use the "sender" key to determine if the message is Human or AI # If the key is not present, it will default to AI # But first we check if all required keys are present in the data dictionary @@ -68,15 +68,15 @@ class Message(BaseModel): return AIMessage(content=self.text) @classmethod - def from_record(cls, record: Record) -> "Message": + def from_record(cls, record: Data) -> "Message": """ - Converts a BaseMessage to a Record. + Converts a BaseMessage to a Data. Args: record (BaseMessage): The BaseMessage to convert. Returns: - Record: The converted Record. + Data: The converted Data. """ return cls( diff --git a/src/backend/base/langflow/services/database/models/flow/model.py b/src/backend/base/langflow/services/database/models/flow/model.py index 05953e736..23b330af9 100644 --- a/src/backend/base/langflow/services/database/models/flow/model.py +++ b/src/backend/base/langflow/services/database/models/flow/model.py @@ -13,7 +13,7 @@ from pydantic import field_serializer, field_validator from sqlalchemy import UniqueConstraint from sqlmodel import JSON, Column, Field, Relationship, SQLModel -from langflow.schema import Record +from langflow.schema import Data if TYPE_CHECKING: from langflow.services.database.models.folder import Folder @@ -151,7 +151,7 @@ class Flow(FlowBase, table=True): "description": serialized.pop("description"), "updated_at": serialized.pop("updated_at"), } - record = Record(data=data) + record = Data(data=data) return record __table_args__ = ( diff --git a/src/backend/base/langflow/template/field/prompt.py b/src/backend/base/langflow/template/field/prompt.py index f8f3b51bd..0df5a77a1 100644 --- a/src/backend/base/langflow/template/field/prompt.py +++ b/src/backend/base/langflow/template/field/prompt.py @@ -2,7 +2,7 @@ from typing import Optional from 
langflow.template.field.base import Input -DEFAULT_PROMPT_INTUT_TYPES = ["Document", "Message", "Record", "Text"] +DEFAULT_PROMPT_INTUT_TYPES = ["Document", "Message", "Data", "Text"] class DefaultPromptField(Input): diff --git a/src/backend/base/langflow/utils/schemas.py b/src/backend/base/langflow/utils/schemas.py index 647941f59..a55965ff4 100644 --- a/src/backend/base/langflow/utils/schemas.py +++ b/src/backend/base/langflow/utils/schemas.py @@ -98,9 +98,9 @@ class ChatOutputResponse(BaseModel): class RecordOutputResponse(BaseModel): - """Record output response schema.""" + """Data output response schema.""" - records: List[Optional[Dict]] + data: List[Optional[Dict]] class ContainsEnumMeta(enum.EnumMeta): diff --git a/src/backend/base/langflow/utils/util.py b/src/backend/base/langflow/utils/util.py index 89b44bd0e..688ccb36e 100644 --- a/src/backend/base/langflow/utils/util.py +++ b/src/backend/base/langflow/utils/util.py @@ -7,7 +7,7 @@ from typing import Any, Dict, List, Optional, Union from docstring_parser import parse -from langflow.schema import Record +from langflow.schema import Data from langflow.services.deps import get_settings_service from langflow.template.frontend_node.constants import FORCE_SHOW_FIELDS from langflow.utils import constants @@ -400,23 +400,23 @@ def add_options_to_field(value: Dict[str, Any], class_name: Optional[str], key: value["value"] = options_map[class_name][0] -def build_loader_repr_from_records(records: List[Record]) -> str: +def build_loader_repr_from_data(data: List[Data]) -> str: """ - Builds a string representation of the loader based on the given records. + Builds a string representation of the loader based on the given data. Args: - records (List[Record]): A list of records. + data (List[Data]): A list of Data objects. Returns: str: A string representation of the loader. """ - if records: - avg_length = sum(len(doc.text) for doc in records) / len(records) - return f"""{len(records)} records - \nAvg.
Record Length (characters): {int(avg_length)} - Records: {records[:3]}...""" - return "0 records" + if data: + avg_length = sum(len(doc.text) for doc in data) / len(data) + return f"""{len(data)} data + \nAvg. Data Length (characters): {int(avg_length)} + Data: {data[:3]}...""" + return "0 data" def update_settings( diff --git a/src/frontend/src/CustomNodes/GenericNode/components/outputModal/components/switchOutputView/index.tsx b/src/frontend/src/CustomNodes/GenericNode/components/outputModal/components/switchOutputView/index.tsx index b699bc6e1..bb4fc34ca 100644 --- a/src/frontend/src/CustomNodes/GenericNode/components/outputModal/components/switchOutputView/index.tsx +++ b/src/frontend/src/CustomNodes/GenericNode/components/outputModal/components/switchOutputView/index.tsx @@ -1,5 +1,5 @@ +import RecordsOutputComponent from "../../../../../../components/dataOutputComponent"; import ForwardedIconComponent from "../../../../../../components/genericIconComponent"; -import RecordsOutputComponent from "../../../../../../components/recordsOutputComponent"; import { Alert, AlertDescription, diff --git a/src/frontend/src/components/recordsOutputComponent/index.tsx b/src/frontend/src/components/dataOutputComponent/index.tsx similarity index 96% rename from src/frontend/src/components/recordsOutputComponent/index.tsx rename to src/frontend/src/components/dataOutputComponent/index.tsx index 957a43cc7..c0e03b556 100644 --- a/src/frontend/src/components/recordsOutputComponent/index.tsx +++ b/src/frontend/src/components/dataOutputComponent/index.tsx @@ -25,7 +25,7 @@ function RecordsOutputComponent({ return ( - (artifact) => artifact.data, + (artifact) => artifact.data ) ??
[] : [flowPoolNode?.data?.artifacts] } diff --git a/src/frontend/src/utils/styleUtils.ts b/src/frontend/src/utils/styleUtils.ts index 518c23656..8b1a9fd45 100644 --- a/src/frontend/src/utils/styleUtils.ts +++ b/src/frontend/src/utils/styleUtils.ts @@ -273,7 +273,7 @@ export const nodeColors: { [char: string]: string } = { unknown: "#9CA3AF", custom_components: "#ab11ab", Records: "#31a3cc", - Record: "#31a3cc", + Data: "#31a3cc", Message: "#4367BF", }; diff --git a/tests/test_data_components.py b/tests/test_data_components.py index 975c81567..7e8e9187f 100644 --- a/tests/test_data_components.py +++ b/tests/test_data_components.py @@ -7,7 +7,6 @@ import pytest import respx from dictdiffer import diff from httpx import Response - from langflow.components import data @@ -109,11 +108,11 @@ async def test_build_with_multiple_urls(api_request): assert len(results) == len(urls) -@patch("langflow.components.data.Directory.parallel_load_records") +@patch("langflow.components.data.Directory.parallel_load_data") @patch("langflow.components.data.Directory.retrieve_file_paths") @patch("langflow.components.data.DirectoryComponent.resolve_path") def test_directory_component_build_with_multithreading( - mock_resolve_path, mock_retrieve_file_paths, mock_parallel_load_records + mock_resolve_path, mock_retrieve_file_paths, mock_parallel_load_data ): # Arrange directory_component = data.DirectoryComponent() @@ -129,7 +128,7 @@ def test_directory_component_build_with_multithreading( mock_retrieve_file_paths.return_value = [ os.path.join(path, file) for file in os.listdir(path) if file.endswith(".py") ] - mock_parallel_load_records.return_value = [Mock()] + mock_parallel_load_data.return_value = [Mock()] # Act directory_component.build( @@ -145,7 +144,7 @@ def test_directory_component_build_with_multithreading( # Assert mock_resolve_path.assert_called_once_with(path) mock_retrieve_file_paths.assert_called_once_with(path, load_hidden, recursive, depth) - 
mock_parallel_load_records.assert_called_once_with( + mock_parallel_load_data.assert_called_once_with( mock_retrieve_file_paths.return_value, silent_errors, max_concurrency ) @@ -163,7 +162,7 @@ def test_directory_without_mocks(): setup_path = Path(setup.__file__).parent / "starter_projects" results = directory_component.build(str(setup_path), use_multithreading=False) assert len(results) == len(projects) - # each result is a Record that contains the content attribute + # each result is a Data that contains the content attribute # each are dict that are exactly the same as one of the projects for i, result in enumerate(results): assert result.text in projects, list(diff(result.text, projects[i])) @@ -180,7 +179,7 @@ def test_directory_without_mocks(): def test_url_component(): url_component = data.URLComponent() # the url component can be used to load the contents of a website - records = url_component.build(["https://langflow.org"]) - assert all(record.data for record in records) - assert all(record.text for record in records) - assert all(record.source for record in records) + _data = url_component.build(["https://langflow.org"]) + assert all(value.data for value in _data) + assert all(value.text for value in _data) + assert all(value.source for value in _data) diff --git a/tests/test_endpoints.py b/tests/test_endpoints.py index ab016ab17..cea0bb16f 100644 --- a/tests/test_endpoints.py +++ b/tests/test_endpoints.py @@ -4,6 +4,7 @@ from uuid import UUID, uuid4 import pytest from fastapi import status from fastapi.testclient import TestClient + from langflow.custom.directory_reader.directory_reader import DirectoryReader from langflow.services.deps import get_settings_service @@ -479,7 +480,7 @@ def test_successful_run_with_output_type_text(client, starter_project, created_a display_names = [output.get("component_display_name") for output in outputs_dict.get("outputs")] assert all([name in display_names for name in ["Chat Output"]]), display_names inner_results = 
[output.get("results") for output in outputs_dict.get("outputs")] - expected_keys = ["Record", "Message"] + expected_keys = ["Data", "Message"] assert all([key in result for result in inner_results for key in expected_keys]), outputs_dict @@ -510,7 +511,7 @@ def test_successful_run_with_output_type_any(client, starter_project, created_ap display_names = [output.get("component_display_name") for output in outputs_dict.get("outputs")] assert all([name in display_names for name in ["Chat Output"]]), display_names inner_results = [output.get("results") for output in outputs_dict.get("outputs")] - expected_keys = ["Record", "Message"] + expected_keys = ["Data", "Message"] assert all([key in result for result in inner_results for key in expected_keys]), outputs_dict diff --git a/tests/test_helper_components.py b/tests/test_helper_components.py index 01a69de74..9c2523316 100644 --- a/tests/test_helper_components.py +++ b/tests/test_helper_components.py @@ -1,8 +1,7 @@ from langchain_core.documents import Document - from langflow.components import helpers from langflow.custom.utils import build_custom_component_template -from langflow.schema import Record +from langflow.schema import Data def test_update_record_component(): @@ -11,7 +10,7 @@ def test_update_record_component(): # Act new_data = {"new_key": "new_value"} - existing_record = Record(data={"existing_key": "existing_value"}) + existing_record = Data(data={"existing_key": "existing_value"}) result = update_record_component.build(existing_record, new_data) assert result.data == {"existing_key": "existing_value", "new_key": "new_value"} assert result.existing_key == "existing_value" @@ -29,7 +28,7 @@ def test_document_to_record_component(): # Assert # Replace with your actual expected result - assert result == [Record(data={"text": "key: value", "url": "https://example.com"})] + assert result == [Data(data={"text": "key: value", "url": "https://example.com"})] def test_uuid_generator_component(): @@ -52,15 +51,15 @@ 
def test_uuid_generator_component(): assert len(result) == 36 -def test_records_as_text_component(): +def test_data_as_text_component(): # Arrange - records_as_text_component = helpers.RecordsToTextComponent() + data_as_text_component = helpers.RecordsToTextComponent() # Act # Replace with your actual test data - records = [Record(data={"key": "value", "bacon": "eggs"})] + data = [Data(data={"key": "value", "bacon": "eggs"})] template = "Data:{data} -- Bacon:{bacon}" - result = records_as_text_component.build(records, template=template) + result = data_as_text_component.build(data, template=template) # Assert # Replace with your actual expected result @@ -78,4 +77,4 @@ def test_text_to_record_component(): # Assert # Replace with your actual expected result - assert result == Record(data={"key": "value"}) + assert result == Data(data={"key": "value"}) diff --git a/tests/test_record.py b/tests/test_record.py index 45afaa5af..e070a55c4 100644 --- a/tests/test_record.py +++ b/tests/test_record.py @@ -1,23 +1,22 @@ from langchain_core.documents import Document - -from langflow.schema import Record +from langflow.schema import Data def test_record_initialization(): - record = Record(text_key="msg", data={"msg": "Hello, World!", "extra": "value"}) + record = Data(text_key="msg", data={"msg": "Hello, World!", "extra": "value"}) assert record.msg == "Hello, World!" 
assert record.extra == "value" def test_validate_data_with_extra_keys(): - record = Record(dummy_key="dummy", data={"key": "value"}) + record = Data(dummy_key="dummy", data={"key": "value"}) assert record.data["dummy_key"] == "dummy" assert "dummy_key" in record.data assert record.key == "value" def test_conversion_to_document(): - record = Record(data={"text": "Sample text", "meta": "data"}) + record = Data(data={"text": "Sample text", "meta": "data"}) document = record.to_lc_document() assert document.page_content == "Sample text" assert document.metadata == {"meta": "data"} @@ -25,35 +24,35 @@ def test_conversion_to_document(): def test_conversion_from_document(): document = Document(page_content="Doc content", metadata={"meta": "info"}) - record = Record.from_document(document) + record = Data.from_document(document) assert record.text == "Doc content" assert record.meta == "info" def test_add_method_for_strings(): - record1 = Record(data={"text": "Hello"}) - record2 = Record(data={"text": " World"}) + record1 = Data(data={"text": "Hello"}) + record2 = Data(data={"text": " World"}) combined = record1 + record2 assert combined.text == "Hello World" def test_add_method_for_integers(): - record1 = Record(data={"number": 5}) - record2 = Record(data={"number": 10}) + record1 = Data(data={"number": 5}) + record2 = Data(data={"number": 10}) combined = record1 + record2 assert combined.number == 15 def test_add_method_with_non_overlapping_keys(): - record1 = Record(data={"text": "Hello"}) - record2 = Record(data={"number": 10}) + record1 = Data(data={"text": "Hello"}) + record2 = Data(data={"number": 10}) combined = record1 + record2 assert combined.text == "Hello" assert combined.number == 10 def test_custom_attribute_get_set_del(): - record = Record() + record = Data() record.custom_attr = "custom_value" assert record.custom_attr == "custom_value" del record.custom_attr @@ -63,7 +62,7 @@ def test_custom_attribute_get_set_del(): def test_deep_copy(): import copy - 
record1 = Record(data={"text": "Hello", "number": 10}) + record1 = Data(data={"text": "Hello", "number": 10}) record2 = copy.deepcopy(record1) assert record2.text == "Hello" assert record2.number == 10 @@ -72,20 +71,20 @@ def test_deep_copy(): def test_custom_attribute_setting_and_getting(): - record = Record() + record = Data() record.dynamic_attribute = "Dynamic Value" assert record.dynamic_attribute == "Dynamic Value" def test_str_and_dir_methods(): - record = Record(text_key="text", data={"text": "Test Text", "key": "value"}) + record = Data(text_key="text", data={"text": "Test Text", "key": "value"}) assert "Test Text" in str(record) assert "key" in dir(record) assert "data" in dir(record) def test_dir_includes_data_keys(): - record = Record(data={"text": "Hello", "new_attr": "value"}) + record = Data(data={"text": "Hello", "new_attr": "value"}) dir_output = dir(record) # Check for standard attributes @@ -103,7 +102,7 @@ def test_dir_includes_data_keys(): def test_dir_reflects_attribute_deletion(): - record = Record(data={"removable": "I can be removed"}) + record = Data(data={"removable": "I can be removed"}) assert "removable" in dir(record) # Delete the attribute and check again @@ -113,27 +112,27 @@ def test_dir_reflects_attribute_deletion(): def test_get_text_with_text_key(): data = {"text": "Hello, World!"} - schema = Record(data=data, text_key="text", default_value="default") + schema = Data(data=data, text_key="text", default_value="default") result = schema.get_text() assert result == "Hello, World!" 
def test_get_text_without_text_key(): data = {"other_key": "Hello, World!"} - schema = Record(data=data, text_key="text", default_value="default") + schema = Data(data=data, text_key="text", default_value="default") result = schema.get_text() assert result == "default" def test_get_text_with_empty_data(): data = {} - schema = Record(data=data, text_key="text", default_value="default") + schema = Data(data=data, text_key="text", default_value="default") result = schema.get_text() assert result == "default" def test_get_text_with_none_data(): data = None - schema = Record(data=data, text_key="text", default_value="default") + schema = Data(data=data, text_key="text", default_value="default") result = schema.get_text() assert result == "default"