rename Record to Data

This commit is contained in:
ogabrielluiz 2024-06-12 17:58:16 -03:00
commit 1d0056f4fc
142 changed files with 819 additions and 869 deletions


@@ -24,15 +24,15 @@ Provide the session ID to clear its message history.
 ---
-## Extract Key From Record
+## Extract Key From Data
 This component extracts specified keys from a record.
 **Parameters**
-- **Record:**
+- **Data:**
-- **Display Name:** Record
+- **Display Name:** Data
 - **Info:** The record from which to extract keys.
 - **Keys:**
@@ -138,9 +138,9 @@ This component generates a notification.
 - **Display Name:** Name
 - **Info:** The notification's name.
-- **Record:**
+- **Data:**
-- **Display Name:** Record
+- **Display Name:** Data
 - **Info:** Optionally, a record to store in the notification.
 - **Append:**


@@ -13,7 +13,7 @@ This component retrieves stored chat messages based on a specific session ID.
 - **Number of messages:** Number of messages to retrieve.
 - **Session ID:** The session ID of the chat history.
 - **Order:** Choose the message order, either "Ascending" or "Descending".
-- **Record template:** (Optional) Template to convert a record to text. If left empty, the system dynamically sets it to the record's text key.
+- **Data template:** (Optional) Template to convert a record to text. If left empty, the system dynamically sets it to the record's text key.
 ---
@@ -124,5 +124,5 @@ Update a record with text-based key/value pairs, similar to updating a Python di
 #### Parameters
-- **Record:** The record to update.
+- **Data:** The record to update.
 - **New data:** The new data to update the record with.
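The update described in this hunk behaves like Python's `dict.update`. A minimal sketch, assuming the `Data` payload can be treated as a plain dict (the values are invented for illustration):

```python
# A Data payload, modeled as a plain dict (assumption for illustration).
record = {"name": "John Doe", "age": 30}

# New text-based key/value pairs to merge in.
new_data = {"age": 31, "city": "Lisbon"}

# Existing keys are overwritten and new keys are added,
# exactly like updating a Python dictionary.
record.update(new_data)
```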


@@ -8,11 +8,11 @@ They also dynamically change the Playground and can be renamed to facilitate bui
 ## Inputs
-Inputs are components used to define where data enters your flow. They can receive data from the user, a database, or any other source that can be converted to Text or Record.
+Inputs are components used to define where data enters your flow. They can receive data from the user, a database, or any other source that can be converted to Text or Data.
 The difference between Chat Input and other Input components is the output format, the number of configurable fields, and the way they are displayed in the Playground.
-Chat Input components can output `Text` or `Record`. When you want to pass the sender name or sender to the next component, use the `Record` output. To pass only the message, use the `Text` output, useful when saving the message to a database or memory system like Zep.
+Chat Input components can output `Text` or `Data`. When you want to pass the sender name or sender to the next component, use the `Data` output. To pass only the message, use the `Text` output, useful when saving the message to a database or memory system like Zep.
 You can find out more about Chat Input and other Inputs [here](#chat-input).
@@ -38,8 +38,8 @@ This component collects user input from the chat.
 <Admonition type="note" title="Note">
 <p>
-If `As Record` is `true` and the `Message` is a `Record`, the data of the
-`Record` will be updated with the `Sender`, `Sender Name`, and `Session ID`.
+If `As Data` is `true` and the `Message` is a `Data`, the data of the `Data`
+will be updated with the `Sender`, `Sender Name`, and `Session ID`.
 </p>
 </Admonition>
@@ -70,11 +70,11 @@ The **Text Input** component adds an **Input** field on the Playground. This ena
 **Parameters**
 - **Value:** Specifies the text input value. This is where the user inputs text data that will be passed to the next component in the sequence. If no value is provided, it defaults to an empty string.
-- **Record Template:** Specifies how a `Record` should be converted into `Text`.
+- **Data Template:** Specifies how a `Data` should be converted into `Text`.
-The **Record Template** field is used to specify how a `Record` should be converted into `Text`. This is particularly useful when you want to extract specific information from a `Record` and pass it as text to the next component in the sequence.
+The **Data Template** field is used to specify how a `Data` should be converted into `Text`. This is particularly useful when you want to extract specific information from a `Data` and pass it as text to the next component in the sequence.
-For example, if you have a `Record` with the following structure:
+For example, if you have a `Data` with the following structure:
 ```json
 {
@@ -84,9 +84,9 @@ For example, if you have a `Record` with the following structure:
 }
 ```
-A template with `Name: {name}, Age: {age}` will convert the `Record` into a text string of `Name: John Doe, Age: 30`.
+A template with `Name: {name}, Age: {age}` will convert the `Data` into a text string of `Name: John Doe, Age: 30`.
-If you pass more than one `Record`, the text will be concatenated with a new line separator.
+If you pass more than one `Data`, the text will be concatenated with a new line separator.
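The template behavior described in this hunk can be sketched with Python's `str.format_map`; the dicts here stand in for `Data` payloads (an assumption, since `Data` is Langflow's own class):

```python
# The template from the example above.
template = "Name: {name}, Age: {age}"

# A single Data payload, modeled as a plain dict (assumption).
data = {"name": "John Doe", "age": 30}
text = template.format_map(data)  # "Name: John Doe, Age: 30"

# Passing more than one Data concatenates the rendered lines with a newline.
records = [{"name": "Alice", "age": 25}, {"name": "Bob", "age": 40}]
combined = "\n".join(template.format_map(r) for r in records)
```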
 ## Outputs
@@ -112,8 +112,8 @@ This component sends a message to the chat.
 <Admonition type="note" title="Note">
 <p>
-If `As Record` is `true` and the `Message` is a `Record`, the data in the
-`Record` is updated with the `Sender`, `Sender Name`, and `Session ID`.
+If `As Data` is `true` and the `Message` is a `Data`, the data in the `Data`
+is updated with the `Sender`, `Sender Name`, and `Session ID`.
 </p>
 </Admonition>
@@ -154,4 +154,5 @@ The `PromptTemplate` component enables users to create prompts and define variab
 After defining a variable in the prompt template, it acts as its own component input. See [Prompt Customization](../administration/prompt-customization) for more details.
 - **template:** The template used to format an individual request.
import Admonition from "@theme/Admonition";
import ZoomableImage from "/src/theme/ZoomableImage.js";


@@ -1,14 +1,14 @@
-# Text and Record
+# Text and Data
-In Langflow 1.0, we added two main input and output types: `Text` and `Record`.
+In Langflow 1.0, we added two main input and output types: `Text` and `Data`.
-`Text` is a simple string input and output type, while `Record` is a structure very similar to a dictionary in Python. It is a key-value pair data structure.
+`Text` is a simple string input and output type, while `Data` is a structure very similar to a dictionary in Python. It is a key-value pair data structure.
 We've created a few components to help you work with these types. Let's see how a few of them work.
 ## Records To Text
-This is a component that takes in Records and outputs a `Text`. It does this using a template string and concatenating the values of the `Record`, one per line.
+This is a component that takes in Records and outputs a `Text`. It does this using a template string and concatenating the values of the `Data`, one per line.
 If we have the following Records:
@@ -32,13 +32,13 @@ Alice: Hello!
 John: Hi!
 ```
-## Create Record
+## Create Data
-This component allows you to create a `Record` from a number of inputs. You can add as many key-value pairs as you want (as long as it is less than 15). Once you've picked that number you'll need to write the name of the Key and can pass `Text` values from other components to it.
+This component allows you to create a `Data` from a number of inputs. You can add as many key-value pairs as you want (as long as it is less than 15). Once you've picked that number you'll need to write the name of the Key and can pass `Text` values from other components to it.
 ## Documents To Records
-This component takes in a LangChain `Document` and outputs a `Record`. It does this by extracting the `page_content` and the `metadata` from the `Document` and adding them to the `Record` as text and data respectively.
+This component takes in a LangChain `Document` and outputs a `Data`. It does this by extracting the `page_content` and the `metadata` from the `Document` and adding them to the `Data` as text and data respectively.
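The `Document`-to-`Data` mapping described in this hunk can be sketched as follows; both classes here are hypothetical stand-ins for LangChain's `Document` and Langflow's `Data`:

```python
class Document:
    # Minimal stand-in for langchain_core's Document (assumption).
    def __init__(self, page_content: str, metadata: dict):
        self.page_content = page_content
        self.metadata = metadata

def document_to_data(doc: Document) -> dict:
    # page_content becomes the text; metadata becomes the structured data.
    return {"text": doc.page_content, "data": doc.metadata}

doc = Document(page_content="Hello!", metadata={"source": "greeting.txt"})
result = document_to_data(doc)
```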
## Why is this useful?


@@ -4,14 +4,18 @@ import ZoomableImage from "/src/theme/ZoomableImage.js";
 import ReactPlayer from "react-player";
 import Admonition from "@theme/Admonition";
-# Create Record
+# Create Data
-In Langflow, a `Record` has a structure very similar to a Python dictionary. It is a key-value pair data structure.
+In Langflow, a `Data` has a structure very similar to a Python dictionary. It is a key-value pair data structure.
-The **Create Record** component allows you to dynamically create a `Record` from a specified number of inputs. You can add as many key-value pairs as you want (as long as it is less than 15 😅). Once you've chosen the number of `Records`, add keys and fill up values, or pass on values from other components to the component using the input handles.
+The **Create Data** component allows you to dynamically create a `Data` from a specified number of inputs. You can add as many key-value pairs as you want (as long as it is less than 15 😅). Once you've chosen the number of `Records`, add keys and fill up values, or pass on values from other components to the component using the input handles.
 <div
 style={{ marginBottom: "20px", display: "flex", justifyContent: "center" }}
 >
 <ReactPlayer playing controls url="/videos/create_record.mp4" />
 </div>
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";
import ZoomableImage from "/src/theme/ZoomableImage.js";
import ReactPlayer from "react-player";
import Admonition from "@theme/Admonition";


@@ -33,7 +33,7 @@ import requests
 from typing import Dict
 from langflow import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 class NotionDatabaseProperties(CustomComponent):
@@ -61,7 +61,7 @@ class NotionDatabaseProperties(CustomComponent):
 self,
 database_id: str,
 notion_secret: str,
-) -> Record:
+) -> Data:
 url = f"https://api.notion.com/v1/databases/{database_id}"
 headers = {
 "Authorization": f"Bearer {notion_secret}",
@@ -74,7 +74,7 @@ class NotionDatabaseProperties(CustomComponent):
 data = response.json()
 properties = data.get("properties", {})
-record = Record(text=str(response.json()), data=properties)
+record = Data(text=str(response.json()), data=properties)
 self.status = f"Retrieved {len(properties)} properties from the Notion database.\n {record.text}"
 return record
 ```


@@ -39,7 +39,7 @@ import requests
 import json
 from typing import Dict, Any, List
 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 class NotionListPages(CustomComponent):
 display_name = "List Pages [Notion]"
@@ -83,7 +83,7 @@ class NotionListPages(CustomComponent):
 notion_secret: str,
 database_id: str,
 query_payload: str = "{}",
-) -> List[Record]:
+) -> List[Data]:
 try:
 query_data = json.loads(query_payload)
 filter_obj = query_data.get("filter")
@@ -127,14 +127,14 @@ class NotionListPages(CustomComponent):
 )
 combined_text += text
-records.append(Record(text=text, data=page_data))
+records.append(Data(text=text, data=page_data))
 self.status = combined_text.strip()
 return records
 except Exception as e:
 self.status = f"An error occurred: {str(e)}"
-return [Record(text=self.status, data=[])]
+return [Data(text=self.status, data=[])]
 ```
<Admonition type="info" title="Example Usage">


@@ -30,7 +30,7 @@ import requests
 from typing import List
 from langflow import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 class NotionUserList(CustomComponent):
@@ -52,7 +52,7 @@ class NotionUserList(CustomComponent):
 def build(
 self,
 notion_secret: str,
-) -> List[Record]:
+) -> List[Data]:
 url = "https://api.notion.com/v1/users"
 headers = {
 "Authorization": f"Bearer {notion_secret}",
@@ -84,7 +84,7 @@ class NotionUserList(CustomComponent):
 output += f"{key.replace('_', ' ').title()}: {value}\n"
 output += "________________________\n"
-record = Record(text=output, data=record_data)
+record = Data(text=output, data=record_data)
 records.append(record)
 self.status = "\n".join(record.text for record in records)


@@ -36,7 +36,7 @@ import requests
 from typing import Dict, Any
 from langflow import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 class NotionPageContent(CustomComponent):
@@ -64,7 +64,7 @@ class NotionPageContent(CustomComponent):
 self,
 page_id: str,
 notion_secret: str,
-) -> Record:
+) -> Data:
 blocks_url = f"https://api.notion.com/v1/blocks/{page_id}/children?page_size=100"
 headers = {
 "Authorization": f"Bearer {notion_secret}",
@@ -80,7 +80,7 @@ class NotionPageContent(CustomComponent):
 content = self.parse_blocks(blocks_data["results"])
 self.status = content
-return Record(data={"content": content}, text=content)
+return Data(data={"content": content}, text=content)
 def parse_blocks(self, blocks: list) -> str:
 content = ""


@@ -26,7 +26,7 @@ import requests
 from typing import Dict, Any
 from langflow import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 class NotionPageUpdate(CustomComponent):
@@ -61,7 +61,7 @@ class NotionPageUpdate(CustomComponent):
 page_id: str,
 properties: str,
 notion_secret: str,
-) -> Record:
+) -> Data:
 url = f"https://api.notion.com/v1/pages/{page_id}"
 headers = {
 "Authorization": f"Bearer {notion_secret}",
@@ -88,7 +88,7 @@ class NotionPageUpdate(CustomComponent):
 output += f"{prop_name}: {prop_value}\n"
 self.status = output
-return Record(data=updated_page)
+return Data(data=updated_page)
 ```
 Let's break down the key parts of this component:
@@ -99,7 +99,7 @@ Let's break down the key parts of this component:
 - The component interacts with the Notion API to update the page properties. It constructs the API URL, headers, and request data based on the provided parameters.
-- The processed data is returned as a `Record` object, which can be connected to other components in the Langflow flow. The `Record` object contains the updated page data.
+- The processed data is returned as a `Data` object, which can be connected to other components in the Langflow flow. The `Data` object contains the updated page data.
 - The component also stores the updated page properties in the `status` attribute for logging and debugging purposes.


@@ -36,7 +36,7 @@ To use the `NotionSearch` component in a Langflow flow, follow these steps:
 import requests
 from typing import Dict, Any, List
 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 class NotionSearch(CustomComponent):
 display_name = "Search Notion"
@@ -88,7 +88,7 @@ class NotionSearch(CustomComponent):
 query: str = "",
 filter_value: str = "page",
 sort_direction: str = "descending",
-) -> List[Record]:
+) -> List[Data]:
 try:
 url = "https://api.notion.com/v1/search"
 headers = {
@@ -135,14 +135,14 @@ class NotionSearch(CustomComponent):
 text += f"type: {result['object']}\nlast_edited_time: {result['last_edited_time']}\n\n"
 combined_text += text
-records.append(Record(text=text, data=result_data))
+records.append(Data(text=text, data=result_data))
 self.status = combined_text
 return records
 except Exception as e:
 self.status = f"An error occurred: {str(e)}"
-return [Record(text=self.status, data=[])]
+return [Data(text=self.status, data=[])]
 ```
 ## Example Usage


@@ -16,7 +16,7 @@ We have a special channel in our Discord server dedicated to Langflow 1.0 migrat
 - Continued support for LangChain and new support for multiple frameworks
 - Redesigned sidebar and customizable interaction panel
 - New Native Categories and Components
-- Improved user experience with Text and Record modes
+- Improved user experience with Text and Data modes
 - CustomComponent for all components
 - Compatibility with previous versions using Runnable Executor
 - Multiple flows in the canvas
@@ -58,11 +58,11 @@ Langflow 1.0 introduces many new native categories, including Inputs, Outputs, H
 **Guide coming soon**
-## New Way of Using Langflow: Text and Record (and more to come)
+## New Way of Using Langflow: Text and Data (and more to come)
-With the introduction of Text and Record types connections between Components are more intuitive and easier to understand. This is the first step in a series of improvements to the way you interact with Langflow. Learn how to use Text, and Record and how they help you build better flows.
+With the introduction of Text and Data types connections between Components are more intuitive and easier to understand. This is the first step in a series of improvements to the way you interact with Langflow. Learn how to use Text, and Data and how they help you build better flows.
-[Learn more about Text and Record](../components/text-and-record)
+[Learn more about Text and Data](../components/text-and-record)
 ## CustomComponent for All Components


@@ -4,10 +4,10 @@ from langchain.agents import AgentExecutor, BaseMultiActionAgent, BaseSingleActi
 from langchain_core.messages import BaseMessage
 from langchain_core.runnables import Runnable
-from langflow.base.agents.utils import get_agents_list, records_to_messages
+from langflow.base.agents.utils import data_to_messages, get_agents_list
 from langflow.custom import CustomComponent
 from langflow.field_typing import Text, Tool
-from langflow.schema import Record
+from langflow.schema import Data
 class LCAgentComponent(CustomComponent):
@@ -49,7 +49,7 @@ class LCAgentComponent(CustomComponent):
 agent: Union[Runnable, BaseSingleActionAgent, BaseMultiActionAgent, AgentExecutor],
 inputs: str,
 tools: List[Tool],
-message_history: Optional[List[Record]] = None,
+message_history: Optional[List[Data]] = None,
 handle_parsing_errors: bool = True,
 output_key: str = "output",
 ) -> Text:
@@ -64,7 +64,7 @@ class LCAgentComponent(CustomComponent):
 )
 input_dict: dict[str, str | list[BaseMessage]] = {"input": inputs}
 if message_history:
-input_dict["chat_history"] = records_to_messages(message_history)
+input_dict["chat_history"] = data_to_messages(message_history)
 result = await runnable.ainvoke(input_dict)
 self.status = result
 if output_key in result:


@@ -13,7 +13,7 @@ from langchain_core.prompts import BasePromptTemplate, ChatPromptTemplate
 from langchain_core.tools import BaseTool
 from pydantic import BaseModel
-from langflow.schema import Record
+from langflow.schema import Data
 from .default_prompts import XML_AGENT_PROMPT
@@ -34,17 +34,17 @@ class AgentSpec(BaseModel):
 hub_repo: Optional[str] = None
-def records_to_messages(records: List[Record]) -> List[BaseMessage]:
+def data_to_messages(data: List[Data]) -> List[BaseMessage]:
 """
-Convert a list of records to a list of messages.
+Convert a list of data to a list of messages.
 Args:
-records (List[Record]): The records to convert.
+data (List[Data]): The data to convert.
 Returns:
-List[Message]: The records as messages.
+List[Message]: The data as messages.
 """
-return [record.to_lc_message() for record in records]
+return [value.to_lc_message() for value in data]
 def validate_and_create_xml_agent(


@@ -8,7 +8,7 @@ import chardet
 import orjson
 import yaml
-from langflow.schema import Record
+from langflow.schema import Data
 # Types of files that can be read simply by file.read()
 # and have 100% to be completely readable
@@ -82,7 +82,7 @@ def retrieve_file_paths(
 # ! Removing unstructured dependency until
 # ! 3.12 is supported
-# def partition_file_to_record(file_path: str, silent_errors: bool) -> Optional[Record]:
+# def partition_file_to_record(file_path: str, silent_errors: bool) -> Optional[Data]:
 # # Use the partition function to load the file
 # from unstructured.partition.auto import partition # type: ignore
@@ -93,11 +93,11 @@ def retrieve_file_paths(
 # raise ValueError(f"Error loading file {file_path}: {e}") from e
 # return None
-# # Create a Record
+# # Create a Data
 # text = "\n\n".join([Text(el) for el in elements])
 # metadata = elements.metadata if hasattr(elements, "metadata") else {}
 # metadata["file_path"] = file_path
-# record = Record(text=text, data=metadata)
+# record = Data(text=text, data=metadata)
 # return record
@@ -129,7 +129,7 @@ def parse_pdf_to_text(file_path: str) -> str:
 return "\n\n".join([page.extract_text() for page in reader.pages])
-def parse_text_file_to_record(file_path: str, silent_errors: bool) -> Optional[Record]:
+def parse_text_file_to_record(file_path: str, silent_errors: bool) -> Optional[Data]:
 try:
 if file_path.endswith(".pdf"):
 text = parse_pdf_to_text(file_path)
@@ -156,7 +156,7 @@ def parse_text_file_to_record(file_path: str, silent_errors: bool) -> Optional[R
 raise ValueError(f"Error loading file {file_path}: {e}") from e
 return None
-record = Record(data={"file_path": file_path, "text": text})
+record = Data(data={"file_path": file_path, "text": text})
 return record
@@ -167,21 +167,21 @@ def parse_text_file_to_record(file_path: str, silent_errors: bool) -> Optional[R
 # silent_errors: bool,
 # max_concurrency: int,
 # use_multithreading: bool,
-# ) -> List[Optional[Record]]:
+# ) -> List[Optional[Data]]:
 # if use_multithreading:
-# records = parallel_load_records(file_paths, silent_errors, max_concurrency)
+# data = parallel_load_data(file_paths, silent_errors, max_concurrency)
 # else:
-# records = [partition_file_to_record(file_path, silent_errors) for file_path in file_paths]
-# records = list(filter(None, records))
-# return records
+# data = [partition_file_to_record(file_path, silent_errors) for file_path in file_paths]
+# data = list(filter(None, data))
+# return data
-def parallel_load_records(
+def parallel_load_data(
 file_paths: List[str],
 silent_errors: bool,
 max_concurrency: int,
 load_function: Callable = parse_text_file_to_record,
-) -> List[Optional[Record]]:
+) -> List[Optional[Data]]:
 with futures.ThreadPoolExecutor(max_workers=max_concurrency) as executor:
 loaded_files = executor.map(
 lambda file_path: load_function(file_path, silent_errors),
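The `parallel_load_data` hunk above fans a load function out over a thread pool. A self-contained sketch of the same pattern; `parse_file` is a stand-in for the real parser, not Langflow's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def parse_file(file_path, silent_errors):
    # Stand-in for parse_text_file_to_record (hypothetical): one dict per file.
    return {"file_path": file_path, "text": f"contents of {file_path}"}

def parallel_load_data(file_paths, silent_errors=False, max_concurrency=4, load_function=parse_file):
    # One task per file path, bounded by max_concurrency worker threads.
    with ThreadPoolExecutor(max_workers=max_concurrency) as executor:
        loaded_files = list(executor.map(lambda p: load_function(p, silent_errors), file_paths))
    # Drop files that failed to load (returned None).
    return [item for item in loaded_files if item is not None]

results = parallel_load_data(["a.txt", "b.txt"])
```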


@@ -1,67 +1,67 @@
 from typing import List
 from langflow.graph.schema import ResultData, RunOutputs
-from langflow.schema import Record
+from langflow.schema import Data
-def build_records_from_run_outputs(run_outputs: RunOutputs) -> List[Record]:
+def build_data_from_run_outputs(run_outputs: RunOutputs) -> List[Data]:
 """
-Build a list of records from the given RunOutputs.
+Build a list of data from the given RunOutputs.
 Args:
 run_outputs (RunOutputs): The RunOutputs object containing the output data.
 Returns:
-List[Record]: A list of records built from the RunOutputs.
+List[Data]: A list of data built from the RunOutputs.
 """
 if not run_outputs:
 return []
-records = []
+data = []
 for result_data in run_outputs.outputs:
 if result_data:
-records.extend(build_records_from_result_data(result_data))
-return records
+data.extend(build_data_from_result_data(result_data))
+return data
-def build_records_from_result_data(result_data: ResultData, get_final_results_only: bool = True) -> List[Record]:
+def build_data_from_result_data(result_data: ResultData, get_final_results_only: bool = True) -> List[Data]:
 """
-Build a list of records from the given ResultData.
+Build a list of data from the given ResultData.
 Args:
 result_data (ResultData): The ResultData object containing the result data.
 get_final_results_only (bool, optional): Whether to include only final results. Defaults to True.
 Returns:
-List[Record]: A list of records built from the ResultData.
+List[Data]: A list of data built from the ResultData.
 """
 messages = result_data.messages
 if not messages:
 return []
-records = []
+data = []
 for message in messages:
 message_dict = message if isinstance(message, dict) else message.model_dump()
 if get_final_results_only:
 result_data_dict = result_data.model_dump()
 results = result_data_dict.get("results", {})
 inner_result = results.get("result", {})
-record = Record(data={"result": inner_result, "message": message_dict}, text_key="result")
-records.append(record)
-return records
+record = Data(data={"result": inner_result, "message": message_dict}, text_key="result")
+data.append(record)
+return data
-def format_flow_output_records(records: List[Record]) -> str:
+def format_flow_output_data(data: List[Data]) -> str:
 """
-Format the flow output records into a string.
+Format the flow output data into a string.
 Args:
-records (List[Record]): The list of records to format.
+data (List[Data]): The list of data to format.
 Returns:
-str: The formatted flow output records.
+str: The formatted flow output data.
 """
 result = "Flow run output:\n"
-results = "\n".join([record.result for record in records if record.data["message"]])
+results = "\n".join([value.result for value in data if value.data["message"]])
 return result + results


@@ -3,7 +3,7 @@ from typing import Optional, Union
 from langflow.base.data.utils import IMG_FILE_TYPES, TEXT_FILE_TYPES
 from langflow.custom import Component
 from langflow.memory import store_message
-from langflow.schema import Record
+from langflow.schema import Data
 from langflow.schema.message import Message
@@ -35,9 +35,9 @@ class ChatComponent(Component):
 "advanced": True,
 },
 "record_template": {
-"display_name": "Record Template",
+"display_name": "Data Template",
 "multiline": True,
-"info": "In case of Message being a Record, this template will be used to convert it to text.",
+"info": "In case of Message being a Data, this template will be used to convert it to text.",
 "advanced": True,
 },
 "files": {
@@ -65,14 +65,14 @@ class ChatComponent(Component):
 self,
 sender: Optional[str] = "User",
 sender_name: Optional[str] = "User",
-input_value: Optional[Union[str, Record, Message]] = None,
+input_value: Optional[Union[str, Data, Message]] = None,
 files: Optional[list[str]] = None,
 session_id: Optional[str] = None,
 return_message: Optional[bool] = False,
 ) -> Message:
 message: Message | None = None
-if isinstance(input_value, Record):
+if isinstance(input_value, Data):
 # Update the data of the record
 message = Message.from_record(input_value)
 else:


@@ -2,8 +2,8 @@ from typing import Optional
 from langflow.custom import Component
 from langflow.field_typing import Text
-from langflow.helpers.record import records_to_text
-from langflow.schema import Record
+from langflow.helpers.record import data_to_text
+from langflow.schema import Data
 class TextComponent(Component):
@@ -14,13 +14,13 @@ class TextComponent(Component):
 return {
 "input_value": {
 "display_name": "Value",
-"input_types": ["Text", "Record"],
-"info": "Text or Record to be passed.",
+"input_types": ["Text", "Data"],
+"info": "Text or Data to be passed.",
 },
 "record_template": {
-"display_name": "Record Template",
+"display_name": "Data Template",
 "multiline": True,
-"info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
+"info": "Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.",
 "advanced": True,
 },
 }
@@ -30,12 +30,12 @@ class TextComponent(Component):
 input_value: Optional[Text] = "",
 record_template: Optional[str] = "{text}",
 ) -> Text:
-if isinstance(input_value, Record):
+if isinstance(input_value, Data):
 if record_template == "":
-# it should be dynamically set to the Record's .text_key value
+# it should be dynamically set to the Data's .text_key value
 # meaning, if text_key = "bacon", then record_template = "{bacon}"
 record_template = "{" + input_value.text_key + "}"
-input_value = records_to_text(template=record_template, records=input_value)
+input_value = data_to_text(template=record_template, data=input_value)
 self.status = input_value
 if not input_value:
 input_value = ""


@@ -1,7 +1,7 @@
 from typing import Optional
 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 class BaseMemoryComponent(CustomComponent):
@@ -33,14 +33,14 @@ class BaseMemoryComponent(CustomComponent):
 "advanced": True,
 },
 "record_template": {
-"display_name": "Record Template",
+"display_name": "Data Template",
 "multiline": True,
-"info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
+"info": "Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.",
 "advanced": True,
 },
 }
-def get_messages(self, **kwargs) -> list[Record]:
+def get_messages(self, **kwargs) -> list[Data]:
 raise NotImplementedError
 def add_message(


@@ -2,16 +2,16 @@ from copy import deepcopy
 from langchain_core.documents import Document
-from langflow.schema import Record
+from langflow.schema import Data
 from langflow.schema.message import Message
-def record_to_string(record: Record) -> str:
+def record_to_string(record: Data) -> str:
 """
 Convert a record to a string.
 Args:
-record (Record): The record to convert.
+record (Data): The record to convert.
 Returns:
 str: The record as a string.
@@ -32,18 +32,18 @@ def dict_values_to_string(d: dict) -> dict:
 # Do something similar to the above
 d_copy = deepcopy(d)
 for key, value in d_copy.items():
-# it could be a list of records or documents or strings
+# it could be a list of data or documents or strings
 if isinstance(value, list):
 for i, item in enumerate(value):
 if isinstance(item, Message):
 d_copy[key][i] = item.text
-elif isinstance(item, Record):
+elif isinstance(item, Data):
 d_copy[key][i] = record_to_string(item)
 elif isinstance(item, Document):
 d_copy[key][i] = document_to_string(item)
 elif isinstance(value, Message):
 d_copy[key] = value.text
-elif isinstance(value, Record):
+elif isinstance(value, Data):
 d_copy[key] = record_to_string(value)
 elif isinstance(value, Document):
 d_copy[key] = document_to_string(value)


@@ -6,7 +6,7 @@ from langchain_core.runnables import RunnableConfig
 from langchain_core.tools import ToolException
 from pydantic.v1 import BaseModel
-from langflow.base.flow_processing.utils import build_records_from_result_data, format_flow_output_records
+from langflow.base.flow_processing.utils import build_data_from_result_data, format_flow_output_data
 from langflow.graph.graph.base import Graph
 from langflow.graph.vertex.base import Vertex
 from langflow.helpers.flow import build_schema_from_inputs, get_arg_names, get_flow_inputs, run_flow
@@ -59,14 +59,12 @@ class FlowTool(BaseTool):
 return "No output"
 run_output = run_outputs[0]
-records = []
+data = []
 if run_output is not None:
 for output in run_output.outputs:
 if output:
-records.extend(
-build_records_from_result_data(output, get_final_results_only=self.get_final_results_only)
-)
-return format_flow_output_records(records)
+data.extend(build_data_from_result_data(output, get_final_results_only=self.get_final_results_only))
+return format_flow_output_data(data)
 def validate_inputs(self, args_names: List[dict[str, str]], args: Any, kwargs: Any):
 """Validate the inputs."""
@@ -107,11 +105,9 @@ class FlowTool(BaseTool):
 return "No output"
 run_output = run_outputs[0]
-records = []
+data = []
 if run_output is not None:
 for output in run_output.outputs:
 if output:
-records.extend(
-build_records_from_result_data(output, get_final_results_only=self.get_final_results_only)
-)
-return format_flow_output_records(records)
+data.extend(build_data_from_result_data(output, get_final_results_only=self.get_final_results_only))
+return format_flow_output_data(data)


@ -1,17 +1,17 @@
from langflow.schema import Record
from langflow.schema import Data
def chroma_collection_to_records(collection_dict: dict):
def chroma_collection_to_data(collection_dict: dict):
"""
Converts a collection of chroma vectors into a list of records.
Converts a collection of chroma vectors into a list of data.
Args:
collection_dict (dict): A dictionary containing the collection of chroma vectors.
Returns:
list: A list of records, where each record represents a document in the collection.
list: A list of data, where each record represents a document in the collection.
"""
records = []
data = []
for i, doc in enumerate(collection_dict["documents"]):
record_dict = {
"id": collection_dict["ids"][i],
@ -20,5 +20,5 @@ def chroma_collection_to_records(collection_dict: dict):
if "metadatas" in collection_dict:
for key, value in collection_dict["metadatas"][i].items():
record_dict[key] = value
records.append(Record(**record_dict))
return records
data.append(Data(**record_dict))
return data
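The hunk elides the middle of `record_dict`; the sketch below assumes the elided line maps the document text to a `"text"` key, and uses a simplified `Data` stand-in rather than langflow's real pydantic model:

```python
# Simplified stand-in for langflow.schema.Data.
class Data:
    def __init__(self, **kwargs):
        self.data = kwargs

def chroma_collection_to_data(collection_dict):
    data = []
    for i, doc in enumerate(collection_dict["documents"]):
        # The "text" mapping is assumed; that line is elided in the hunk above.
        record_dict = {"id": collection_dict["ids"][i], "text": doc}
        if "metadatas" in collection_dict:
            for key, value in collection_dict["metadatas"][i].items():
                record_dict[key] = value
        data.append(Data(**record_dict))
    return data
```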

View file

@ -5,7 +5,7 @@ from langchain_core.prompts import ChatPromptTemplate
from langflow.base.agents.agent import LCAgentComponent
from langflow.field_typing import BaseLanguageModel, Text, Tool
from langflow.schema import Record
from langflow.schema import Data
class ToolCallingAgentComponent(LCAgentComponent):
@ -42,7 +42,7 @@ class ToolCallingAgentComponent(LCAgentComponent):
llm: BaseLanguageModel,
tools: List[Tool],
user_prompt: str = "{input}",
message_history: Optional[List[Record]] = None,
message_history: Optional[List[Data]] = None,
system_message: str = "You are a helpful assistant",
handle_parsing_errors: bool = True,
) -> Text:

View file

@ -5,7 +5,7 @@ from langchain_core.prompts import ChatPromptTemplate
from langflow.base.agents.agent import LCAgentComponent
from langflow.field_typing import BaseLanguageModel, Text, Tool
from langflow.schema import Record
from langflow.schema import Data
class XMLAgentComponent(LCAgentComponent):
@ -76,7 +76,7 @@ class XMLAgentComponent(LCAgentComponent):
tools: List[Tool],
user_prompt: str = "{input}",
system_message: str = "You are a helpful assistant",
message_history: Optional[List[Record]] = None,
message_history: Optional[List[Data]] = None,
tool_template: str = "{name}: {description}",
handle_parsing_errors: bool = True,
) -> Text:

View file

@ -5,7 +5,7 @@ from langchain_core.documents import Document
from langflow.custom import CustomComponent
from langflow.field_typing import BaseLanguageModel, BaseMemory, BaseRetriever, Text
from langflow.schema import Record
from langflow.schema import Data
class RetrievalQAComponent(CustomComponent):
@ -23,7 +23,7 @@ class RetrievalQAComponent(CustomComponent):
"return_source_documents": {"display_name": "Return Source Documents"},
"input_value": {
"display_name": "Input",
"input_types": ["Record", "Document"],
"input_types": ["Data", "Document"],
},
}
@ -50,17 +50,17 @@ class RetrievalQAComponent(CustomComponent):
)
if isinstance(input_value, Document):
input_value = input_value.page_content
if isinstance(input_value, Record):
if isinstance(input_value, Data):
input_value = input_value.get_text()
self.status = runnable
result = runnable.invoke({input_key: input_value})
result = result.content if hasattr(result, "content") else result
# Result is a dict with keys "query", "result" and "source_documents"
# for now we just return the result
records = self.to_records(result.get("source_documents"))
data = self.to_data(result.get("source_documents"))
references_str = ""
if return_source_documents:
references_str = self.create_references_from_records(records)
references_str = self.create_references_from_data(data)
result_str = result.get("result", "")
final_result = "\n".join([Text(result_str), references_str])

View file

@ -53,10 +53,10 @@ class RetrievalQAWithSourcesChainComponent(CustomComponent):
result = result.content if hasattr(result, "content") else result
# Result is a dict with keys "query", "result" and "source_documents"
# for now we just return the result
records = self.to_records(result.get("source_documents"))
data = self.to_data(result.get("source_documents"))
references_str = ""
if return_source_documents:
references_str = self.create_references_from_records(records)
references_str = self.create_references_from_data(data)
result_str = Text(result.get("answer", ""))
final_result = "\n".join([result_str, references_str])
self.status = final_result

View file

@ -8,14 +8,14 @@ from loguru import logger
from langflow.base.curl.parse import parse_context
from langflow.custom import CustomComponent
from langflow.field_typing import NestedDict
from langflow.schema import Record
from langflow.schema import Data
from langflow.schema.dotdict import dotdict
class APIRequest(CustomComponent):
display_name: str = "API Request"
description: str = "Make HTTP requests given one or more URLs."
output_types: list[str] = ["Record"]
output_types: list[str] = ["Data"]
documentation: str = "https://docs.langflow.org/components/utilities#api-request"
icon = "Globe"
@ -36,12 +36,12 @@ class APIRequest(CustomComponent):
"headers": {
"display_name": "Headers",
"info": "The headers to send with the request.",
"input_types": ["Record"],
"input_types": ["Data"],
},
"body": {
"display_name": "Body",
"info": "The body to send with the request (for POST, PATCH, PUT).",
"input_types": ["Record"],
"input_types": ["Data"],
},
"timeout": {
"display_name": "Timeout",
@ -80,7 +80,7 @@ class APIRequest(CustomComponent):
headers: Optional[dict] = None,
body: Optional[dict] = None,
timeout: int = 5,
) -> Record:
) -> Data:
method = method.upper()
if method not in ["GET", "POST", "PATCH", "PUT", "DELETE"]:
raise ValueError(f"Unsupported method: {method}")
@ -93,7 +93,7 @@ class APIRequest(CustomComponent):
result = response.json()
except Exception:
result = response.text
return Record(
return Data(
data={
"source": url,
"headers": headers,
@ -102,7 +102,7 @@ class APIRequest(CustomComponent):
},
)
except httpx.TimeoutException:
return Record(
return Data(
data={
"source": url,
"headers": headers,
@ -111,7 +111,7 @@ class APIRequest(CustomComponent):
},
)
except Exception as exc:
return Record(
return Data(
data={
"source": url,
"headers": headers,
@ -128,10 +128,10 @@ class APIRequest(CustomComponent):
headers: Optional[NestedDict] = {},
body: Optional[NestedDict] = {},
timeout: int = 5,
) -> List[Record]:
) -> List[Data]:
if headers is None:
headers_dict = {}
elif isinstance(headers, Record):
elif isinstance(headers, Data):
headers_dict = headers.data
else:
headers_dict = headers
@ -142,7 +142,7 @@ class APIRequest(CustomComponent):
bodies = [body]
else:
bodies = body
bodies = [b.data if isinstance(b, Record) else b for b in bodies] # type: ignore
bodies = [b.data if isinstance(b, Data) else b for b in bodies] # type: ignore
if len(urls) != len(bodies):
# add bodies with None
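The batch branch above normalizes `headers` and `body`, each of which may arrive as a `Data` object or a plain dict. A runnable sketch of that normalization, with `Data` as a simplified stand-in and the two helper names introduced here for illustration:

```python
# Simplified stand-in for langflow.schema.Data.
class Data:
    def __init__(self, data=None):
        self.data = data or {}

def normalize_headers(headers):
    # None -> {}, Data -> its inner dict, plain dict passes through.
    if headers is None:
        return {}
    if isinstance(headers, Data):
        return headers.data
    return headers

def normalize_bodies(body):
    # A single body becomes a one-element list; Data objects are unwrapped.
    bodies = body if isinstance(body, list) else [body]
    return [b.data if isinstance(b, Data) else b for b in bodies]
```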

View file

@ -1,8 +1,8 @@
from typing import Any, Dict, List, Optional
from langflow.base.data.utils import parallel_load_records, parse_text_file_to_record, retrieve_file_paths
from langflow.base.data.utils import parallel_load_data, parse_text_file_to_record, retrieve_file_paths
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class DirectoryComponent(CustomComponent):
@ -49,15 +49,15 @@ class DirectoryComponent(CustomComponent):
recursive: bool = True,
silent_errors: bool = False,
use_multithreading: bool = True,
) -> List[Optional[Record]]:
) -> List[Optional[Data]]:
resolved_path = self.resolve_path(path)
file_paths = retrieve_file_paths(resolved_path, load_hidden, recursive, depth)
loaded_records = []
loaded_data = []
if use_multithreading:
loaded_records = parallel_load_records(file_paths, silent_errors, max_concurrency)
loaded_data = parallel_load_data(file_paths, silent_errors, max_concurrency)
else:
loaded_records = [parse_text_file_to_record(file_path, silent_errors) for file_path in file_paths]
loaded_records = list(filter(None, loaded_records))
self.status = loaded_records
return loaded_records
loaded_data = [parse_text_file_to_record(file_path, silent_errors) for file_path in file_paths]
loaded_data = list(filter(None, loaded_data))
self.status = loaded_data
return loaded_data

View file

@ -3,7 +3,7 @@ from typing import Any, Dict
from langflow.base.data.utils import TEXT_FILE_TYPES, parse_text_file_to_record
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class FileComponent(CustomComponent):
@ -26,7 +26,7 @@ class FileComponent(CustomComponent):
},
}
def load_file(self, path: str, silent_errors: bool = False) -> Record:
def load_file(self, path: str, silent_errors: bool = False) -> Data:
resolved_path = self.resolve_path(path)
path_obj = Path(resolved_path)
extension = path_obj.suffix[1:].lower()
@ -36,13 +36,13 @@ class FileComponent(CustomComponent):
raise ValueError(f"Unsupported file type: {extension}")
record = parse_text_file_to_record(resolved_path, silent_errors)
self.status = record if record else "No data"
return record or Record()
return record or Data()
def build(
self,
path: str,
silent_errors: bool = False,
) -> Record:
) -> Data:
record = self.load_file(path, silent_errors)
self.status = record
return record

View file

@ -3,7 +3,7 @@ from typing import Any, Dict
from langchain_community.document_loaders.web_base import WebBaseLoader
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class URLComponent(CustomComponent):
@ -19,9 +19,9 @@ class URLComponent(CustomComponent):
def build(
self,
urls: list[str],
) -> list[Record]:
) -> list[Data]:
loader = WebBaseLoader(web_paths=[url for url in urls if url])
docs = loader.load()
records = self.to_records(docs)
self.status = records
return records
data = self.to_data(docs)
self.status = data
return data

View file

@ -3,7 +3,7 @@ import uuid
from typing import Any, Optional
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
from langflow.schema.dotdict import dotdict
@ -25,14 +25,14 @@ class WebhookComponent(CustomComponent):
}
}
def build(self, data: Optional[str] = "") -> Record:
def build(self, data: Optional[str] = "") -> Data:
message = ""
try:
body = json.loads(data or "{}")
except json.JSONDecodeError:
body = {"payload": data}
message = f"Invalid JSON payload. Please check the format.\n\n{data}"
record = Record(data=body)
record = Data(data=body)
if not message:
message = json.dumps(body, indent=2)
self.status = message
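The webhook hunk falls back to wrapping the raw payload when the body is not valid JSON. A self-contained sketch of that parse-or-wrap behaviour, with `Data` as a simplified stand-in:

```python
import json

# Simplified stand-in for langflow.schema.Data.
class Data:
    def __init__(self, data=None):
        self.data = data or {}

def build_webhook_data(payload=""):
    # Invalid JSON falls back to {"payload": raw}, as in the hunk above.
    try:
        body = json.loads(payload or "{}")
        message = json.dumps(body, indent=2)
    except json.JSONDecodeError:
        body = {"payload": payload}
        message = f"Invalid JSON payload. Please check the format.\n\n{payload}"
    return Data(data=body), message
```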

View file

@ -6,7 +6,7 @@ from langchain_core.prompts.chat import HumanMessagePromptTemplate, SystemMessag
from langflow.base.agents.agent import LCAgentComponent
from langflow.base.agents.utils import AGENTS, AgentSpec, get_agents_list
from langflow.field_typing import BaseLanguageModel, Text, Tool
from langflow.schema import Record
from langflow.schema import Data
from langflow.schema.dotdict import dotdict
@ -149,7 +149,7 @@ class AgentComponent(LCAgentComponent):
tools: List[Tool],
system_message: str = "You are a helpful assistant. Help the user answer any questions.",
user_prompt: str = "{input}",
message_history: Optional[List[Record]] = None,
message_history: Optional[List[Data]] = None,
tool_template: str = "{name}: {description}",
handle_parsing_errors: bool = True,
) -> Text:

View file

@ -21,6 +21,6 @@ class ClearMessageHistoryComponent(CustomComponent):
session_id: str,
) -> None:
delete_messages(session_id=session_id)
records = get_messages(session_id=session_id)
self.records = records
return records
data = get_messages(session_id=session_id)
self.data = data
return data

View file

@ -1,6 +1,6 @@
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.field_typing import Embeddings
from langflow.schema import Data
class EmbedComponent(CustomComponent):
@ -10,6 +10,6 @@ class EmbedComponent(CustomComponent):
return {"texts": {"display_name": "Texts"}, "embbedings": {"display_name": "Embeddings"}}
def build(self, texts: list[str], embbedings: Embeddings) -> Embeddings:
vectors = Record(vector=embbedings.embed_documents(texts))
vectors = Data(vector=embbedings.embed_documents(texts))
self.status = vectors
return vectors

View file

@ -1,14 +1,14 @@
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class ExtractKeyFromRecordComponent(CustomComponent):
display_name = "Extract Key From Record"
display_name = "Extract Key From Data"
description = "Extracts a key from a record."
beta: bool = True
field_config = {
"record": {"display_name": "Record"},
"record": {"display_name": "Data"},
"keys": {
"display_name": "Keys",
"info": "The keys to extract from the record.",
@ -21,12 +21,12 @@ class ExtractKeyFromRecordComponent(CustomComponent):
},
}
def build(self, record: Record, keys: list[str], silent_error: bool = True) -> Record:
def build(self, record: Data, keys: list[str], silent_error: bool = True) -> Data:
"""
Extracts the keys from a record.
Args:
record (Record): The record from which to extract the keys.
record (Data): The record from which to extract the keys.
keys (list[str]): The keys to extract from the record.
silent_error (bool): If True, errors will not be raised.
@ -40,6 +40,6 @@ class ExtractKeyFromRecordComponent(CustomComponent):
except AttributeError:
if not silent_error:
raise KeyError(f"The key '{key}' does not exist in the record.")
return_record = Record(data=extracted_keys)
return_record = Data(data=extracted_keys)
self.status = return_record
return return_record
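The extraction loop above reads keys as attributes and either skips or raises on a miss, depending on `silent_error`. A minimal sketch of that behaviour, with `Data` as a simplified stand-in that exposes its dict keys as attributes:

```python
# Simplified stand-in for langflow.schema.Data; keys are readable as attributes.
class Data:
    def __init__(self, data=None):
        self.data = data or {}

    def __getattr__(self, key):
        try:
            return self.data[key]
        except KeyError:
            raise AttributeError(key)

def extract_keys(record, keys, silent_error=True):
    extracted = {}
    for key in keys:
        try:
            extracted[key] = getattr(record, key)
        except AttributeError:
            if not silent_error:
                raise KeyError(f"The key '{key}' does not exist in the record.")
    return Data(data=extracted)
```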

View file

@ -7,7 +7,7 @@ from langflow.custom import CustomComponent
from langflow.field_typing import Tool
from langflow.graph.graph.base import Graph
from langflow.helpers.flow import get_flow_inputs
from langflow.schema import Record
from langflow.schema import Data
from langflow.schema.dotdict import dotdict
@ -17,10 +17,10 @@ class FlowToolComponent(CustomComponent):
field_order = ["flow_name", "name", "description", "return_direct"]
def get_flow_names(self) -> List[str]:
flow_records = self.list_flows()
return [flow_record.data["name"] for flow_record in flow_records]
flow_data = self.list_flows()
return [flow_record.data["name"] for flow_record in flow_data]
def get_flow(self, flow_name: str) -> Optional[Record]:
def get_flow(self, flow_name: str) -> Optional[Data]:
"""
Retrieves a flow by its name.
@ -30,8 +30,8 @@ class FlowToolComponent(CustomComponent):
Returns:
Optional[Text]: The flow record if found, None otherwise.
"""
flow_records = self.list_flows()
for flow_record in flow_records:
flow_data = self.list_flows()
for flow_record in flow_data:
if flow_record.data["name"] == flow_name:
return flow_record
return None

View file

@ -1,7 +1,7 @@
from typing import List
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class ListFlowsComponent(CustomComponent):
@ -15,7 +15,7 @@ class ListFlowsComponent(CustomComponent):
def build(
self,
) -> List[Record]:
) -> List[Data]:
flows = self.list_flows()
self.status = flows
return flows

View file

@ -1,5 +1,5 @@
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class ListenComponent(CustomComponent):
@ -15,7 +15,7 @@ class ListenComponent(CustomComponent):
},
}
def build(self, name: str) -> Record:
def build(self, name: str) -> Data:
state = self.get_state(name)
self.status = state
return state

View file

@ -1,36 +1,36 @@
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class MergeRecordsComponent(CustomComponent):
display_name = "Merge Records"
description = "Merges records."
description = "Merges data."
beta: bool = True
field_config = {
"records": {"display_name": "Records"},
"data": {"display_name": "Records"},
}
def build(self, records: list[Record]) -> Record:
if not records:
return Record()
if len(records) == 1:
return records[0]
merged_record = Record()
for record in records:
def build(self, data: list[Data]) -> Data:
if not data:
return Data()
if len(data) == 1:
return data[0]
merged_record = Data()
for value in data:
if merged_record is None:
merged_record = record
merged_record = value
else:
merged_record += record
merged_record += value
self.status = merged_record
return merged_record
if __name__ == "__main__":
records = [
Record(data={"key1": "value1"}),
Record(data={"key2": "value2"}),
data = [
Data(data={"key1": "value1"}),
Data(data={"key2": "value2"}),
]
component = MergeRecordsComponent()
result = component.build(records)
result = component.build(data)
print(result)
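The `__main__` demo above exercises the `merged_record += value` accumulation. A runnable sketch of the assumed merge semantics (later keys overwrite earlier ones, dict-style), with `Data` as a simplified stand-in:

```python
# Simplified stand-in for langflow.schema.Data with dict-style merging.
class Data:
    def __init__(self, data=None):
        self.data = dict(data or {})

    def __iadd__(self, other):
        # Assumed semantics: merge like dict.update, later keys win.
        self.data.update(other.data)
        return self

def merge_data(data):
    if not data:
        return Data()
    if len(data) == 1:
        return data[0]
    merged = Data()
    for value in data:
        merged += value
    return merged
```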

View file

@ -1,7 +1,7 @@
from typing import Optional
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class NotifyComponent(CustomComponent):
@ -13,23 +13,23 @@ class NotifyComponent(CustomComponent):
def build_config(self):
return {
"name": {"display_name": "Name", "info": "The name of the notification."},
"record": {"display_name": "Record", "info": "The record to store."},
"record": {"display_name": "Data", "info": "The record to store."},
"append": {
"display_name": "Append",
"info": "If True, the record will be appended to the notification.",
},
}
def build(self, name: str, record: Optional[Record] = None, append: bool = False) -> Record:
if record and not isinstance(record, Record):
def build(self, name: str, record: Optional[Data] = None, append: bool = False) -> Data:
if record and not isinstance(record, Data):
if isinstance(record, str):
record = Record(text=record)
record = Data(text=record)
elif isinstance(record, dict):
record = Record(data=record)
record = Data(data=record)
else:
record = Record(text=str(record))
record = Data(text=str(record))
elif not record:
record = Record(text="")
record = Data(text="")
if record:
if append:
self.append_state(name, record)

View file

@ -2,7 +2,7 @@ from typing import Union
from langflow.custom import CustomComponent
from langflow.field_typing import Text
from langflow.schema import Record
from langflow.schema import Data
class PassComponent(CustomComponent):
@ -15,16 +15,16 @@ class PassComponent(CustomComponent):
"ignored_input": {
"display_name": "Ignored Input",
"info": "This input is ignored. It's used to control the flow in the graph.",
"input_types": ["Text", "Record"],
"input_types": ["Text", "Data"],
},
"forwarded_input": {
"display_name": "Input",
"info": "This input is forwarded by the component.",
"input_types": ["Text", "Record"],
"input_types": ["Text", "Data"],
},
}
def build(self, ignored_input: Text, forwarded_input: Text) -> Union[Text, Record]:
def build(self, ignored_input: Text, forwarded_input: Text) -> Union[Text, Data]:
# The ignored_input is not used in the logic, it's just there for graph flow control
self.status = forwarded_input
return forwarded_input

View file

@ -1,10 +1,10 @@
from typing import Any, List, Optional
from langflow.base.flow_processing.utils import build_records_from_run_outputs
from langflow.base.flow_processing.utils import build_data_from_run_outputs
from langflow.custom import CustomComponent
from langflow.field_typing import NestedDict, Text
from langflow.graph.schema import RunOutputs
from langflow.schema import Record, dotdict
from langflow.schema import Data, dotdict
class RunFlowComponent(CustomComponent):
@ -13,8 +13,8 @@ class RunFlowComponent(CustomComponent):
beta: bool = True
def get_flow_names(self) -> List[str]:
flow_records = self.list_flows()
return [flow_record.data["name"] for flow_record in flow_records]
flow_data = self.list_flows()
return [flow_record.data["name"] for flow_record in flow_data]
def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None):
if field_name == "flow_name":
@ -40,17 +40,17 @@ class RunFlowComponent(CustomComponent):
},
}
async def build(self, input_value: Text, flow_name: str, tweaks: NestedDict) -> List[Record]:
async def build(self, input_value: Text, flow_name: str, tweaks: NestedDict) -> List[Data]:
results: List[Optional[RunOutputs]] = await self.run_flow(
inputs={"input_value": input_value}, flow_name=flow_name, tweaks=tweaks
)
if isinstance(results, list):
records = []
data = []
for result in results:
if result:
records.extend(build_records_from_run_outputs(result))
data.extend(build_data_from_run_outputs(result))
else:
records = build_records_from_run_outputs()(results)
data = build_data_from_run_outputs()(results)
self.status = records
return records
self.status = data
return data

View file

@ -2,7 +2,7 @@ from typing import Optional
from langflow.custom import CustomComponent
from langflow.field_typing import Text
from langflow.schema import Record
from langflow.schema import Data
from langflow.utils.util import unescape_string
@ -15,7 +15,7 @@ class SplitTextComponent(CustomComponent):
"inputs": {
"display_name": "Inputs",
"info": "Texts to split.",
"input_types": ["Record", "Text"],
"input_types": ["Data", "Text"],
},
"separator": {
"display_name": "Separator",
@ -32,7 +32,7 @@ class SplitTextComponent(CustomComponent):
inputs: list[Text],
separator: str = " ",
truncate_size: Optional[int] = 0,
) -> list[Record]:
) -> list[Data]:
separator = unescape_string(separator)
outputs = []
@ -43,7 +43,7 @@ class SplitTextComponent(CustomComponent):
chunks = [chunk[:truncate_size] for chunk in chunks]
for chunk in chunks:
outputs.append(Record(data={"parent": text, "text": chunk}))
outputs.append(Data(data={"parent": text, "text": chunk}))
self.status = outputs
return outputs
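The split loop above emits one `Data` per chunk, each carrying its parent text. A self-contained sketch of that loop (separator unescaping omitted), with `Data` as a simplified stand-in:

```python
# Simplified stand-in for langflow.schema.Data.
class Data:
    def __init__(self, data=None):
        self.data = data or {}

def split_text(inputs, separator=" ", truncate_size=0):
    outputs = []
    for text in inputs:
        chunks = text.split(separator)
        if truncate_size:
            # Optionally clip each chunk, as in the hunk above.
            chunks = [chunk[:truncate_size] for chunk in chunks]
        for chunk in chunks:
            outputs.append(Data(data={"parent": text, "text": chunk}))
    return outputs
```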

View file

@ -2,30 +2,32 @@ from typing import Any, List, Optional
from loguru import logger
from langflow.base.flow_processing.utils import build_records_from_result_data
from langflow.base.flow_processing.utils import build_data_from_result_data
from langflow.custom import CustomComponent
from langflow.graph.graph.base import Graph
from langflow.graph.schema import RunOutputs
from langflow.graph.vertex.base import Vertex
from langflow.helpers.flow import get_flow_inputs
from langflow.schema import Record
from langflow.schema import Data
from langflow.schema.dotdict import dotdict
from langflow.template.field.base import Input
class SubFlowComponent(CustomComponent):
display_name = "Sub Flow"
description = "Dynamically Generates a Component from a Flow. The output is a list of records with keys 'result' and 'message'."
description = (
"Dynamically Generates a Component from a Flow. The output is a list of data with keys 'result' and 'message'."
)
beta: bool = True
field_order = ["flow_name"]
def get_flow_names(self) -> List[str]:
flow_records = self.list_flows()
return [flow_record.data["name"] for flow_record in flow_records]
flow_data = self.list_flows()
return [flow_record.data["name"] for flow_record in flow_data]
def get_flow(self, flow_name: str) -> Optional[Record]:
flow_records = self.list_flows()
for flow_record in flow_records:
def get_flow(self, flow_name: str) -> Optional[Data]:
flow_data = self.list_flows()
for flow_record in flow_data:
if flow_record.data["name"] == flow_name:
return flow_record
return None
@ -93,7 +95,7 @@ class SubFlowComponent(CustomComponent):
},
}
async def build(self, flow_name: str, get_final_results_only: bool = True, **kwargs) -> List[Record]:
async def build(self, flow_name: str, get_final_results_only: bool = True, **kwargs) -> List[Data]:
tweaks = {key: {"input_value": value} for key, value in kwargs.items()}
run_outputs: List[Optional[RunOutputs]] = await self.run_flow(
tweaks=tweaks,
@ -103,12 +105,12 @@ class SubFlowComponent(CustomComponent):
return []
run_output = run_outputs[0]
records = []
data = []
if run_output is not None:
for output in run_output.outputs:
if output:
records.extend(build_records_from_result_data(output, get_final_results_only))
data.extend(build_data_from_result_data(output, get_final_results_only))
self.status = records
logger.debug(records)
return records
self.status = data
logger.debug(data)
return data

View file

@ -2,7 +2,7 @@ from typing import Union
from langflow.custom import Component
from langflow.field_typing import Text
from langflow.schema import Record
from langflow.schema import Data
from langflow.template import Input, Output
@ -29,17 +29,17 @@ class TextOperatorComponent(Component):
),
Input(
name="true_output",
type=Union[str, Record],
type=Union[str, Data],
display_name="True Output",
info="The output to return or display when the comparison is true.",
input_types=["Text", "Record"],
input_types=["Text", "Data"],
),
Input(
name="false_output",
type=Union[str, Record],
type=Union[str, Data],
display_name="False Output",
info="The output to return or display when the comparison is false.",
input_types=["Text", "Record"],
input_types=["Text", "Data"],
),
]
outputs = [
@ -47,15 +47,15 @@ class TextOperatorComponent(Component):
Output(display_name="False Result", name="false_result", method="result_response"),
]
def true_response(self) -> Union[Text, Record]:
def true_response(self) -> Union[Text, Data]:
self.stop("False Result")
return self.true_output if self.true_output else self.input_text
def false_response(self) -> Union[Text, Record]:
def false_response(self) -> Union[Text, Data]:
self.stop("True Result")
return self.false_output if self.false_output else self.input_text
def result_response(self) -> Union[Text, Record]:
def result_response(self) -> Union[Text, Data]:
input_text = self.input_text
match_text = self.match_text
operator = self.operator

View file

@ -2,14 +2,14 @@ from typing import Any
from langflow.custom import CustomComponent
from langflow.field_typing.range_spec import RangeSpec
from langflow.schema import Record
from langflow.schema import Data
from langflow.schema.dotdict import dotdict
from langflow.template.field.base import Input
class CreateRecordComponent(CustomComponent):
display_name = "Create Record"
description = "Dynamically create a Record with a specified number of fields."
display_name = "Create Data"
description = "Dynamically create a Data with a specified number of fields."
field_order = ["number_of_fields", "text_key"]
def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None):
@ -40,7 +40,7 @@ class CreateRecordComponent(CustomComponent):
name=key,
info=f"Key for field {i}.",
field_type="dict",
input_types=["Text", "Record"],
input_types=["Text", "Data"],
)
build_config[field.name] = field.to_dict()
@ -67,15 +67,15 @@ class CreateRecordComponent(CustomComponent):
number_of_fields: int = 0,
text_key: str = "text",
**kwargs,
) -> Record:
) -> Data:
data = {}
for value_dict in kwargs.values():
if isinstance(value_dict, dict):
# Check if the value of the value_dict is a Record
# Check if the value of the value_dict is a Data
value_dict = {
key: value.get_text() if isinstance(value, Record) else value for key, value in value_dict.items()
key: value.get_text() if isinstance(value, Data) else value for key, value in value_dict.items()
}
data.update(value_dict)
return_record = Record(data=data, text_key=text_key)
return_record = Data(data=data, text_key=text_key)
self.status = return_record
return return_record

View file

@ -1,6 +1,6 @@
# from langflow.field_typing import Data
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class Component(CustomComponent):
@ -12,5 +12,5 @@ class Component(CustomComponent):
def build_config(self):
return {"param": {"display_name": "Parameter"}}
def build(self, param: str) -> Record:
return Record(data=param)
def build(self, param: str) -> Data:
return Data(data=param)

View file

@ -3,7 +3,7 @@ from typing import List
from langchain_core.documents import Document
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class DocumentToRecordComponent(CustomComponent):
@ -14,9 +14,9 @@ class DocumentToRecordComponent(CustomComponent):
"documents": {"display_name": "Documents"},
}
def build(self, documents: List[Document]) -> List[Record]:
def build(self, documents: List[Document]) -> List[Data]:
if isinstance(documents, Document):
documents = [documents]
records = [Record.from_document(document) for document in documents]
self.status = records
return records
data = [Data.from_document(document) for document in documents]
self.status = data
return data

View file

@ -36,9 +36,9 @@ class MemoryComponent(BaseMemoryComponent):
"advanced": True,
},
"record_template": {
"display_name": "Record Template",
"display_name": "Data Template",
"multiline": True,
"info": "Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
"info": "Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.",
"advanced": True,
},
}

View file

@ -2,7 +2,7 @@ from typing import List, Optional
from langflow.custom import CustomComponent
from langflow.memory import get_messages
from langflow.schema import Record
from langflow.schema import Data
class MessageHistoryComponent(CustomComponent):
@ -43,7 +43,7 @@ class MessageHistoryComponent(CustomComponent):
session_id: Optional[str] = None,
n_messages: int = 100,
order: Optional[str] = "Descending",
) -> List[Record]:
) -> List[Data]:
order = "DESC" if order == "Descending" else "ASC"
if sender == "Machine and User":
sender = None

View file

@ -1,7 +1,7 @@
from langflow.custom import CustomComponent
from langflow.field_typing import Text
from langflow.helpers.record import records_to_text
from langflow.schema import Record
from langflow.helpers.record import data_to_text
from langflow.schema import Data
class RecordsToTextComponent(CustomComponent):
@ -10,27 +10,27 @@ class RecordsToTextComponent(CustomComponent):
def build_config(self):
return {
"records": {
"data": {
"display_name": "Records",
"info": "The records to convert to text.",
"info": "The data to convert to text.",
},
"template": {
"display_name": "Template",
"info": "The template to use for formatting the records. It can contain the keys {text}, {data} or any other key in the Record.",
"info": "The template to use for formatting the data. It can contain the keys {text}, {data} or any other key in the Data.",
"multiline": True,
},
}
def build(
self,
records: list[Record],
data: list[Data],
template: str = "Text: {text}\nData: {data}",
) -> Text:
if not records:
if not data:
return ""
if isinstance(records, Record):
records = [records]
if isinstance(data, Data):
data = [data]
result_string = records_to_text(template, records)
result_string = data_to_text(template, data)
self.status = result_string
return result_string
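The renamed `data_to_text` helper formats each `Data` object through the template and joins the results. A sketch under the assumption that the template may reference `{text}`, `{data}`, or any key of the `Data` dict, with `Data` as a simplified stand-in:

```python
# Simplified stand-in for langflow.schema.Data.
class Data:
    def __init__(self, data=None):
        self.data = data or {}

    def get_text(self):
        return self.data.get("text", "")

def data_to_text(template, data):
    # Assumed behaviour: expose {text}, {data} and every Data key to the
    # template, then join one formatted line per Data object.
    lines = []
    for d in data:
        kwargs = {**d.data, "text": d.get_text(), "data": d.data}
        lines.append(template.format(**kwargs))
    return "\n".join(lines)
```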

View file

@ -1,15 +1,15 @@
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class UpdateRecordComponent(CustomComponent):
display_name = "Update Record"
description = "Update Record with text-based key/value pairs, similar to updating a Python dictionary."
display_name = "Update Data"
description = "Update Data with text-based key/value pairs, similar to updating a Python dictionary."
def build_config(self):
return {
"record": {
"display_name": "Record",
"display_name": "Data",
"info": "The record to update.",
},
"new_data": {
@ -21,18 +21,18 @@ class UpdateRecordComponent(CustomComponent):
def build(
self,
record: Record,
record: Data,
new_data: dict,
) -> Record:
) -> Data:
"""
Updates a record with new data.
Args:
record (Record): The record to update.
record (Data): The record to update.
new_data (dict): The new data to update the record with.
Returns:
Record: The updated record.
Data: The updated record.
"""
record.data.update(new_data)
self.status = record
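The update above mutates the record's inner dict in place, exactly like `dict.update`. A minimal sketch with `Data` as a simplified stand-in:

```python
# Simplified stand-in for langflow.schema.Data.
class Data:
    def __init__(self, data=None):
        self.data = dict(data or {})

def update_data(record, new_data):
    # dict.update semantics: existing keys are overwritten, new keys added.
    record.data.update(new_data)
    return record
```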

View file

@ -13,15 +13,15 @@ class TextInput(TextComponent):
name="input_value",
type=str,
display_name="Value",
info="Text or Record to be passed as input.",
input_types=["Record", "Text"],
info="Text or Data to be passed as input.",
input_types=["Data", "Text"],
),
Input(
name="record_template",
type=str,
display_name="Record Template",
display_name="Data Template",
multiline=True,
info="Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
info="Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.",
advanced=True,
),
]

View file

@ -3,7 +3,7 @@ from typing import Optional
from langchain_community.utilities.searchapi import SearchApiAPIWrapper
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
from langflow.services.database.models.base import orjson_dumps
@ -37,7 +37,7 @@ class SearchApi(CustomComponent):
engine: str,
api_key: str,
params: Optional[dict] = None,
) -> Record:
) -> Data:
if params is None:
params = {}
@ -48,6 +48,6 @@ class SearchApi(CustomComponent):
result = orjson_dumps(results, indent_2=False)
record = Record(data=result)
record = Data(data=result)
self.status = record
return record


@@ -1,7 +1,7 @@
from typing import Optional, cast
from langflow.base.memory.memory import BaseMemoryComponent
from langflow.schema import Record
from langflow.schema import Data
class AstraDBMessageReaderComponent(BaseMemoryComponent):
@@ -38,7 +38,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent):
},
}
def get_messages(self, **kwargs) -> list[Record]:
def get_messages(self, **kwargs) -> list[Data]:
"""
Retrieves messages from the AstraDBChatMessageHistory memory.
@@ -46,7 +46,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent):
memory (AstraDBChatMessageHistory): The AstraDBChatMessageHistory instance to retrieve messages from.
Returns:
list[Record]: A list of Record objects representing the search results.
list[Data]: A list of Data objects representing the search results.
"""
try:
from langchain_astradb.chat_message_histories import AstraDBChatMessageHistory
@@ -62,7 +62,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent):
# Get messages from the memory
messages = memory.messages
results = [Record.from_lc_message(message) for message in messages]
results = [Data.from_lc_message(message) for message in messages]
return list(results)
@@ -73,7 +73,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent):
token: str,
api_endpoint: str,
namespace: Optional[str] = None,
) -> list[Record]:
) -> list[Data]:
try:
from langchain_astradb.chat_message_histories import AstraDBChatMessageHistory
except ImportError:
@@ -90,7 +90,7 @@ class AstraDBMessageReaderComponent(BaseMemoryComponent):
namespace=namespace,
)
records = self.get_messages(memory=memory)
self.status = records
data = self.get_messages(memory=memory)
self.status = data
return records
return data


@@ -3,7 +3,7 @@ from typing import Optional
from langchain_core.messages import BaseMessage
from langflow.base.memory.memory import BaseMemoryComponent
from langflow.schema import Record
from langflow.schema import Data
class AstraDBMessageWriterComponent(BaseMemoryComponent):
@@ -13,8 +13,8 @@ class AstraDBMessageWriterComponent(BaseMemoryComponent):
def build_config(self):
return {
"input_value": {
"display_name": "Input Record",
"info": "Record to write to Astra DB.",
"display_name": "Input Data",
"info": "Data to write to Astra DB.",
},
"session_id": {
"display_name": "Session ID",
@@ -96,13 +96,13 @@ class AstraDBMessageWriterComponent(BaseMemoryComponent):
def build(
self,
input_value: Record,
input_value: Data,
session_id: str,
collection_name: str,
token: str,
api_endpoint: str,
namespace: Optional[str] = None,
) -> Record:
) -> Data:
try:
from langchain_astradb.chat_message_histories import AstraDBChatMessageHistory
except ImportError:


@@ -3,7 +3,7 @@ from typing import Optional, cast
from langchain_community.chat_message_histories import CassandraChatMessageHistory
from langflow.base.memory.memory import BaseMemoryComponent
from langflow.schema.record import Record
from langflow.schema.data import Data
class CassandraMessageReaderComponent(BaseMemoryComponent):
@@ -38,7 +38,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent):
},
}
def get_messages(self, **kwargs) -> list[Record]:
def get_messages(self, **kwargs) -> list[Data]:
"""
Retrieves messages from the CassandraChatMessageHistory memory.
@@ -46,7 +46,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent):
memory (CassandraChatMessageHistory): The CassandraChatMessageHistory instance to retrieve messages from.
Returns:
list[Record]: A list of Record objects representing the search results.
list[Data]: A list of Data objects representing the search results.
"""
memory: CassandraChatMessageHistory = cast(CassandraChatMessageHistory, kwargs.get("memory"))
if not memory:
@@ -54,7 +54,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent):
# Get messages from the memory
messages = memory.messages
results = [Record.from_lc_message(message) for message in messages]
results = [Data.from_lc_message(message) for message in messages]
return list(results)
@@ -65,7 +65,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent):
token: str,
database_id: str,
keyspace: Optional[str] = None,
) -> list[Record]:
) -> list[Data]:
try:
import cassio
except ImportError:
@@ -80,7 +80,7 @@ class CassandraMessageReaderComponent(BaseMemoryComponent):
keyspace=keyspace,
)
records = self.get_messages(memory=memory)
self.status = records
data = self.get_messages(memory=memory)
self.status = data
return records
return data


@@ -4,7 +4,7 @@ from langchain_community.chat_message_histories import CassandraChatMessageHisto
from langchain_core.messages import BaseMessage
from langflow.base.memory.memory import BaseMemoryComponent
from langflow.schema.record import Record
from langflow.schema.data import Data
class CassandraMessageWriterComponent(BaseMemoryComponent):
@@ -14,8 +14,8 @@ class CassandraMessageWriterComponent(BaseMemoryComponent):
def build_config(self):
return {
"input_value": {
"display_name": "Input Record",
"info": "Record to write to Cassandra.",
"display_name": "Input Data",
"info": "Data to write to Cassandra.",
},
"session_id": {
"display_name": "Session ID",
@@ -93,14 +93,14 @@ class CassandraMessageWriterComponent(BaseMemoryComponent):
def build(
self,
input_value: Record,
input_value: Data,
session_id: str,
table_name: str,
token: str,
database_id: str,
keyspace: Optional[str] = None,
ttl_seconds: Optional[int] = None,
) -> Record:
) -> Data:
try:
import cassio
except ImportError:


@@ -4,7 +4,7 @@ from langchain_community.chat_message_histories.zep import SearchScope, SearchTy
from langflow.base.memory.memory import BaseMemoryComponent
from langflow.field_typing import Text
from langflow.schema import Record
from langflow.schema import Data
class ZepMessageReaderComponent(BaseMemoryComponent):
@@ -60,7 +60,7 @@ class ZepMessageReaderComponent(BaseMemoryComponent):
},
}
def get_messages(self, **kwargs) -> list[Record]:
def get_messages(self, **kwargs) -> list[Data]:
"""
Retrieves messages from the ZepChatMessageHistory memory.
@@ -75,7 +75,7 @@ class ZepMessageReaderComponent(BaseMemoryComponent):
limit (int, optional): The maximum number of search results to return. Defaults to None.
Returns:
list[Record]: A list of Record objects representing the search results.
list[Data]: A list of Data objects representing the search results.
"""
memory: ZepChatMessageHistory = cast(ZepChatMessageHistory, kwargs.get("memory"))
if not memory:
@@ -103,10 +103,10 @@ class ZepMessageReaderComponent(BaseMemoryComponent):
result_dict["metadata"] = result.metadata
result_dict["score"] = result.score
result_dicts.append(result_dict)
results = [Record(data=result_dict) for result_dict in result_dicts]
results = [Data(data=result_dict) for result_dict in result_dicts]
else:
messages = memory.messages
results = [Record.from_lc_message(message) for message in messages]
results = [Data.from_lc_message(message) for message in messages]
return results
def build(
@@ -119,7 +119,7 @@ class ZepMessageReaderComponent(BaseMemoryComponent):
search_scope: str = SearchScope.messages,
search_type: str = SearchType.similarity,
limit: Optional[int] = None,
) -> list[Record]:
) -> list[Data]:
try:
# Monkeypatch API_BASE_PATH to
# avoid 404
@@ -139,12 +139,12 @@ class ZepMessageReaderComponent(BaseMemoryComponent):
zep_client = ZepClient(api_url=url, api_key=api_key)
memory = ZepChatMessageHistory(session_id=session_id, zep_client=zep_client)
records = self.get_messages(
data = self.get_messages(
memory=memory,
query=query,
search_scope=search_scope,
search_type=search_type,
limit=limit,
)
self.status = records
return records
self.status = data
return data


@@ -2,7 +2,7 @@ from typing import TYPE_CHECKING, Optional
from langflow.base.memory.memory import BaseMemoryComponent
from langflow.field_typing import Text
from langflow.schema import Record
from langflow.schema import Data
if TYPE_CHECKING:
from zep_python.langchain import ZepChatMessageHistory
@@ -35,8 +35,8 @@ class ZepMessageWriterComponent(BaseMemoryComponent):
"advanced": True,
},
"input_value": {
"display_name": "Input Record",
"info": "Record to write to Zep.",
"display_name": "Input Data",
"info": "Data to write to Zep.",
},
"api_base_path": {
"display_name": "API Base Path",
@@ -78,12 +78,12 @@ class ZepMessageWriterComponent(BaseMemoryComponent):
def build(
self,
input_value: Record,
input_value: Data,
session_id: Text,
api_base_path: str = "api/v1",
url: Optional[Text] = None,
api_key: Optional[Text] = None,
) -> Record:
) -> Data:
try:
# Monkeypatch API_BASE_PATH to
# avoid 404


@@ -58,7 +58,7 @@ class AmazonBedrockComponent(LCModelComponent):
"advanced": True,
},
"cache": {"display_name": "Cache"},
"input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]},
"input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]},
"system_message": {
"display_name": "System Message",
"info": "System message to pass to the model.",


@@ -63,7 +63,7 @@ class AnthropicLLM(LCModelComponent):
"info": "Endpoint of the Anthropic API. Defaults to 'https://api.anthropic.com' if not specified.",
},
"code": {"show": False},
"input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]},
"input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]},
"stream": {
"display_name": "Stream",
"advanced": True,


@@ -5,7 +5,7 @@ from pydantic.v1 import SecretStr
from langflow.base.constants import STREAM_INFO_TEXT
from langflow.base.models.model import LCModelComponent
from langflow.field_typing import Text, BaseLanguageModel
from langflow.field_typing import BaseLanguageModel, Text
from langflow.template import Input, Output
@@ -63,7 +63,7 @@ class AzureChatOpenAIComponent(LCModelComponent):
advanced=True,
info="The maximum number of tokens to generate. Set to 0 for unlimited tokens.",
),
Input(name="input_value", type=str, display_name="Input", input_types=["Text", "Record", "Prompt"]),
Input(name="input_value", type=str, display_name="Input", input_types=["Text", "Data", "Prompt"]),
Input(name="stream", type=bool, display_name="Stream", info=STREAM_INFO_TEXT, advanced=True),
Input(
name="system_message",


@@ -81,7 +81,7 @@ class QianfanChatEndpointComponent(LCModelComponent):
"info": "Endpoint of the Qianfan LLM, required if custom model used.",
},
"code": {"show": False},
"input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]},
"input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]},
"stream": {
"display_name": "Stream",
"info": STREAM_INFO_TEXT,


@@ -111,7 +111,7 @@ class ChatLiteLLMModelComponent(LCModelComponent):
"required": False,
"default": False,
},
"input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]},
"input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]},
"stream": {
"display_name": "Stream",
"info": STREAM_INFO_TEXT,


@@ -43,7 +43,7 @@ class CohereComponent(LCModelComponent):
"type": "float",
"show": True,
},
"input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]},
"input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]},
"stream": {
"display_name": "Stream",
"info": STREAM_INFO_TEXT,


@@ -57,7 +57,7 @@ class GroqModelComponent(LCModelComponent):
info="The name of the model to use. Supported examples: gemini-pro",
options=MODEL_NAMES,
),
Input(name="input_value", field_type=str, display_name="Input", input_types=["Text", "Record", "Prompt"]),
Input(name="input_value", field_type=str, display_name="Input", input_types=["Text", "Data", "Prompt"]),
Input(name="stream", field_type=bool, display_name="Stream", advanced=True, info=STREAM_INFO_TEXT),
Input(
name="system_message",


@@ -37,7 +37,7 @@ class HuggingFaceEndpointsComponent(LCModelComponent):
"advanced": True,
},
"code": {"show": False},
"input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]},
"input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]},
"stream": {
"display_name": "Stream",
"info": STREAM_INFO_TEXT,


@@ -27,7 +27,7 @@ class MistralAIModelComponent(LCModelComponent):
def build_config(self):
return {
"input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]},
"input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]},
"max_tokens": {
"display_name": "Max Tokens",
"advanced": True,


@@ -120,7 +120,7 @@ class ChatOllamaComponent(LCModelComponent):
info="Controls the creativity of model responses.",
value=0.8,
),
Input(name="input_value", type=str, display_name="Input", input_types=["Text", "Record", "Prompt"]),
Input(name="input_value", type=str, display_name="Input", input_types=["Text", "Data", "Prompt"]),
Input(name="stream", type=bool, display_name="Stream", info=STREAM_INFO_TEXT, value=False),
Input(
name="system_message",


@@ -16,7 +16,7 @@ class OpenAIModelComponent(LCModelComponent):
icon = "OpenAI"
inputs = [
StrInput(name="input_value", display_name="Input", input_types=["Text", "Record", "Prompt"]),
StrInput(name="input_value", display_name="Input", input_types=["Text", "Data", "Prompt"]),
IntInput(
name="max_tokens",
display_name="Max Tokens",


@@ -73,7 +73,7 @@ class ChatVertexAIComponent(LCModelComponent):
"value": False,
"advanced": True,
},
"input_value": {"display_name": "Input", "input_types": ["Text", "Record", "Prompt"]},
"input_value": {"display_name": "Input", "input_types": ["Text", "Data", "Prompt"]},
"stream": {
"display_name": "Stream",
"info": STREAM_INFO_TEXT,


@@ -30,10 +30,10 @@ class ChatOutput(ChatComponent):
StrInput(name="session_id", display_name="Session ID", info="Session ID for the message.", advanced=True),
BoolInput(
name="record_template",
display_name="Record Template",
display_name="Data Template",
value="{text}",
advanced=True,
info="Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
info="Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.",
),
]
outputs = [


@@ -1,5 +1,5 @@
from langflow.custom import Component
from langflow.schema import Record
from langflow.schema import Data
from langflow.template import Input, Output
@@ -8,12 +8,12 @@ class RecordsOutput(Component):
description = "Display Records as a Table"
inputs = [
Input(name="input_value", type=Record, display_name="Record Input"),
Input(name="input_value", type=Data, display_name="Data Input"),
]
outputs = [
Output(display_name="Record", name="record", method="record_response"),
Output(display_name="Data", name="record", method="record_response"),
]
def record_response(self) -> Record:
def record_response(self) -> Data:
self.status = self.input_value
return self.input_value


@@ -13,15 +13,15 @@ class TextOutput(TextComponent):
name="input_value",
type=str,
display_name="Value",
info="Text or Record to be passed as output.",
input_types=["Record", "Text"],
info="Text or Data to be passed as output.",
input_types=["Data", "Text"],
),
Input(
name="record_template",
type=str,
display_name="Record Template",
display_name="Data Template",
multiline=True,
info="Template to convert Record to Text. If left empty, it will be dynamically set to the Record's text key.",
info="Template to convert Data to Text. If left empty, it will be dynamically set to the Data's text key.",
advanced=True,
),
]


@@ -5,7 +5,7 @@ from langchain_core.vectorstores import VectorStore
from langflow.custom import CustomComponent
from langflow.field_typing import BaseLanguageModel, Text
from langflow.schema import Record
from langflow.schema import Data
from langflow.schema.message import Message
@@ -43,11 +43,11 @@ class SelfQueryRetrieverComponent(CustomComponent):
self,
query: Message,
vectorstore: VectorStore,
attribute_infos: list[Record],
attribute_infos: list[Data],
document_content_description: Text,
llm: BaseLanguageModel,
) -> Record:
metadata_field_infos = [AttributeInfo(**record.data) for record in attribute_infos]
) -> Data:
metadata_field_infos = [AttributeInfo(**value.data) for value in attribute_infos]
self_query_retriever = SelfQueryRetriever.from_llm(
llm=llm,
vectorstore=vectorstore,
@@ -63,6 +63,6 @@ class SelfQueryRetrieverComponent(CustomComponent):
else:
raise ValueError(f"Query type {type(query)} not supported.")
documents = self_query_retriever.invoke(input=input_text)
records = [Record.from_document(document) for document in documents]
self.status = records
return records
data = [Data.from_document(document) for document in documents]
self.status = data
return data
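The retriever hunk above unpacks each `Data` payload into an `AttributeInfo` via `AttributeInfo(**value.data)`. A standalone sketch of that unpacking pattern — both classes below are simplified stand-ins (for langchain's `AttributeInfo` and langflow's `Data`), not the real imports:

```python
from dataclasses import dataclass, field


@dataclass
class AttributeInfo:
    # Stand-in for langchain's AttributeInfo (name/description/type fields).
    name: str
    description: str
    type: str


@dataclass
class Data:
    # Stand-in for langflow.schema.Data: a dict payload.
    data: dict = field(default_factory=dict)


attribute_infos = [
    Data(data={"name": "genre", "description": "Movie genre", "type": "string"}),
    Data(data={"name": "year", "description": "Release year", "type": "integer"}),
]
# Mirrors the component: each payload's keys become AttributeInfo fields.
metadata_field_infos = [AttributeInfo(**value.data) for value in attribute_infos]
print([info.name for info in metadata_field_infos])  # ['genre', 'year']
```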


@@ -3,7 +3,7 @@ from typing import List
from langchain_text_splitters import CharacterTextSplitter
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
from langflow.utils.util import unescape_string
@@ -13,7 +13,7 @@ class CharacterTextSplitterComponent(CustomComponent):
def build_config(self):
return {
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
"chunk_overlap": {"display_name": "Chunk Overlap", "default": 200},
"chunk_size": {"display_name": "Chunk Size", "default": 1000},
"separator": {"display_name": "Separator", "default": "\n"},
@@ -21,16 +21,16 @@ class CharacterTextSplitterComponent(CustomComponent):
def build(
self,
inputs: List[Record],
inputs: List[Data],
chunk_overlap: int = 200,
chunk_size: int = 1000,
separator: str = "\n",
) -> List[Record]:
) -> List[Data]:
# separator may come escaped from the frontend
separator = unescape_string(separator)
documents = []
for _input in inputs:
if isinstance(_input, Record):
if isinstance(_input, Data):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
@@ -39,6 +39,6 @@ class CharacterTextSplitterComponent(CustomComponent):
chunk_size=chunk_size,
separator=separator,
).split_documents(documents)
records = self.to_records(docs)
self.status = records
return records
data = self.to_data(docs)
self.status = data
return data
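The splitter hunk above converts incoming `Data` objects to LangChain-style documents before splitting. A self-contained sketch of that pattern — the `Data` and `Document` classes and the naive fixed-window splitter are simplified stand-ins, not the real langflow/langchain implementations:

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)


@dataclass
class Data:
    data: dict = field(default_factory=dict)

    def to_lc_document(self) -> Document:
        # Simplified: pull the text key out of the payload, keep the rest as metadata.
        return Document(page_content=self.data.get("text", ""), metadata=self.data)


def split_text(text: str, chunk_size: int, chunk_overlap: int) -> list:
    # Naive fixed-window splitter standing in for CharacterTextSplitter.
    step = max(chunk_size - chunk_overlap, 1)
    return [text[i : i + chunk_size] for i in range(0, len(text), step)]


docs = [Data(data={"text": "abcdefghij"}).to_lc_document()]
chunks = [c for d in docs for c in split_text(d.page_content, chunk_size=4, chunk_overlap=2)]
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```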


@@ -3,7 +3,7 @@ from typing import List, Optional
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
class LanguageRecursiveTextSplitterComponent(CustomComponent):
@@ -14,7 +14,7 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent):
def build_config(self):
options = [x.value for x in Language]
return {
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
"separator_type": {
"display_name": "Separator Type",
"info": "The type of separator to use.",
@@ -44,11 +44,11 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent):
def build(
self,
inputs: List[Record],
inputs: List[Data],
chunk_size: Optional[int] = 1000,
chunk_overlap: Optional[int] = 200,
separator_type: str = "Python",
) -> list[Record]:
) -> list[Data]:
"""
Split text into chunks of a specified length.
@@ -75,10 +75,10 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent):
)
documents = []
for _input in inputs:
if isinstance(_input, Record):
if isinstance(_input, Data):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
docs = splitter.split_documents(documents)
records = self.to_records(docs)
return records
data = self.to_data(docs)
return data


@@ -4,8 +4,8 @@ from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.utils.util import build_loader_repr_from_records, unescape_string
from langflow.schema import Data
from langflow.utils.util import build_loader_repr_from_data, unescape_string
class RecursiveCharacterTextSplitterComponent(CustomComponent):
@@ -18,7 +18,7 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent):
"inputs": {
"display_name": "Input",
"info": "The texts to split.",
"input_types": ["Document", "Record"],
"input_types": ["Document", "Data"],
},
"separators": {
"display_name": "Separators",
@@ -46,7 +46,7 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent):
separators: Optional[list[str]] = None,
chunk_size: Optional[int] = 1000,
chunk_overlap: Optional[int] = 200,
) -> list[Record]:
) -> list[Data]:
"""
Split text into chunks of a specified length.
@@ -79,11 +79,11 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent):
)
documents = []
for _input in inputs:
if isinstance(_input, Record):
if isinstance(_input, Data):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
docs = splitter.split_documents(documents)
records = self.to_records(docs)
self.repr_value = build_loader_repr_from_records(records)
return records
data = self.to_data(docs)
self.repr_value = build_loader_repr_from_data(data)
return data


@@ -3,7 +3,7 @@ from typing import Optional
from langchain_community.utilities.searchapi import SearchApiAPIWrapper
from langflow.custom import CustomComponent
from langflow.schema import Record
from langflow.schema import Data
from langflow.services.database.models.base import orjson_dumps
@@ -37,7 +37,7 @@ class SearchApi(CustomComponent):
engine: str,
api_key: str,
params: Optional[dict] = None,
) -> Record:
) -> Data:
if params is None:
params = {}
@@ -48,6 +48,6 @@ class SearchApi(CustomComponent):
result = orjson_dumps(results, indent_2=False)
record = Record(data=result)
record = Data(data=result)
self.status = record
return record


@@ -3,7 +3,7 @@ from typing import List, Optional
from langflow.components.vectorstores.AstraDB import AstraDBVectorStoreComponent
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.field_typing import Embeddings, Text
from langflow.schema import Record
from langflow.schema import Data
class AstraDBSearchComponent(LCVectorStoreComponent):
@@ -48,7 +48,7 @@ class AstraDBSearchComponent(LCVectorStoreComponent):
},
"batch_size": {
"display_name": "Batch Size",
"info": "Optional number of records to process in a single batch.",
"info": "Optional number of data to process in a single batch.",
"advanced": True,
},
"bulk_insert_batch_concurrency": {
@@ -58,7 +58,7 @@ class AstraDBSearchComponent(LCVectorStoreComponent):
},
"bulk_insert_overwrite_concurrency": {
"display_name": "Bulk Insert Overwrite Concurrency",
"info": "Optional concurrency level for bulk insert operations that overwrite existing records.",
"info": "Optional concurrency level for bulk insert operations that overwrite existing data.",
"advanced": True,
},
"bulk_delete_concurrency": {
@@ -119,7 +119,7 @@ class AstraDBSearchComponent(LCVectorStoreComponent):
metadata_indexing_include: Optional[List[str]] = None,
metadata_indexing_exclude: Optional[List[str]] = None,
collection_indexing_policy: Optional[dict] = None,
) -> List[Record]:
) -> List[Data]:
vector_store = AstraDBVectorStoreComponent().build(
embedding=embedding,
collection_name=collection_name,


@@ -1,11 +1,12 @@
from typing import Any, List, Optional, Tuple
from langflow.components.vectorstores.Cassandra import CassandraVectorStoreComponent
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.field_typing import Embeddings, Text
from langflow.schema import Record
from langchain_community.utilities.cassandra import SetupMode
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.components.vectorstores.Cassandra import CassandraVectorStoreComponent
from langflow.field_typing import Embeddings, Text
from langflow.schema import Data
class CassandraSearchComponent(LCVectorStoreComponent):
display_name = "Cassandra Search"
@@ -72,7 +73,7 @@ class CassandraSearchComponent(LCVectorStoreComponent):
keyspace: Optional[str] = None,
body_index_options: Optional[List[Tuple[str, Any]]] = None,
setup_mode: SetupMode = SetupMode.SYNC,
) -> List[Record]:
) -> List[Data]:
vector_store = CassandraVectorStoreComponent().build(
embedding=embedding,
table_name=table_name,


@@ -6,7 +6,7 @@ from langchain_chroma import Chroma
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.field_typing import Embeddings, Text
from langflow.schema import Record
from langflow.schema import Data
class ChromaSearchComponent(LCVectorStoreComponent):
@@ -69,7 +69,7 @@ class ChromaSearchComponent(LCVectorStoreComponent):
chroma_server_host: Optional[str] = None,
chroma_server_http_port: Optional[int] = None,
chroma_server_grpc_port: Optional[int] = None,
) -> List[Record]:
) -> List[Data]:
"""
Builds the Vector Store or BaseRetriever object.
@@ -87,7 +87,7 @@ class ChromaSearchComponent(LCVectorStoreComponent):
- chroma_server_grpc_port (int, optional): The gRPC port for the Chroma server. Defaults to None.
Returns:
- List[Record]: The list of records.
- List[Data]: The list of data.
"""
# Chroma settings


@@ -3,7 +3,7 @@ from typing import List
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.components.vectorstores.Couchbase import CouchbaseComponent
from langflow.field_typing import Embeddings, Text
from langflow.schema import Record
from langflow.schema import Data
class CouchbaseSearchComponent(LCVectorStoreComponent):
@@ -51,7 +51,7 @@ class CouchbaseSearchComponent(LCVectorStoreComponent):
couchbase_connection_string: str = "",
couchbase_username: str = "",
couchbase_password: str = "",
) -> List[Record]:
) -> List[Data]:
vector_store = CouchbaseComponent().build(
couchbase_connection_string=couchbase_connection_string,
couchbase_username=couchbase_username,


@@ -4,7 +4,7 @@ from langchain_community.vectorstores.faiss import FAISS
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.field_typing import Embeddings, Text
from langflow.schema import Record
from langflow.schema import Data
class FAISSSearchComponent(LCVectorStoreComponent):
@@ -35,7 +35,7 @@ class FAISSSearchComponent(LCVectorStoreComponent):
folder_path: str,
number_of_results: int = 4,
index_name: str = "langflow_index",
) -> List[Record]:
) -> List[Data]:
if not folder_path:
raise ValueError("Folder path is required to save the FAISS index.")
path = self.resolve_path(folder_path)


@@ -3,7 +3,7 @@ from typing import List, Optional
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.components.vectorstores.MongoDBAtlasVector import MongoDBAtlasComponent
from langflow.field_typing import Embeddings, NestedDict, Text
from langflow.schema import Record
from langflow.schema import Data
class MongoDBAtlasSearchComponent(LCVectorStoreComponent):
@@ -41,7 +41,7 @@ class MongoDBAtlasSearchComponent(LCVectorStoreComponent):
index_name: str = "",
mongodb_atlas_cluster_uri: str = "",
search_kwargs: Optional[NestedDict] = None,
) -> List[Record]:
) -> List[Data]:
search_kwargs = search_kwargs or {}
vector_store = MongoDBAtlasComponent().build(
mongodb_atlas_cluster_uri=mongodb_atlas_cluster_uri,


@@ -6,7 +6,7 @@ from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.components.vectorstores.Pinecone import PineconeComponent
from langflow.field_typing import Embeddings, Text
from langflow.field_typing.constants import NestedDict
from langflow.schema import Record
from langflow.schema import Data
class PineconeSearchComponent(PineconeComponent, LCVectorStoreComponent):
@@ -70,7 +70,7 @@ class PineconeSearchComponent(PineconeComponent, LCVectorStoreComponent):
namespace: Optional[str] = "default",
search_type: str = "similarity",
search_kwargs: Optional[NestedDict] = None,
) -> List[Record]: # type: ignore[override]
) -> List[Data]: # type: ignore[override]
vector_store = super().build(
embedding=embedding,
distance_strategy=distance_strategy,


@@ -3,7 +3,7 @@ from typing import List, Optional
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.components.vectorstores.Qdrant import QdrantComponent
from langflow.field_typing import Embeddings, NestedDict, Text
from langflow.schema import Record
from langflow.schema import Data
class QdrantSearchComponent(QdrantComponent, LCVectorStoreComponent):
@@ -70,7 +70,7 @@ class QdrantSearchComponent(QdrantComponent, LCVectorStoreComponent):
search_kwargs: Optional[NestedDict] = None,
timeout: Optional[int] = None,
url: Optional[str] = None,
) -> List[Record]: # type: ignore[override]
) -> List[Data]: # type: ignore[override]
vector_store = super().build(
embedding=embedding,
collection_name=collection_name,


@@ -5,7 +5,7 @@ from langchain_core.embeddings import Embeddings
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.components.vectorstores.Redis import RedisComponent
from langflow.field_typing import Text
from langflow.schema import Record
from langflow.schema import Data
class RedisSearchComponent(RedisComponent, LCVectorStoreComponent):
@@ -55,7 +55,7 @@ class RedisSearchComponent(RedisComponent, LCVectorStoreComponent):
redis_index_name: str,
number_of_results: int = 4,
schema: Optional[str] = None,
) -> List[Record]:
) -> List[Data]:
"""
Builds the Vector Store or BaseRetriever object.


@@ -5,7 +5,7 @@ from supabase.client import Client, create_client
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.field_typing import Embeddings, Text
from langflow.schema import Record
from langflow.schema import Data
class SupabaseSearchComponent(LCVectorStoreComponent):
@@ -43,7 +43,7 @@ class SupabaseSearchComponent(LCVectorStoreComponent):
supabase_service_key: str = "",
supabase_url: str = "",
table_name: str = "",
) -> List[Record]:
) -> List[Data]:
supabase: Client = create_client(supabase_url, supabase_key=supabase_service_key)
vector_store = SupabaseVectorStore(
client=supabase,


@@ -5,7 +5,7 @@ from langchain_core.embeddings import Embeddings
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.components.vectorstores.Upstash import UpstashVectorStoreComponent
from langflow.field_typing import Text
from langflow.schema import Record
from langflow.schema import Data
class UpstashSearchComponent(UpstashVectorStoreComponent, LCVectorStoreComponent):
@@ -29,7 +29,7 @@ class UpstashSearchComponent(UpstashVectorStoreComponent, LCVectorStoreComponent
"options": ["Similarity", "MMR"],
},
"input_value": {"display_name": "Input"},
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
"embedding": {
"display_name": "Embedding",
"input_types": ["Embeddings"],
@@ -64,7 +64,7 @@ class UpstashSearchComponent(UpstashVectorStoreComponent, LCVectorStoreComponent
index_token: Optional[str] = None,
embedding: Optional[Embeddings] = None,
number_of_results: int = 4,
) -> List[Record]:
) -> List[Data]:
vector_store = super().build(
embedding=embedding,
text_key=text_key,


@@ -5,7 +5,7 @@ from langchain_community.vectorstores.vectara import Vectara
from langflow.components.vectorstores.base.model import LCVectorStoreComponent
from langflow.components.vectorstores.Vectara import VectaraComponent
from langflow.field_typing import Text
from langflow.schema import Record
from langflow.schema import Data
class VectaraSearchComponent(VectaraComponent, LCVectorStoreComponent):
@@ -49,7 +49,7 @@ class VectaraSearchComponent(VectaraComponent, LCVectorStoreComponent):
vectara_corpus_id: str,
vectara_api_key: str,
number_of_results: int = 4,
) -> List[Record]:
) -> List[Data]:
source = "Langflow"
vector_store = Vectara(
vectara_customer_id=vectara_customer_id,


@@ -5,7 +5,7 @@ from langchain_core.embeddings import Embeddings
 from langflow.components.vectorstores.base.model import LCVectorStoreComponent
 from langflow.components.vectorstores.Weaviate import WeaviateVectorStoreComponent
 from langflow.field_typing import Text
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class WeaviateSearchVectorStore(WeaviateVectorStoreComponent, LCVectorStoreComponent):
@@ -68,7 +68,7 @@ class WeaviateSearchVectorStore(WeaviateVectorStoreComponent, LCVectorStoreCompo
         text_key: str = "text",
         embedding: Optional[Embeddings] = None,
         attributes: Optional[list] = None,
-    ) -> List[Record]:
+    ) -> List[Data]:
         vector_store = super().build(
             url=url,
             api_key=api_key,


@@ -5,7 +5,7 @@ from langchain_core.embeddings import Embeddings
 from langflow.components.vectorstores.base.model import LCVectorStoreComponent
 from langflow.components.vectorstores.pgvector import PGVectorComponent
 from langflow.field_typing import Text
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class PGVectorSearchComponent(PGVectorComponent, LCVectorStoreComponent):
@@ -48,7 +48,7 @@ class PGVectorSearchComponent(PGVectorComponent, LCVectorStoreComponent):
         pg_server_url: str,
         collection_name: str,
         number_of_results: int = 4,
-    ) -> List[Record]:
+    ) -> List[Data]:
         """
         Builds the Vector Store or BaseRetriever object.


@@ -1,9 +1,10 @@
 from typing import List, Optional, Union
 
+from langchain_core.retrievers import BaseRetriever
+
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings, VectorStore
-from langflow.schema import Record
-from langchain_core.retrievers import BaseRetriever
+from langflow.schema import Data
 
 
 class AstraDBVectorStoreComponent(CustomComponent):
@@ -16,7 +17,7 @@ class AstraDBVectorStoreComponent(CustomComponent):
         return {
             "inputs": {
                 "display_name": "Inputs",
-                "info": "Optional list of records to be processed and stored in the vector store.",
+                "info": "Optional list of data to be processed and stored in the vector store.",
             },
             "embedding": {"display_name": "Embedding", "info": "Embedding to use"},
             "collection_name": {
@@ -44,7 +45,7 @@ class AstraDBVectorStoreComponent(CustomComponent):
             },
             "batch_size": {
                 "display_name": "Batch Size",
-                "info": "Optional number of records to process in a single batch.",
+                "info": "Optional number of data to process in a single batch.",
                 "advanced": True,
             },
             "bulk_insert_batch_concurrency": {
@@ -54,7 +55,7 @@ class AstraDBVectorStoreComponent(CustomComponent):
             },
             "bulk_insert_overwrite_concurrency": {
                 "display_name": "Bulk Insert Overwrite Concurrency",
-                "info": "Optional concurrency level for bulk insert operations that overwrite existing records.",
+                "info": "Optional concurrency level for bulk insert operations that overwrite existing data.",
                 "advanced": True,
             },
             "bulk_delete_concurrency": {
@@ -96,7 +97,7 @@ class AstraDBVectorStoreComponent(CustomComponent):
         token: str,
         api_endpoint: str,
         collection_name: str,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         namespace: Optional[str] = None,
         metric: Optional[str] = None,
         batch_size: Optional[int] = None,


@@ -1,10 +1,11 @@
 from typing import Any, List, Optional, Tuple
 
-from langchain_community.vectorstores import Cassandra
 from langchain_community.utilities.cassandra import SetupMode
+from langchain_community.vectorstores import Cassandra
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings, VectorStore
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class CassandraVectorStoreComponent(CustomComponent):
@@ -17,7 +18,7 @@ class CassandraVectorStoreComponent(CustomComponent):
         return {
             "inputs": {
                 "display_name": "Inputs",
-                "info": "Optional list of records to be processed and stored in the vector store.",
+                "info": "Optional list of data to be processed and stored in the vector store.",
             },
             "embedding": {"display_name": "Embedding", "info": "Embedding to use"},
             "token": {
@@ -45,7 +46,7 @@ class CassandraVectorStoreComponent(CustomComponent):
             },
             "batch_size": {
                 "display_name": "Batch Size",
-                "info": "Optional number of records to process in a single batch.",
+                "info": "Optional number of data to process in a single batch.",
                 "advanced": True,
             },
             "body_index_options": {
@@ -66,7 +67,7 @@ class CassandraVectorStoreComponent(CustomComponent):
         embedding: Embeddings,
         token: str,
         database_id: str,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         keyspace: Optional[str] = None,
         table_name: str = "",
         ttl_seconds: Optional[int] = None,


@@ -8,9 +8,9 @@ from langchain_core.embeddings import Embeddings
 from langchain_core.retrievers import BaseRetriever
 from langchain_core.vectorstores import VectorStore
 
-from langflow.base.vectorstores.utils import chroma_collection_to_records
+from langflow.base.vectorstores.utils import chroma_collection_to_data
 from langflow.custom import CustomComponent
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class ChromaComponent(CustomComponent):
@@ -34,7 +34,7 @@ class ChromaComponent(CustomComponent):
             "collection_name": {"display_name": "Collection Name", "value": "langflow"},
             "index_directory": {"display_name": "Persist Directory"},
             "code": {"advanced": True, "display_name": "Code"},
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "chroma_server_cors_allow_origins": {
                 "display_name": "Server CORS Allow Origins",
@@ -63,7 +63,7 @@ class ChromaComponent(CustomComponent):
         embedding: Embeddings,
         chroma_server_ssl_enabled: bool,
         index_directory: Optional[str] = None,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         chroma_server_cors_allow_origins: List[str] = [],
         chroma_server_host: Optional[str] = None,
         chroma_server_http_port: Optional[int] = None,
@@ -78,7 +78,7 @@ class ChromaComponent(CustomComponent):
         - embedding (Embeddings): The embeddings to use for the Vector Store.
         - chroma_server_ssl_enabled (bool): Whether to enable SSL for the Chroma server.
         - index_directory (Optional[str]): The directory to persist the Vector Store to.
-        - inputs (Optional[List[Record]]): The input records to use for the Vector Store.
+        - inputs (Optional[List[Data]]): The input data to use for the Vector Store.
         - chroma_server_cors_allow_origins (List[str]): The CORS allow origins for the Chroma server.
         - chroma_server_host (Optional[str]): The host for the Chroma server.
         - chroma_server_http_port (Optional[int]): The HTTP port for the Chroma server.
@@ -113,23 +113,23 @@
             collection_name=collection_name,
         )
         if allow_duplicates:
-            stored_records = []
+            stored_data = []
         else:
-            stored_records = chroma_collection_to_records(chroma.get())
+            stored_data = chroma_collection_to_data(chroma.get())
         _stored_documents_without_id = []
-        for record in deepcopy(stored_records):
-            del record.id
-            _stored_documents_without_id.append(record)
+        for value in deepcopy(stored_data):
+            del value.id
+            _stored_documents_without_id.append(value)
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 if _input not in _stored_documents_without_id:
                     documents.append(_input.to_lc_document())
             else:
-                raise ValueError("Inputs must be a Record objects.")
+                raise ValueError("Inputs must be a Data objects.")
         if documents and embedding is not None:
             chroma.add_documents(documents)
-        self.status = stored_records
+        self.status = stored_data
         return chroma


@@ -5,7 +5,7 @@ from langchain_core.retrievers import BaseRetriever
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings, VectorStore
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class CouchbaseComponent(CustomComponent):
@@ -25,7 +25,7 @@
     def build_config(self):
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "couchbase_connection_string": {"display_name": "Couchbase Cluster connection string", "required": True},
             "couchbase_username": {"display_name": "Couchbase username", "required": True},
@@ -39,7 +39,7 @@
     def build(
         self,
         embedding: Embeddings,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         bucket_name: str = "",
         scope_name: str = "",
         collection_name: str = "",
@@ -68,7 +68,7 @@
             raise ValueError(f"Failed to connect to Couchbase: {e}")
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)


@@ -6,7 +6,7 @@ from langchain_core.vectorstores import VectorStore
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class FAISSComponent(CustomComponent):
@@ -16,7 +16,7 @@
     def build_config(self):
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "folder_path": {
                 "display_name": "Folder Path",
@@ -28,13 +28,13 @@
     def build(
         self,
         embedding: Embeddings,
-        inputs: List[Record],
+        inputs: List[Data],
         folder_path: str,
         index_name: str = "langflow_index",
     ) -> Union[VectorStore, FAISS, BaseRetriever]:
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)


@@ -4,7 +4,7 @@ from langchain_community.vectorstores.mongodb_atlas import MongoDBAtlasVectorSea
 from langflow.custom import CustomComponent
 from langflow.field_typing import Embeddings
-from langflow.schema import Record
+from langflow.schema import Data
 
 
 class MongoDBAtlasComponent(CustomComponent):
@@ -14,7 +14,7 @@
     def build_config(self):
         return {
-            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Data"]},
             "embedding": {"display_name": "Embedding"},
             "collection_name": {"display_name": "Collection Name"},
             "db_name": {"display_name": "Database Name"},
@@ -25,7 +25,7 @@
     def build(
         self,
         embedding: Embeddings,
-        inputs: Optional[List[Record]] = None,
+        inputs: Optional[List[Data]] = None,
         collection_name: str = "",
         db_name: str = "",
         index_name: str = "",
@@ -42,7 +42,7 @@
             raise ValueError(f"Failed to connect to MongoDB Atlas: {e}")
         documents = []
         for _input in inputs or []:
-            if isinstance(_input, Record):
+            if isinstance(_input, Data):
                 documents.append(_input.to_lc_document())
             else:
                 documents.append(_input)
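Across the hunks above, the same conversion loop recurs: each component turns `Data` inputs into LangChain documents before writing to the vector store, passing other inputs through unchanged. A minimal sketch of that pattern, with a hypothetical stand-in `Data` class instead of the real `langflow.schema.Data`:

```python
from dataclasses import dataclass, field


@dataclass
class Data:
    """Hypothetical stand-in for langflow.schema.Data."""
    text: str
    data: dict = field(default_factory=dict)

    def to_lc_document(self) -> dict:
        # The real method returns a langchain_core Document;
        # a plain dict stands in for it here.
        return {"page_content": self.text, "metadata": self.data}


def collect_documents(inputs):
    """Mirror of the loop shared by the FAISS, Couchbase, and MongoDB hunks."""
    documents = []
    for _input in inputs or []:
        if isinstance(_input, Data):
            documents.append(_input.to_lc_document())
        else:
            documents.append(_input)  # non-Data inputs pass through unchanged
    return documents
```

The Chroma variant is stricter: instead of passing non-`Data` inputs through, it raises a `ValueError`.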

Some files were not shown because too many files have changed in this diff.