Merge zustand/io/migration into codeShortcut

This commit is contained in:
igorrCarvalho 2024-03-07 18:21:47 -03:00
commit b16848c0b4
211 changed files with 6695 additions and 1763 deletions


@@ -83,15 +83,14 @@ The CustomComponent class serves as the foundation for creating custom component
| _`file_types: List[str]`_ | This is a requirement if the _`field_type`_ is _file_. Defines which file types will be accepted. For example, _json_, _yaml_ or _yml_. |
| _`range_spec: langflow.field_typing.RangeSpec`_ | This is a requirement if the _`field_type`_ is _`float`_. Defines the range of values accepted and the step size. If none is defined, the default is _`[-1, 1, 0.1]`_. |
| _`title_case: bool`_ | Formats the name of the field when _`display_name`_ is not defined. Set it to False to keep the name as you set it in the _`build`_ method. |
| _`refresh_button: bool`_ | If set to True, a button appears to the right of the field; when clicked, it calls the _`update_build_config`_ method, which takes the _`build_config`_, the name of the field (_`field_name`_), and the latest value of the field (_`field_value`_). This is useful when you want to update the _`build_config`_ based on the value of the field. |
| _`real_time_refresh: bool`_ | If set to True, the _`update_build_config`_ method will be called every time the field value changes. |
<Admonition type="info" label="Tip">
Keys _`options`_ and _`value`_ can receive a method or function that returns a list of strings or a string, respectively. This is useful when you want to dynamically generate the options or the default value of a field. A refresh button will appear next to the field in the component, allowing the user to update the options or the default value.
</Admonition>
<Admonition type="info" label="Tip">
By using the _`update_build_config`_ method, you can update the _`build_config`_ however you want, whether or not the update depends on the value of the field.
</Admonition>
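As a hedged illustration of the `update_build_config` hook described above — the `model`/`variant` fields and their option values are invented for this sketch, not part of Langflow's API:

```python
# Hypothetical sketch: a standalone update_build_config following the
# signature described above (build_config, field_name, field_value).
def update_build_config(build_config: dict, field_name: str, field_value):
    if field_name == "model":
        # Repopulate a dependent field's options based on the new value.
        options_by_model = {"small": ["cpu"], "large": ["cpu", "gpu"]}
        build_config["variant"]["options"] = options_by_model.get(field_value, [])
    return build_config
```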
- The CustomComponent class also provides helpful methods for specific tasks (e.g., to load and use other flows from the Langflow platform):


@@ -1,6 +1,6 @@
import Admonition from "@theme/Admonition";
import Admonition from '@theme/Admonition';
# I/O
# Inputs
### ChatInput
@@ -22,27 +22,6 @@ This component is designed to get user input from the chat.
</p>
</Admonition>
### ChatOutput
This component is designed to send a message to the chat.
**Params**
- **Sender Type:** specifies the sender type. Defaults to _`"Machine"`_. Options are _`"Machine"`_ and _`"User"`_.
- **Sender Name:** specifies the name of the sender. Defaults to _`"AI"`_.
- **Session ID:** specifies the session ID of the chat history. If provided, the message will be saved in the Message History.
- **Message:** specifies the message text.
<Admonition type="note" title="Note">
<p>
If _`As Record`_ is _`true`_ and the _`Message`_ is a _`Record`_, the data of the _`Record`_ will be updated with the _`Sender`_, _`Sender Name`_, and _`Session ID`_.
</p>
</Admonition>
### TextInput
This component is designed for simple text input, allowing users to pass textual data to subsequent components in the workflow. It's particularly useful for scenarios where a brief user input is required to initiate or influence the flow.
@@ -58,16 +37,3 @@ This component is designed for simple text input, allowing users to pass textual
</p>
</Admonition>
### TextOutput
This component is designed to display text data to the user. It's particularly useful for scenarios where you don't want to send the text data to the chat, but still want to display it.
**Params**
- **Value:** Specifies the text data to be displayed. If no value is provided, it defaults to an empty string.
<Admonition type="note" title="Note">
<p>
The `TextOutput` component serves as a straightforward means for displaying text data, letting you observe textual values throughout your flow without sending them to the chat.
</p>
</Admonition>


@@ -0,0 +1,37 @@
import Admonition from '@theme/Admonition';
# Outputs
### ChatOutput
This component is designed to send a message to the chat.
**Params**
- **Sender Type:** specifies the sender type. Defaults to _`"Machine"`_. Options are _`"Machine"`_ and _`"User"`_.
- **Sender Name:** specifies the name of the sender. Defaults to _`"AI"`_.
- **Session ID:** specifies the session ID of the chat history. If provided, the message will be saved in the Message History.
- **Message:** specifies the message text.
<Admonition type="note" title="Note">
<p>
If _`As Record`_ is _`true`_ and the _`Message`_ is a _`Record`_, the data of the _`Record`_ will be updated with the _`Sender`_, _`Sender Name`_, and _`Session ID`_.
</p>
</Admonition>
### TextOutput
This component is designed to display text data to the user. It's particularly useful for scenarios where you don't want to send the text data to the chat, but still want to display it.
**Params**
- **Value:** Specifies the text data to be displayed. If no value is provided, it defaults to an empty string.
<Admonition type="note" title="Note">
<p>
The `TextOutput` component serves as a straightforward means for displaying text data, letting you observe textual values throughout your flow without sending them to the chat.
</p>
</Admonition>
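A hedged sketch of the _As Record_ behavior in the note above — the dict shapes here are illustrative, not Langflow's internal types:

```python
# Illustrative only: a record modeled as a plain dict whose "data" gets
# updated with sender info, mirroring the documented As Record behavior.
def update_record_data(record: dict, sender: str, sender_name: str, session_id: str) -> dict:
    record["data"].update(
        {"sender": sender, "sender_name": sender_name, "session_id": session_id}
    )
    return record
```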


@@ -7,7 +7,8 @@ import ReactPlayer from "react-player";
## Compose
Creating flows with Langflow is easy. Drag sidebar components onto the canvas and connect them together to create your pipeline. Langflow provides a range of [LangChain components](https://python.langchain.com/docs/modules/) to choose from, including LLMs, prompt serializers, agents, and chains.
Creating flows with Langflow is easy. Drag sidebar components onto the canvas and connect them together to create your pipeline.
Langflow provides a range of Components to choose from, including **Chat Input**, **Chat Output**, **API Request** and **Prompt**.
<ZoomableImage
alt="Docusaurus themed image"
@@ -17,9 +18,9 @@ Creating flows with Langflow is easy. Drag sidebar components onto the canvas an
}}
/>
## Fork
## Starter Flows
The easiest way to start with Langflow is by forking a **community example**. Forking an example stores a copy in your project collection, allowing you to edit and save the modified version as a new flow.
Langflow provides a range of starter flows to help you get started. These flows are pre-built and can be used as a starting point for your own flows.
<div
style={{ marginBottom: "20px", display: "flex", justifyContent: "center" }}
@@ -27,9 +28,21 @@ The easiest way to start with Langflow is by forking a **community example**. Fo
<ReactPlayer playing controls url="/videos/langflow_fork.mp4" />
</div>
## Build
## Defining Inputs and Outputs
Each flow can have multiple inputs and outputs. These can be defined by placing **Inputs** and **Outputs** components on the canvas.
The **Inputs** components define the inputs to the flow.
Whenever you place an Input component on the canvas, you can interactively change its value from the Interactive Panel.
The **Text Input** component allows you to define a text input, and the **Chat Input** component allows you to use the chat input from the Interactive Panel.
The **Outputs** components define the outputs of the flow and work similarly to the Inputs components.
Both Inputs and Outputs components can be connected to other components on the canvas, and they also define how the flow's API works.
Building a flow means validating that each component's prerequisites are fulfilled and that it is properly instantiated. When a chat message is sent, the flow will run for the first time, executing the pipeline.
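Since the Inputs and Outputs placed on the canvas also shape the flow's API, a run request might carry a body like the following — a sketch only, with hypothetical component names:

```python
# Hypothetical run-request payload: "inputs" target Input components by
# name or id, "outputs" select which Output components' results to return.
import json

payload = {
    "inputs": [{"components": ["Chat Input"], "input_value": "Hello!"}],
    "outputs": ["Chat Output"],
    "stream": False,
}
body = json.dumps(payload)
```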
<div
style={{ marginBottom: "20px", display: "flex", justifyContent: "center" }}


@@ -33,12 +33,18 @@ module.exports = {
label: "Component Reference",
collapsed: false,
items: [
"components/inputs",
"components/outputs",
"components/data",
"components/prompts",
"components/models",
"components/helpers",
"components/experimental",
"components/agents",
"components/chains",
"components/custom",
"components/embeddings",
"components/io",
"components/llms",
"components/model_specs",
"components/loaders",
"components/memories",
"components/prompts",


@@ -62,12 +62,26 @@ def run_migrations_online() -> None:
and associate a connection with the context.
"""
from langflow.services.deps import get_db_service
    try:
        from langflow.services.database.factory import DatabaseServiceFactory
        from langflow.services.deps import get_db_service
        from langflow.services.manager import (
            initialize_settings_service,
            service_manager,
        )
        from langflow.services.schema import ServiceType

        initialize_settings_service()
        service_manager.register_factory(
            DatabaseServiceFactory(), [ServiceType.SETTINGS_SERVICE]
        )
        connectable = get_db_service().engine
    except Exception as e:
        logger.error(f"Error getting database engine: {e}")
        url = os.getenv("LANGFLOW_DATABASE_URL")
        url = url or config.get_main_option("sqlalchemy.url")
        config.set_main_option("sqlalchemy.url", url)
        connectable = engine_from_config(
            config.get_section(config.config_ini_section, {}),
            prefix="sqlalchemy.",


@@ -23,10 +23,12 @@ depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}
def upgrade() -> None:
    conn = op.get_bind()
    inspector = Inspector.from_engine(conn)  # type: ignore
    table_names = inspector.get_table_names()
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    conn = op.get_bind()
    inspector = Inspector.from_engine(conn)  # type: ignore
    table_names = inspector.get_table_names()
    ${downgrades if downgrades else "pass"}
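The `table_names` captured in the template above lets generated migrations guard their operations; a minimal standalone sketch of that guard logic (outside Alembic, for illustration only):

```python
# Standalone sketch of the idempotency guard the template enables:
# only add a column when its table exists and the column is absent.
def should_add_column(table_names, column_names, table, column) -> bool:
    return table in table_names and column not in column_names
```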


@@ -0,0 +1,56 @@
"""Add icon and icon_bg_color to Flow
Revision ID: 63b9c451fd30
Revises: bc2f01c40e4a
Create Date: 2024-03-06 10:53:47.148658
"""
from typing import Sequence, Union
import sqlalchemy as sa
import sqlmodel
from alembic import op
from sqlalchemy.engine.reflection import Inspector
# revision identifiers, used by Alembic.
revision: str = "63b9c451fd30"
down_revision: Union[str, None] = "bc2f01c40e4a"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
    conn = op.get_bind()
    inspector = Inspector.from_engine(conn)  # type: ignore
    table_names = inspector.get_table_names()
    column_names = [column["name"] for column in inspector.get_columns("flow")]
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table("flow", schema=None) as batch_op:
        if "icon" not in column_names:
            batch_op.add_column(
                sa.Column("icon", sqlmodel.sql.sqltypes.AutoString(), nullable=True)
            )
        if "icon_bg_color" not in column_names:
            batch_op.add_column(
                sa.Column(
                    "icon_bg_color", sqlmodel.sql.sqltypes.AutoString(), nullable=True
                )
            )
    # ### end Alembic commands ###


def downgrade() -> None:
    conn = op.get_bind()
    inspector = Inspector.from_engine(conn)  # type: ignore
    table_names = inspector.get_table_names()
    column_names = [column["name"] for column in inspector.get_columns("flow")]
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table("flow", schema=None) as batch_op:
        if "icon" in column_names:
            batch_op.drop_column("icon")
        if "icon_bg_color" in column_names:
            batch_op.drop_column("icon_bg_color")
    # ### end Alembic commands ###


@@ -1,9 +1,7 @@
from typing import Optional
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, field_validator, model_serializer
from langflow.interface.utils import extract_input_variables_from_prompt
from langflow.template.frontend_node.base import FrontendNode
@@ -80,24 +78,6 @@ INVALID_NAMES = {
}
def validate_prompt(template: str):
    input_variables = extract_input_variables_from_prompt(template)

    # Check if there are invalid characters in the input_variables
    input_variables = check_input_variables(input_variables)
    if any(var in INVALID_NAMES for var in input_variables):
        raise ValueError(
            f"Invalid input variables. None of the variables can be named {', '.join(INVALID_NAMES)}. "
        )

    try:
        PromptTemplate(template=template, input_variables=input_variables)
    except Exception as exc:
        raise ValueError(f"Invalid prompt: {exc}") from exc
    return input_variables


def is_json_like(var):
    if var.startswith("{{") and var.endswith("}}"):
        # If it is a double-brace variable


@@ -93,7 +93,7 @@ async def build_vertex(
current_user=Depends(get_current_active_user),
):
"""Build a vertex instead of the entire graph."""
{"inputs": {"input_value": "some value"}}
start_time = time.perf_counter()
next_vertices_ids = []
try:
@@ -110,7 +110,7 @@ async def build_vertex(
vertex = graph.get_vertex(vertex_id)
try:
if not vertex.pinned or not vertex._built:
if not vertex.frozen or not vertex._built:
inputs_dict = inputs.model_dump() if inputs else {}
await vertex.build(user_id=current_user.id, inputs=inputs_dict)
@@ -155,10 +155,10 @@ async def build_vertex(
result_data_response.duration = duration
result_data_response.timedelta = timedelta
vertex.add_build_time(timedelta)
inactive_vertices = None
if graph.inactive_vertices:
inactive_vertices = list(graph.inactive_vertices)
graph.reset_inactive_vertices()
inactivated_vertices = None
inactivated_vertices = list(graph.inactivated_vertices)
graph.reset_inactivated_vertices()
graph.reset_activated_vertices()
chat_service.set_cache(flow_id, graph)
# graph.stop_vertex tells us if the user asked
@@ -169,8 +169,8 @@ async def build_vertex(
next_vertices_ids = [graph.stop_vertex]
build_response = VertexBuildResponse(
inactivated_vertices=inactivated_vertices,
next_vertices_ids=next_vertices_ids,
inactive_vertices=inactive_vertices,
valid=valid,
params=params,
id=vertex.id,
@@ -227,7 +227,7 @@ async def build_vertex_stream(
)
yield str(stream_data)
elif not vertex.pinned or not vertex._built:
elif not vertex.frozen or not vertex._built:
logger.debug(f"Streaming vertex {vertex_id}")
stream_data = StreamData(
event="message",


@@ -13,6 +13,7 @@ from langflow.api.v1.schemas import (
ProcessResponse,
RunResponse,
TaskStatusResponse,
Tweaks,
UploadFileResponse,
)
from langflow.interface.custom.custom_component import CustomComponent
@@ -44,9 +45,10 @@ def get_all(
logger.debug("Building langchain types dict")
try:
all_types_dict = get_all_types_dict(settings_service)
all_types_dict = get_all_types_dict(settings_service.settings.COMPONENTS_PATH)
return all_types_dict
except Exception as exc:
logger.exception(exc)
raise HTTPException(status_code=500, detail=str(exc)) from exc
@@ -56,19 +58,60 @@ def get_all(
async def run_flow_with_caching(
session: Annotated[Session, Depends(get_session)],
flow_id: str,
inputs: Optional[InputValueRequest] = None,
tweaks: Optional[dict] = None,
inputs: Optional[List[InputValueRequest]] = None,
outputs: Optional[List[str]] = None,
tweaks: Annotated[Optional[Tweaks], Body(embed=True)] = None, # noqa: F821
stream: Annotated[bool, Body(embed=True)] = False, # noqa: F821
session_id: Annotated[Union[None, str], Body(embed=True)] = None, # noqa: F821
api_key_user: User = Depends(api_key_security),
session_service: SessionService = Depends(get_session_service),
):
"""
Executes a specified flow by ID with optional input values, output selection, tweaks, and streaming capability.
This endpoint supports running flows with caching to enhance performance and efficiency.
### Parameters:
- `flow_id` (str): The unique identifier of the flow to be executed.
- `inputs` (List[InputValueRequest], optional): A list of inputs specifying the input values and components for the flow. Each input can target specific components and provide custom values.
- `outputs` (List[str], optional): A list of output names to retrieve from the executed flow. If not provided, all outputs are returned.
- `tweaks` (Optional[Tweaks], optional): A dictionary of tweaks to customize the flow execution. The tweaks can be used to modify the flow's parameters and components. Tweaks can be overridden by the input values.
- `stream` (bool, optional): Specifies whether the results should be streamed. Defaults to False.
- `session_id` (Union[None, str], optional): An optional session ID to utilize existing session data for the flow execution.
- `api_key_user` (User): The user associated with the current API key. Automatically resolved from the API key.
- `session_service` (SessionService): The session service object for managing flow sessions.
### Returns:
A `RunResponse` object containing the selected outputs (or all if not specified) of the executed flow and the session ID. The structure of the response accommodates multiple inputs, providing a nested list of outputs for each input.
### Raises:
HTTPException: Indicates issues with finding the specified flow, invalid input formats, or internal errors during flow execution.
### Example usage:
```json
POST /run/{flow_id}
Payload:
{
"inputs": [
{"components": ["component1"], "input_value": "value1"},
{"components": ["component3"], "input_value": "value2"}
],
"outputs": ["Component Name", "component_id"],
"tweaks": {"parameter_name": "value", "Component Name": {"parameter_name": "value"}, "component_id": {"parameter_name": "value"}},
"stream": false
}
```
This endpoint facilitates complex flow executions with customized inputs, outputs, and configurations, catering to diverse application requirements.
"""
try:
if inputs is not None:
input_values_dict: dict[str, Union[str, list[str]]] = inputs.model_dump()
else:
input_values_dict = {}
if outputs is None:
outputs = []
if session_id:
session_data = await session_service.load_session(
session_id, flow_id=flow_id
@@ -82,6 +125,7 @@ async def run_flow_with_caching(
flow_id=flow_id,
session_id=session_id,
inputs=input_values_dict,
outputs=outputs,
artifacts=artifacts,
session_service=session_service,
stream=stream,
@@ -107,6 +151,7 @@ async def run_flow_with_caching(
flow_id=flow_id,
session_id=session_id,
inputs=input_values_dict,
outputs=outputs,
artifacts={},
session_service=session_service,
stream=stream,
@@ -262,7 +307,10 @@ async def custom_component_update(
component = CustomComponent(code=raw_code.code)
component_node = build_custom_component_template(
component, user_id=user.id, update_field=raw_code.field
component,
user_id=user.id,
update_field=raw_code.field,
update_field_value=raw_code.field_value,
)
# Update the field
return component_node
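The `/run` endpoint's docstring above states that tweaks can be overridden by input values; a hedged sketch of that precedence (the merge helper is hypothetical, not a Langflow function):

```python
# Hypothetical illustration of the documented precedence: tweaks customize
# a component's parameters, but explicit input values win.
def merge_params(tweaks: dict, inputs: dict) -> dict:
    merged = dict(tweaks)
    merged.update(inputs)  # input values override tweaks
    return merged
```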


@@ -5,14 +5,22 @@ from uuid import UUID
import orjson
from fastapi import APIRouter, Depends, File, HTTPException, UploadFile
from fastapi.encoders import jsonable_encoder
from loguru import logger
from sqlmodel import Session, select
from langflow.api.utils import remove_api_keys, validate_is_component
from langflow.api.v1.schemas import FlowListCreate, FlowListRead
from langflow.initial_setup.setup import STARTER_FOLDER_NAME
from langflow.services.auth.utils import get_current_active_user
from langflow.services.database.models.flow import Flow, FlowCreate, FlowRead, FlowUpdate
from langflow.services.database.models.flow import (
Flow,
FlowCreate,
FlowRead,
FlowUpdate,
)
from langflow.services.database.models.user.model import User
from langflow.services.deps import get_session, get_settings_service
from langflow.services.settings.service import SettingsService
# build router
router = APIRouter(prefix="/flows", tags=["Flows"])
@@ -42,11 +50,36 @@ def create_flow(
def read_flows(
*,
current_user: User = Depends(get_current_active_user),
session: Session = Depends(get_session),
settings_service: "SettingsService" = Depends(get_settings_service),
):
"""Read all flows."""
try:
flows = current_user.flows
auth_settings = settings_service.auth_settings
if auth_settings.AUTO_LOGIN:
flows = session.exec(
select(Flow).where(
(Flow.user_id == None) | (Flow.user_id == current_user.id) # noqa
)
).all()
else:
flows = current_user.flows
flows = validate_is_component(flows)
flow_ids = [flow.id for flow in flows]
# with the session get the flows that DO NOT have a user_id
try:
example_flows = session.exec(
select(Flow).where(
Flow.user_id == None, # noqa
Flow.folder == STARTER_FOLDER_NAME,
)
).all()
for example_flow in example_flows:
if example_flow.id not in flow_ids:
flows.append(example_flow)
except Exception as e:
logger.error(e)
except Exception as e:
raise HTTPException(status_code=500, detail=str(e)) from e
return [jsonable_encoder(flow) for flow in flows]
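The merge logic in `read_flows` above appends starter-folder examples only when their ids are not already present; sketched standalone with plain dicts in place of Flow models:

```python
# Standalone sketch of the de-duplicating merge in read_flows:
# append example flows whose id is not already among the user's flows.
def merge_flows(user_flows: list, example_flows: list) -> list:
    flow_ids = {flow["id"] for flow in user_flows}
    merged = list(user_flows)
    for example in example_flows:
        if example["id"] not in flow_ids:
            merged.append(example)
    return merged
```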
@@ -58,9 +91,18 @@ def read_flow(
session: Session = Depends(get_session),
flow_id: UUID,
current_user: User = Depends(get_current_active_user),
settings_service: "SettingsService" = Depends(get_settings_service),
):
"""Read a flow."""
if user_flow := (session.exec(select(Flow).where(Flow.id == flow_id, Flow.user_id == current_user.id)).first()):
auth_settings = settings_service.auth_settings
stmt = select(Flow).where(Flow.id == flow_id)
if auth_settings.AUTO_LOGIN:
# If auto login is enabled, user_id can be current_user.id or None
# so write an OR
stmt = stmt.where(
(Flow.user_id == current_user.id) | (Flow.user_id == None) # noqa
) # noqa
if user_flow := session.exec(stmt).first():
return user_flow
else:
raise HTTPException(status_code=404, detail="Flow not found")
@@ -77,7 +119,12 @@ def update_flow(
):
"""Update a flow."""
db_flow = read_flow(session=session, flow_id=flow_id, current_user=current_user)
db_flow = read_flow(
session=session,
flow_id=flow_id,
current_user=current_user,
settings_service=settings_service,
)
if not db_flow:
raise HTTPException(status_code=404, detail="Flow not found")
flow_data = flow.model_dump(exclude_unset=True)
@@ -99,9 +146,15 @@ def delete_flow(
session: Session = Depends(get_session),
flow_id: UUID,
current_user: User = Depends(get_current_active_user),
settings_service=Depends(get_settings_service),
):
"""Delete a flow."""
flow = read_flow(session=session, flow_id=flow_id, current_user=current_user)
flow = read_flow(
session=session,
flow_id=flow_id,
current_user=current_user,
settings_service=settings_service,
)
if not flow:
raise HTTPException(status_code=404, detail="Flow not found")
session.delete(flow)
@@ -109,9 +162,6 @@ def delete_flow(
return {"message": "Flow deleted successfully"}
# Define a new model to handle multiple flows
@router.post("/batch/", response_model=List[FlowRead], status_code=201)
def create_flows(
*,


@@ -4,7 +4,7 @@ from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from uuid import UUID
from pydantic import BaseModel, Field, field_validator, model_serializer
from pydantic import BaseModel, Field, RootModel, field_validator, model_serializer
from langflow.services.database.models.api_key.model import ApiKeyRead
from langflow.services.database.models.base import orjson_dumps
@@ -158,14 +158,13 @@ class StreamData(BaseModel):
data: dict
def __str__(self) -> str:
return (
f"event: {self.event}\ndata: {orjson_dumps(self.data, indent_2=False)}\n\n"
)
return f"event: {self.event}\ndata: {orjson_dumps(self.data, indent_2=False)}\n\n"
class CustomComponentCode(BaseModel):
code: str
field: Optional[str] = None
field_value: Optional[Any] = None
frontend_node: Optional[dict] = None
@@ -229,8 +228,8 @@ class ResultDataResponse(BaseModel):
class VertexBuildResponse(BaseModel):
id: Optional[str] = None
inactivated_vertices: Optional[List[str]] = None
next_vertices_ids: Optional[List[str]] = None
inactive_vertices: Optional[List[str]] = None
valid: bool
params: Optional[Any] = Field(default_factory=dict)
"""JSON string of the params."""
@@ -245,4 +244,49 @@ class VerticesBuiltResponse(BaseModel):
class InputValueRequest(BaseModel):
input_value: str
components: Optional[List[str]] = None
input_value: Optional[str] = None
# add an example
model_config = {
"json_schema_extra": {
"examples": [
{
"components": ["components_id", "Component Name"],
"input_value": "input_value",
},
{"components": ["Component Name"], "input_value": "input_value"},
{"input_value": "input_value"},
]
}
}
class Tweaks(RootModel):
    root: dict[str, Union[str, dict[str, str]]] = Field(
        description="A dictionary of tweaks to adjust the flow's execution. Allows customizing flow behavior dynamically. All tweaks are overridden by the input values.",
    )

    model_config = {
        "json_schema_extra": {
            "examples": [
                {
                    "parameter_name": "value",
                    "Component Name": {"parameter_name": "value"},
                    "component_id": {"parameter_name": "value"},
                }
            ]
        }
    }

    # This should behave like a dict
    def __getitem__(self, key):
        return self.root[key]

    def __setitem__(self, key, value):
        self.root[key] = value

    def __delitem__(self, key):
        del self.root[key]

    def items(self):
        return self.root.items()
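Stripped of pydantic, the `Tweaks` wrapper above amounts to delegating mapping methods to an inner dict; a minimal standalone sketch of that pattern:

```python
# Minimal standalone version of the dict-delegation pattern used by Tweaks
# (pydantic's RootModel omitted so only the mapping behavior remains).
class TweaksLike:
    def __init__(self, root: dict):
        self.root = dict(root)

    def __getitem__(self, key):
        return self.root[key]

    def __setitem__(self, key, value):
        self.root[key] = value

    def __delitem__(self, key):
        del self.root[key]

    def items(self):
        return self.root.items()
```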


@@ -1,14 +1,20 @@
from fastapi import APIRouter, HTTPException
from loguru import logger
from langflow.api.v1.base import (
Code,
CodeValidationResponse,
PromptValidationResponse,
ValidatePromptRequest,
)
from langflow.base.prompts.utils import (
add_new_variables_to_template,
get_old_custom_fields,
remove_old_variables_from_template,
update_input_variables_field,
validate_prompt,
)
from langflow.template.field.base import TemplateField
from langflow.utils.validate import PROMPT_INPUT_TYPES, validate_code
from loguru import logger
from langflow.utils.validate import validate_code
# build router
router = APIRouter(prefix="/validate", tags=["Validate"])
@@ -36,13 +42,28 @@ def post_validate_prompt(prompt_request: ValidatePromptRequest):
input_variables=input_variables,
frontend_node=None,
)
old_custom_fields = get_old_custom_fields(prompt_request)
old_custom_fields = get_old_custom_fields(
prompt_request.custom_fields, prompt_request.name
)
add_new_variables_to_template(input_variables, prompt_request)
add_new_variables_to_template(
input_variables,
prompt_request.custom_fields,
prompt_request.frontend_node.template,
prompt_request.name,
)
remove_old_variables_from_template(old_custom_fields, input_variables, prompt_request)
remove_old_variables_from_template(
old_custom_fields,
input_variables,
prompt_request.custom_fields,
prompt_request.frontend_node.template,
prompt_request.name,
)
update_input_variables_field(input_variables, prompt_request)
update_input_variables_field(
input_variables, prompt_request.frontend_node.template
)
return PromptValidationResponse(
input_variables=input_variables,
@@ -51,70 +72,3 @@ def post_validate_prompt(prompt_request: ValidatePromptRequest):
except Exception as e:
logger.exception(e)
raise HTTPException(status_code=500, detail=str(e)) from e
def get_old_custom_fields(prompt_request):
    try:
        if len(prompt_request.frontend_node.custom_fields) == 1 and prompt_request.name == "":
            # If there is only one custom field and the name is empty string
            # then we are dealing with the first prompt request after the node was created
            prompt_request.name = list(prompt_request.frontend_node.custom_fields.keys())[0]

        old_custom_fields = prompt_request.frontend_node.custom_fields[prompt_request.name]
        if old_custom_fields is None:
            old_custom_fields = []
        old_custom_fields = old_custom_fields.copy()
    except KeyError:
        old_custom_fields = []
        prompt_request.frontend_node.custom_fields[prompt_request.name] = []
    return old_custom_fields


def add_new_variables_to_template(input_variables, prompt_request):
    for variable in input_variables:
        try:
            template_field = TemplateField(
                name=variable,
                display_name=variable,
                field_type="str",
                show=True,
                advanced=False,
                multiline=True,
                input_types=PROMPT_INPUT_TYPES,
                value="",  # Set the value to empty string
            )
            if variable in prompt_request.frontend_node.template:
                # Set the new field with the old value
                template_field.value = prompt_request.frontend_node.template[variable]["value"]

            prompt_request.frontend_node.template[variable] = template_field.to_dict()

            # Check if variable is not already in the list before appending
            if variable not in prompt_request.frontend_node.custom_fields[prompt_request.name]:
                prompt_request.frontend_node.custom_fields[prompt_request.name].append(variable)
        except Exception as exc:
            logger.exception(exc)
            raise HTTPException(status_code=500, detail=str(exc)) from exc


def remove_old_variables_from_template(old_custom_fields, input_variables, prompt_request):
    for variable in old_custom_fields:
        if variable not in input_variables:
            try:
                # Remove the variable from custom_fields associated with the given name
                if variable in prompt_request.frontend_node.custom_fields[prompt_request.name]:
                    prompt_request.frontend_node.custom_fields[prompt_request.name].remove(variable)

                # Remove the variable from the template
                prompt_request.frontend_node.template.pop(variable, None)
            except Exception as exc:
                logger.exception(exc)
                raise HTTPException(status_code=500, detail=str(exc)) from exc


def update_input_variables_field(input_variables, prompt_request):
    if "input_variables" in prompt_request.frontend_node.template:
        prompt_request.frontend_node.template["input_variables"]["value"] = input_variables


@@ -0,0 +1,83 @@
from concurrent import futures
from pathlib import Path
from typing import List, Optional, Text
from langflow.schema.schema import Record
def is_hidden(path: Path) -> bool:
    return path.name.startswith(".")


def retrieve_file_paths(
    path: str,
    types: List[str],
    load_hidden: bool,
    recursive: bool,
    depth: int,
) -> List[str]:
    path_obj = Path(path)
    if not path_obj.exists() or not path_obj.is_dir():
        raise ValueError(f"Path {path} must exist and be a directory.")

    def match_types(p: Path) -> bool:
        return any(p.suffix == f".{t}" for t in types) if types else True

    def is_not_hidden(p: Path) -> bool:
        return not is_hidden(p) or load_hidden

    def walk_level(directory: Path, max_depth: int):
        directory = directory.resolve()
        prefix_length = len(directory.parts)
        for p in directory.rglob("*" if recursive else "[!.]*"):
            if len(p.parts) - prefix_length <= max_depth:
                yield p

    glob = "**/*" if recursive else "*"
    paths = walk_level(path_obj, depth) if depth else path_obj.glob(glob)
    file_paths = [Text(p) for p in paths if p.is_file() and match_types(p) and is_not_hidden(p)]

    return file_paths


def parse_file_to_record(file_path: str, silent_errors: bool) -> Optional[Record]:
    # Use the partition function to load the file
    from unstructured.partition.auto import partition  # type: ignore

    try:
        elements = partition(file_path)
    except Exception as e:
        if not silent_errors:
            raise ValueError(f"Error loading file {file_path}: {e}") from e
        return None

    # Create a Record
    text = "\n\n".join([Text(el) for el in elements])
    metadata = elements.metadata if hasattr(elements, "metadata") else {}
    metadata["file_path"] = file_path
    record = Record(text=text, data=metadata)
    return record


def get_elements(
    file_paths: List[str],
    silent_errors: bool,
    max_concurrency: int,
    use_multithreading: bool,
) -> List[Optional[Record]]:
    if use_multithreading:
        records = parallel_load_records(file_paths, silent_errors, max_concurrency)
    else:
        records = [parse_file_to_record(file_path, silent_errors) for file_path in file_paths]
    records = list(filter(None, records))
    return records


def parallel_load_records(file_paths: List[str], silent_errors: bool, max_concurrency: int) -> List[Optional[Record]]:
    with futures.ThreadPoolExecutor(max_workers=max_concurrency) as executor:
        loaded_files = executor.map(
            lambda file_path: parse_file_to_record(file_path, silent_errors),
            file_paths,
        )
    # loaded_files is an iterator, so we need to convert it to a list
    return list(loaded_files)


@@ -59,8 +59,8 @@ class ChatComponent(CustomComponent):
)
else:
record = Record(
text=message,
data={
"text": message,
"session_id": session_id,
"sender": sender,
"sender_name": sender_name,


@@ -0,0 +1,141 @@
from fastapi import HTTPException
from langchain.prompts import PromptTemplate
from langchain_core.documents import Document
from loguru import logger
from langflow.api.v1.base import INVALID_NAMES, check_input_variables
from langflow.interface.utils import extract_input_variables_from_prompt
from langflow.schema import Record
from langflow.template.field.prompt import DefaultPromptField
def dict_values_to_string(d: dict) -> dict:
"""
Converts the values of a dictionary to strings.
Args:
d (dict): The dictionary whose values need to be converted.
Returns:
dict: The dictionary with values converted to strings.
"""
# Do something similar to the above
for key, value in d.items():
# it could be a list of records or documents or strings
if isinstance(value, list):
for i, item in enumerate(value):
if isinstance(item, Record):
d[key][i] = record_to_string(item)
elif isinstance(item, Document):
d[key][i] = document_to_string(item)
elif isinstance(value, Record):
d[key] = record_to_string(value)
elif isinstance(value, Document):
d[key] = document_to_string(value)
return d
def record_to_string(record: Record) -> str:
"""
Convert a record to a string.
Args:
record (Record): The record to convert.
Returns:
str: The record as a string.
"""
return record.text
def document_to_string(document: Document) -> str:
"""
Convert a document to a string.
Args:
document (Document): The document to convert.
Returns:
str: The document as a string.
"""
return document.page_content
def validate_prompt(prompt_template: str, silent_errors: bool = False) -> list[str]:
input_variables = extract_input_variables_from_prompt(prompt_template)
# Check if there are invalid characters in the input_variables
input_variables = check_input_variables(input_variables)
if any(var in INVALID_NAMES for var in input_variables):
raise ValueError(
f"Invalid input variables. None of the variables can be named {', '.join(INVALID_NAMES)}. "
)
try:
PromptTemplate(template=prompt_template, input_variables=input_variables)
except Exception as exc:
logger.error(f"Invalid prompt: {exc}")
if not silent_errors:
raise ValueError(f"Invalid prompt: {exc}") from exc
return input_variables
def get_old_custom_fields(custom_fields, name):
try:
if len(custom_fields) == 1 and name == "":
# If there is only one custom field and the name is empty string
# then we are dealing with the first prompt request after the node was created
name = list(custom_fields.keys())[0]
old_custom_fields = custom_fields[name]
if not old_custom_fields:
old_custom_fields = []
old_custom_fields = old_custom_fields.copy()
except KeyError:
old_custom_fields = []
custom_fields[name] = []
return old_custom_fields
def add_new_variables_to_template(input_variables, custom_fields, template, name):
for variable in input_variables:
try:
template_field = DefaultPromptField(name=variable, display_name=variable)
if variable in template:
# Set the new field with the old value
template_field.value = template[variable]["value"]
template[variable] = template_field.to_dict()
# Check if variable is not already in the list before appending
if variable not in custom_fields[name]:
custom_fields[name].append(variable)
except Exception as exc:
logger.exception(exc)
raise HTTPException(status_code=500, detail=str(exc)) from exc
def remove_old_variables_from_template(
old_custom_fields, input_variables, custom_fields, template, name
):
for variable in old_custom_fields:
if variable not in input_variables:
try:
# Remove the variable from custom_fields associated with the given name
if variable in custom_fields[name]:
custom_fields[name].remove(variable)
# Remove the variable from the template
template.pop(variable, None)
except Exception as exc:
logger.exception(exc)
raise HTTPException(status_code=500, detail=str(exc)) from exc
def update_input_variables_field(input_variables, template):
if "input_variables" in template:
template["input_variables"]["value"] = input_variables
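The conversion rules above reduce to: replace each `Record` with its `.text`, each `Document` with its `.page_content`, and map the same rule over list values. A standalone sketch, using stub `Rec`/`Doc` dataclasses in place of Langflow's `Record` and LangChain's `Document`:

```python
from dataclasses import dataclass

@dataclass
class Rec:  # stands in for langflow.schema.Record
    text: str

@dataclass
class Doc:  # stands in for langchain_core.documents.Document
    page_content: str

def values_to_string(d: dict) -> dict:
    # Mirror dict_values_to_string: flatten records/documents (and
    # lists of them) down to their text payloads, in place.
    for key, value in d.items():
        if isinstance(value, list):
            d[key] = [
                v.text if isinstance(v, Rec)
                else v.page_content if isinstance(v, Doc)
                else v
                for v in value
            ]
        elif isinstance(value, Rec):
            d[key] = value.text
        elif isinstance(value, Doc):
            d[key] = value.page_content
    return d

print(values_to_string({"a": Rec("hi"), "b": [Doc("x"), "y"]}))
# → {'a': 'hi', 'b': ['x', 'y']}
```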

View file

@@ -31,7 +31,7 @@ class ConversationChainComponent(CustomComponent):
chain = ConversationChain(llm=llm)
else:
chain = ConversationChain(llm=llm, memory=memory)
result = chain.invoke(inputs)
result = chain.invoke({"input": input_value})
if hasattr(result, "content") and isinstance(result.content, str):
result = result.content
elif isinstance(result, str):

View file

@@ -0,0 +1,118 @@
import asyncio
import json
from typing import List, Optional
import httpx
from langflow import CustomComponent
from langflow.schema import Record
class APIRequest(CustomComponent):
display_name: str = "API Request"
description: str = "Make an HTTP request to the given URL."
output_types: list[str] = ["Record"]
documentation: str = "https://docs.langflow.org/components/utilities#api-request"
beta: bool = True
field_config = {
"url": {"display_name": "URL", "info": "The URL to make the request to."},
"method": {
"display_name": "Method",
"info": "The HTTP method to use.",
"field_type": "str",
"options": ["GET", "POST", "PATCH", "PUT"],
"value": "GET",
},
"headers": {
"display_name": "Headers",
"info": "The headers to send with the request.",
"input_types": ["dict"],
},
"body": {
"display_name": "Body",
"info": "The body to send with the request (for POST, PATCH, PUT).",
"input_types": ["dict"],
},
"timeout": {
"display_name": "Timeout",
"field_type": "int",
"info": "The timeout to use for the request.",
"value": 5,
},
}
async def make_request(
self,
client: httpx.AsyncClient,
method: str,
url: str,
headers: Optional[dict] = None,
body: Optional[dict] = None,
timeout: int = 5,
) -> Record:
method = method.upper()
if method not in ["GET", "POST", "PATCH", "PUT", "DELETE"]:
raise ValueError(f"Unsupported method: {method}")
# Serialize the payload only when one was provided; json.dumps(None)
# would otherwise send the literal string "null" as the request body.
data = json.dumps(body) if body else None
try:
response = await client.request(
method, url, headers=headers, content=data, timeout=timeout
)
try:
result = response.json()
except Exception:
result = response.text
return Record(
data={
"source": url,
"headers": headers,
"status_code": response.status_code,
"result": result,
},
)
except httpx.TimeoutException:
return Record(
data={
"source": url,
"headers": headers,
"status_code": 408,
"error": "Request timed out",
},
)
except Exception as exc:
return Record(
data={
"source": url,
"headers": headers,
"status_code": 500,
"error": str(exc),
},
)
async def build(
self,
method: str,
url: List[str],
headers: Optional[dict] = None,
body: Optional[List[Record]] = None,
timeout: int = 5,
) -> List[Record]:
if headers is None:
headers = {}
urls = url if isinstance(url, list) else [url]
bodies = []
if body:
if isinstance(body, list):
bodies = [b.data for b in body]
else:
bodies = [body.data]
if not bodies:
# zip() stops at the shorter sequence; pad so every URL still
# gets a request when no body was supplied
bodies = [None] * len(urls)
async with httpx.AsyncClient() as client:
results = await asyncio.gather(
*[
self.make_request(client, method, u, headers, rec, timeout)
for u, rec in zip(urls, bodies)
]
)
return results
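`build` above fans the requests out with `asyncio.gather`, one `make_request` coroutine per `(url, body)` pair. Stripped of httpx, the concurrency pattern is (the `fetch` stub is illustrative, not the component's API):

```python
import asyncio
from typing import List, Optional

async def fetch(url: str, body: Optional[dict]) -> dict:
    # Stand-in for make_request: pretend to do I/O and never raise;
    # real errors would be reported inside the result dict instead.
    await asyncio.sleep(0)
    return {"source": url, "status_code": 200, "body": body}

async def fan_out(urls: List[str], bodies: List[Optional[dict]]) -> List[dict]:
    # One coroutine per (url, body) pair, awaited concurrently.
    return await asyncio.gather(*[fetch(u, b) for u, b in zip(urls, bodies)])

results = asyncio.run(fan_out(["http://a", "http://b"], [None, {"k": 1}]))
print([r["status_code"] for r in results])  # → [200, 200]
```

Note that `zip` stops at the shorter of its two sequences, so `bodies` must be padded to the length of `urls` whenever no payload is supplied.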

View file

@@ -0,0 +1,69 @@
from typing import Any, Dict, List, Optional
from langflow import CustomComponent
from langflow.base.data.utils import (
parallel_load_records,
parse_file_to_record,
retrieve_file_paths,
)
from langflow.schema import Record
class DirectoryComponent(CustomComponent):
display_name = "Directory"
description = "Load files from a directory."
def build_config(self) -> Dict[str, Any]:
return {
"path": {"display_name": "Path"},
"types": {
"display_name": "Types",
"info": "File types to load. Leave empty to load all types.",
},
"depth": {"display_name": "Depth", "info": "Depth to search for files."},
"max_concurrency": {"display_name": "Max Concurrency", "advanced": True},
"load_hidden": {
"display_name": "Load Hidden",
"advanced": True,
"info": "If true, hidden files will be loaded.",
},
"recursive": {
"display_name": "Recursive",
"advanced": True,
"info": "If true, the search will be recursive.",
},
"silent_errors": {
"display_name": "Silent Errors",
"advanced": True,
"info": "If true, errors will not raise an exception.",
},
"use_multithreading": {
"display_name": "Use Multithreading",
"advanced": True,
},
}
def build(
self,
path: str,
types: Optional[List[str]] = None,
depth: int = 0,
max_concurrency: int = 2,
load_hidden: bool = False,
recursive: bool = True,
silent_errors: bool = False,
use_multithreading: bool = True,
) -> List[Optional[Record]]:
if types is None:
types = []
resolved_path = self.resolve_path(path)
file_paths = retrieve_file_paths(resolved_path, types, load_hidden, recursive, depth)
loaded_records = []
if use_multithreading:
loaded_records = parallel_load_records(file_paths, silent_errors, max_concurrency)
else:
loaded_records = [parse_file_to_record(file_path, silent_errors) for file_path in file_paths]
loaded_records = list(filter(None, loaded_records))
self.status = loaded_records
return loaded_records

View file

@@ -0,0 +1,28 @@
from typing import Any, Dict, Optional
from langflow import CustomComponent
from langflow.base.data.utils import parse_file_to_record
from langflow.schema import Record
class FileComponent(CustomComponent):
display_name = "File"
description = "Load a file."
def build_config(self) -> Dict[str, Any]:
return {
"path": {"display_name": "Path"},
"silent_errors": {
"display_name": "Silent Errors",
"advanced": True,
"info": "If true, errors will not raise an exception.",
},
}
def build(
self,
path: str,
silent_errors: bool = False,
) -> Optional[Record]:
resolved_path = self.resolve_path(path)
return parse_file_to_record(resolved_path, silent_errors)

View file

@@ -11,9 +11,7 @@ class FileLoaderComponent(CustomComponent):
beta = True
def build_config(self):
loader_options = ["Automatic"] + [
loader_info["name"] for loader_info in LOADERS_INFO
]
loader_options = ["Automatic"] + [loader_info["name"] for loader_info in LOADERS_INFO]
file_types = []
suffixes = []
@@ -105,9 +103,7 @@ class FileLoaderComponent(CustomComponent):
if isinstance(selected_loader_info, dict):
loader_import: str = selected_loader_info["import"]
else:
raise ValueError(
f"Loader info for {loader} is not a dict\nLoader info:\n{selected_loader_info}"
)
raise ValueError(f"Loader info for {loader} is not a dict\nLoader info:\n{selected_loader_info}")
module_name, class_name = loader_import.rsplit(".", 1)
try:
@@ -115,9 +111,7 @@ class FileLoaderComponent(CustomComponent):
loader_module = __import__(module_name, fromlist=[class_name])
loader_instance = getattr(loader_module, class_name)
except ImportError as e:
raise ValueError(
f"Loader {loader} could not be imported\nLoader info:\n{selected_loader_info}"
) from e
raise ValueError(f"Loader {loader} could not be imported\nLoader info:\n{selected_loader_info}") from e
result = loader_instance(file_path=file_path)
docs = result.load()

View file

@@ -0,0 +1,25 @@
from typing import Any, Dict
from langchain_community.document_loaders.web_base import WebBaseLoader
from langflow import CustomComponent
from langflow.schema import Record
class URLComponent(CustomComponent):
display_name = "URL"
description = "Load URLs and convert them to records."
def build_config(self) -> Dict[str, Any]:
return {
"urls": {"display_name": "URL"},
}
async def build(
self,
urls: list[str],
) -> list[Record]:
loader = WebBaseLoader(web_paths=urls)
docs = loader.load()
records = self.to_records(docs)
return records

View file

@@ -1,161 +0,0 @@
from concurrent import futures
from pathlib import Path
from typing import Any, Dict, List, Optional, Text
from langflow import CustomComponent
from langflow.schema import Record
class GatherRecordsComponent(CustomComponent):
display_name = "Gather Records"
description = "Gather records from a directory."
def build_config(self) -> Dict[str, Any]:
return {
"path": {"display_name": "Path"},
"types": {
"display_name": "Types",
"info": "File types to load. Leave empty to load all types.",
},
"depth": {"display_name": "Depth", "info": "Depth to search for files."},
"max_concurrency": {"display_name": "Max Concurrency", "advanced": True},
"load_hidden": {
"display_name": "Load Hidden",
"advanced": True,
"info": "If true, hidden files will be loaded.",
},
"recursive": {
"display_name": "Recursive",
"advanced": True,
"info": "If true, the search will be recursive.",
},
"silent_errors": {
"display_name": "Silent Errors",
"advanced": True,
"info": "If true, errors will not raise an exception.",
},
"use_multithreading": {
"display_name": "Use Multithreading",
"advanced": True,
},
}
def is_hidden(self, path: Path) -> bool:
return path.name.startswith(".")
def retrieve_file_paths(
self,
path: str,
types: List[str],
load_hidden: bool,
recursive: bool,
depth: int,
) -> List[str]:
path_obj = Path(path)
if not path_obj.exists() or not path_obj.is_dir():
raise ValueError(f"Path {path} must exist and be a directory.")
def match_types(p: Path) -> bool:
return any(p.suffix == f".{t}" for t in types) if types else True
def is_not_hidden(p: Path) -> bool:
return not self.is_hidden(p) or load_hidden
def walk_level(directory: Path, max_depth: int):
directory = directory.resolve()
prefix_length = len(directory.parts)
for p in directory.rglob("*" if recursive else "[!.]*"):
if len(p.parts) - prefix_length <= max_depth:
yield p
glob = "**/*" if recursive else "*"
paths = walk_level(path_obj, depth) if depth else path_obj.glob(glob)
file_paths = [
Text(p)
for p in paths
if p.is_file() and match_types(p) and is_not_hidden(p)
]
return file_paths
def parse_file_to_record(
self, file_path: str, silent_errors: bool
) -> Optional[Record]:
# Use the partition function to load the file
from unstructured.partition.auto import partition # type: ignore
try:
elements = partition(file_path)
except Exception as e:
if not silent_errors:
raise ValueError(f"Error loading file {file_path}: {e}") from e
return None
# Create a Record
text = "\n\n".join([Text(el) for el in elements])
metadata = elements.metadata if hasattr(elements, "metadata") else {}
metadata["file_path"] = file_path
record = Record(text=text, data=metadata)
return record
def get_elements(
self,
file_paths: List[str],
silent_errors: bool,
max_concurrency: int,
use_multithreading: bool,
) -> List[Optional[Record]]:
if use_multithreading:
records = self.parallel_load_records(
file_paths, silent_errors, max_concurrency
)
else:
records = [
self.parse_file_to_record(file_path, silent_errors)
for file_path in file_paths
]
records = list(filter(None, records))
return records
def parallel_load_records(
self, file_paths: List[str], silent_errors: bool, max_concurrency: int
) -> List[Optional[Record]]:
with futures.ThreadPoolExecutor(max_workers=max_concurrency) as executor:
loaded_files = executor.map(
lambda file_path: self.parse_file_to_record(file_path, silent_errors),
file_paths,
)
# loaded_files is an iterator, so we need to convert it to a list
return list(loaded_files)
def build(
self,
path: str,
types: Optional[List[str]] = None,
depth: int = 0,
max_concurrency: int = 2,
load_hidden: bool = False,
recursive: bool = True,
silent_errors: bool = False,
use_multithreading: bool = True,
) -> List[Optional[Record]]:
if types is None:
types = []
resolved_path = self.resolve_path(path)
file_paths = self.retrieve_file_paths(
resolved_path, types, load_hidden, recursive, depth
)
loaded_records = []
if use_multithreading:
loaded_records = self.parallel_load_records(
file_paths, silent_errors, max_concurrency
)
else:
loaded_records = [
self.parse_file_to_record(file_path, silent_errors)
for file_path in file_paths
]
loaded_records = list(filter(None, loaded_records))
self.status = loaded_records
return loaded_records

View file

@@ -1,47 +0,0 @@
from typing import List
from langchain import document_loaders
from langchain_core.documents import Document
from langflow import CustomComponent
class UrlLoaderComponent(CustomComponent):
display_name: str = "Url Loader"
description: str = "Generic Url Loader Component"
def build_config(self):
return {
"web_path": {
"display_name": "Url",
"required": True,
},
"loader": {
"display_name": "Loader",
"is_list": True,
"required": True,
"options": [
"AZLyricsLoader",
"CollegeConfidentialLoader",
"GitbookLoader",
"HNLoader",
"IFixitLoader",
"IMSDbLoader",
"WebBaseLoader",
],
"value": "WebBaseLoader",
},
"code": {"show": False},
}
def build(self, web_path: str, loader: str) -> List[Document]:
try:
loader_instance = getattr(document_loaders, loader)(web_path=web_path)
except Exception as e:
raise ValueError(f"No loader found for: {web_path}") from e
docs = loader_instance.load()
avg_length = sum(len(doc.page_content) for doc in docs if hasattr(doc, "page_content")) / len(docs)
self.status = f"""{len(docs)} documents)
\nAvg. Document Length (characters): {int(avg_length)}
Documents: {docs[:3]}..."""
return docs

View file

@@ -0,0 +1,24 @@
from langflow import CustomComponent
from langflow.memory import delete_messages, get_messages
class ClearMessageHistoryComponent(CustomComponent):
display_name = "Clear Message History"
description = "A component to clear the message history."
def build_config(self):
return {
"session_id": {
"display_name": "Session ID",
"info": "The session ID to clear the message history.",
}
}
def build(
self,
session_id: str,
) -> list:
delete_messages(session_id=session_id)
records = get_messages(session_id=session_id)
self.records = records
return records

View file

@@ -0,0 +1,16 @@
from langflow import CustomComponent
from langflow.schema import Record
class ExtractKeyFromRecordComponent(CustomComponent):
display_name = "Extract Key From Record"
description = "Extracts a key from a record."
field_config = {
"record": {"display_name": "Record"},
}
def build(self, record: Record, key: str, silent_error: bool = True) -> dict:
# Honor silent_error: a missing key returns None instead of raising
data = getattr(record, key, None) if silent_error else getattr(record, key)
self.status = data
return data

View file

@@ -0,0 +1,20 @@
from langflow import CustomComponent
from langflow.schema import Record
class GetNotifiedComponent(CustomComponent):
display_name = "Get Notified"
description = "A component to get notified by Notify component."
def build_config(self):
return {
"name": {
"display_name": "Name",
"info": "The name of the notification to listen for.",
},
}
def build(self, name: str) -> Record:
state = self.get_state(name)
self.status = state
return state

View file

@@ -0,0 +1,25 @@
from langflow import CustomComponent
from langflow.schema import Record
class MergeRecordsComponent(CustomComponent):
display_name = "Merge Records"
description = "Merges records."
field_config = {
"records": {"display_name": "Records"},
}
def build(self, records: list[Record]) -> Record:
if not records:
return records
if len(records) == 1:
return records[0]
merged_record = None
for record in records:
if merged_record is None:
merged_record = record
else:
merged_record += record
self.status = merged_record
return merged_record

View file

@@ -0,0 +1,39 @@
from typing import Optional
from langflow import CustomComponent
from langflow.schema import Record
class NotifyComponent(CustomComponent):
display_name = "Notify"
description = "A component to generate a notification to Get Notified component."
def build_config(self):
return {
"name": {"display_name": "Name", "info": "The name of the notification."},
"record": {"display_name": "Record", "info": "The record to store."},
"append": {
"display_name": "Append",
"info": "If True, the record will be appended to the notification.",
},
}
def build(self, name: str, record: Optional[Record] = None, append: bool = False) -> Record:
if record and not isinstance(record, Record):
if isinstance(record, str):
record = Record(text=record)
elif isinstance(record, dict):
record = Record(data=record)
else:
record = Record(text=str(record))
elif not record:
record = Record(text="")
if record:
if append:
self.append_state(name, record)
else:
self.update_state(name, record)
else:
self.status = "No record provided."
self.status = record
return record
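Notify writes a named state entry (replacing or appending), and Get Notified reads it back with `get_state`. A minimal in-memory stand-in for that state store (illustrative; not the Langflow state API):

```python
class StateStore:
    # Minimal sketch of the update_state/append_state/get_state trio
    # used by the Notify and Get Notified components.
    def __init__(self):
        self._state = {}

    def update_state(self, name, value):
        # Replace whatever was stored under this name.
        self._state[name] = value

    def append_state(self, name, value):
        # Accumulate values under the name, promoting a scalar to a list.
        existing = self._state.get(name)
        if existing is None:
            self._state[name] = [value]
        elif isinstance(existing, list):
            existing.append(value)
        else:
            self._state[name] = [existing, value]

    def get_state(self, name):
        return self._state.get(name)

store = StateStore()
store.update_state("alerts", "first")
store.append_state("alerts", "second")
print(store.get_state("alerts"))  # → ['first', 'second']
```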

View file

@@ -39,10 +39,7 @@ class RunFlowComponent(CustomComponent):
records.append(record)
return records
async def build(
self, input_value: Text, flow_name: str, tweaks: NestedDict
) -> Record:
async def build(self, input_value: Text, flow_name: str, tweaks: NestedDict) -> Record:
results: List[Optional[ResultData]] = await self.run_flow(
input_value=input_value, flow_name=flow_name, tweaks=tweaks
)

View file

@@ -11,7 +11,10 @@ class SQLExecutorComponent(CustomComponent):
def build_config(self):
return {
"database": {"display_name": "Database"},
"database_url": {
"display_name": "Database URL",
"info": "The URL of the database.",
},
"include_columns": {
"display_name": "Include Columns",
"info": "Include columns in the result.",
@@ -26,15 +29,24 @@ class SQLExecutorComponent(CustomComponent):
},
}
def clean_up_uri(self, uri: str) -> str:
if uri.startswith("postgresql://"):
uri = uri.replace("postgresql://", "postgres://")
return uri.strip()
def build(
self,
query: str,
database: SQLDatabase,
database_url: str,
include_columns: bool = False,
passthrough: bool = False,
add_error: bool = False,
) -> Text:
error = None
try:
database = SQLDatabase.from_uri(database_url)
except Exception as e:
raise ValueError(f"An error occurred while connecting to the database: {e}") from e
try:
tool = QuerySQLDataBaseTool(db=database)
result = tool.run(query, include_columns=include_columns)

View file

@@ -0,0 +1,99 @@
from typing import Any
from langflow import CustomComponent
from langflow.schema import Record
from langflow.template.field.base import TemplateField
class TextToRecordComponent(CustomComponent):
display_name = "Text to Record"
description = "A component to create a record from key-value pairs."
field_order = ["mode", "keys", "n_keys"]
def set_key_template(self, build_config, field_value):
keys_template = TemplateField(
name="n_keys" if field_value == "Number" else "keys",
field_type="dict" if field_value == "Number" else "str",
is_list=False if field_value == "Number" else True,
display_name="Keys",
info=(
"The number of keys to use for the record."
if field_value == "Number"
else "The keys to use for the record."
),
input_types=["Text"],
)
build_config["keys"] = keys_template.to_dict()
def set_n_keys(self, build_config, field_name, field_value):
if int(field_value) == 0:
keep = ["n_keys", "code"]
for key in build_config.copy():
if key in keep:
continue
del build_config[key]
build_config[field_name]["value"] = int(field_value)
# Add new fields depending on the field value
for i in range(int(field_value)):
field = TemplateField(
name=f"Key and Value {i}",
field_type="dict",
display_name="",
info="The key for the record.",
input_types=["Text"],
)
build_config[field.name] = field.to_dict()
def set_keys_template(self, build_config, field_value):
for key in build_config.copy():
if key == "keys":
continue
del build_config[key]
for i in range(int(field_value)):
field = TemplateField(
name=f"Key and Value {i}",
field_type="dict",
display_name="",
info="The key for the record.",
input_types=["Text"],
)
build_config[field.name] = field.to_dict()
def update_build_config(
self, build_config: dict, field_name: str, field_value: Any
):
if field_name == "mode":
build_config["mode"]["value"] = field_value
self.set_key_template(build_config, field_value)
if field_value is None:
return
if field_name == "n_keys":
self.set_n_keys(build_config, field_name, field_value)
elif field_name == "keys":
self.set_keys_template(build_config, field_value)
return build_config
def build_config(self):
return {
"mode": {
"display_name": "Mode",
"options": ["Text", "Number"],
"info": "The mode to use for creating the record.",
"real_time_refresh": True,
"input_types": [],
},
}
def build(self, mode: str, **kwargs) -> Record:
if mode == "Text":
data = kwargs
else:
data = {
k: v
for key, d in kwargs.items()
for k, v in d.items()
if key not in ["mode", "n_keys", "keys"]
}
record = Record(data=data)
return record

View file

@@ -4,6 +4,7 @@ from langflow.field_typing import Data
class Component(CustomComponent):
documentation: str = "http://docs.langflow.org/components/custom"
icon = "custom_components"
def build_config(self):
return {"param": {"display_name": "Parameter"}}

View file

@@ -0,0 +1,28 @@
import uuid
from typing import Any, Text
from langflow import CustomComponent
class UUIDGeneratorComponent(CustomComponent):
documentation: str = "http://docs.langflow.org/components/custom"
display_name = "Unique ID Generator"
description = "Generates a unique ID."
def update_build_config(
self, build_config: dict, field_name: Text, field_value: Any
):
if field_name == "unique_id":
build_config[field_name]["value"] = str(uuid.uuid4())
return build_config
def build_config(self):
return {
"unique_id": {
"display_name": "Value",
"real_time_refresh": True,
}
}
def build(self, unique_id: str) -> str:
return unique_id

View file

@@ -6,7 +6,7 @@ from langflow.schema import Record
class RecordsAsTextComponent(CustomComponent):
display_name = "Records to Text"
description = "Converts Records a list of Records to text using a template."
description = "Converts Records into a single piece of text using a template."
def build_config(self):
return {
@@ -16,7 +16,7 @@ class RecordsAsTextComponent(CustomComponent):
},
"template": {
"display_name": "Template",
"info": "The template to use for formatting the records. It must contain the keys {text} and {data}.",
"info": "The template to use for formatting the records. It can contain the keys {text}, {data} or any other key in the Record.",
},
}
@@ -25,6 +25,8 @@ class RecordsAsTextComponent(CustomComponent):
records: list[Record],
template: str = "Text: {text}\nData: {data}",
) -> Text:
if not records:
return ""
if isinstance(records, Record):
records = [records]
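The template here is plain `str.format` syntax filled once per record. One way to sketch the rendering so unrecognized placeholders pass through instead of raising `KeyError` (names are illustrative, not the component's implementation):

```python
class SafeDict(dict):
    # Leave unknown placeholders intact instead of raising KeyError.
    def __missing__(self, key):
        return "{" + key + "}"

def records_as_text(records, template="Text: {text}\nData: {data}"):
    # Render the template once per record and join the results,
    # exposing each record's own keys plus the whole dict as {data}.
    return "\n".join(
        template.format_map(SafeDict({**r, "data": r})) for r in records
    )

rows = [{"text": "hi", "n": 1}, {"text": "bye", "n": 2}]
print(records_as_text(rows, "{text} ({n})"))  # → hi (1)\nbye (2)
```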

View file

@@ -1,6 +1,6 @@
from typing import Optional, Union
from langflow.components.io.base.chat import ChatComponent
from langflow.base.io.chat import ChatComponent
from langflow.field_typing import Text
from langflow.schema import Record

View file

@@ -1,6 +1,7 @@
from langchain_core.prompts import PromptTemplate
from langflow import CustomComponent
from langflow.base.prompts.utils import dict_values_to_string
from langflow.field_typing import Prompt, TemplateField, Text
@@ -8,6 +9,7 @@ class PromptComponent(CustomComponent):
display_name: str = "Prompt"
description: str = "A component for creating prompts using templates"
beta = True
icon = "terminal-square"
def build_config(self):
return {
@@ -21,16 +23,13 @@ class PromptComponent(CustomComponent):
**kwargs,
) -> Text:
prompt_template = PromptTemplate.from_template(Text(template))
attributes_to_check = ["text", "page_content"]
for key, value in kwargs.copy().items():
for attribute in attributes_to_check:
if hasattr(value, attribute):
kwargs[key] = getattr(value, attribute)
kwargs = dict_values_to_string(kwargs)
kwargs = {
k: "\n".join(v) if isinstance(v, list) else v for k, v in kwargs.items()
}
try:
formated_prompt = prompt_template.format(**kwargs)
except Exception as exc:
raise ValueError(f"Error formatting prompt: {exc}") from exc
self.status = f'Prompt: "{formated_prompt}"'
self.status = f'Prompt:\n"{formated_prompt}"'
return formated_prompt

View file

@@ -1,6 +1,6 @@
from typing import Optional
from langflow.components.io.base.text import TextComponent
from langflow.base.io.text import TextComponent
from langflow.field_typing import Text

View file

@@ -7,7 +7,7 @@ from langflow.field_typing import Text
class AmazonBedrockComponent(LCModelComponent):
display_name: str = "Amazon Bedrock Model"
display_name: str = "Amazon Bedrock"
description: str = "Generate text using LLM model from Amazon Bedrock."
icon = "Amazon"

View file

@@ -8,7 +8,7 @@ from langflow.field_typing import Text
class AnthropicLLM(LCModelComponent):
display_name: str = "AnthropicModel"
display_name: str = "Anthropic"
description: str = "Generate text using Anthropic Chat&Completion large language models."
icon = "Anthropic"

View file

@@ -9,7 +9,7 @@ from langflow.field_typing import Text
class AzureChatOpenAIComponent(LCModelComponent):
display_name: str = "AzureOpenAI Model"
display_name: str = "AzureOpenAI"
description: str = "Generate text using LLM model from Azure OpenAI."
documentation: str = "https://python.langchain.com/docs/integrations/llms/azure_openai"
beta = False

View file

@@ -8,7 +8,7 @@ from langflow.field_typing import Text
class QianfanChatEndpointComponent(LCModelComponent):
display_name: str = "QianfanChat Model"
display_name: str = "QianfanChat"
description: str = (
"Generate text using Baidu Qianfan chat models. Get more detail from "
"https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint."

View file

@@ -7,7 +7,7 @@ from langflow.field_typing import Text
class CTransformersComponent(LCModelComponent):
display_name = "CTransformersModel"
display_name = "CTransformers"
description = "Generate text using CTransformers LLM models"
documentation = "https://python.langchain.com/docs/modules/model_io/models/llms/integrations/ctransformers"

View file

@@ -5,7 +5,7 @@ from langflow.field_typing import Text
class CohereComponent(LCModelComponent):
display_name = "CohereModel"
display_name = "Cohere"
description = "Generate text using Cohere large language models."
documentation = "https://python.langchain.com/docs/modules/model_io/models/llms/integrations/cohere"

View file

@@ -8,7 +8,7 @@ from langflow.field_typing import RangeSpec, Text
class GoogleGenerativeAIComponent(LCModelComponent):
display_name: str = "Google Generative AIModel"
display_name: str = "Google Generative AI"
description: str = "Generate text using Google Generative AI to generate text."
icon = "GoogleGenerativeAI"
icon = "Google"

View file

@@ -8,7 +8,7 @@ from langflow.field_typing import Text
class HuggingFaceEndpointsComponent(LCModelComponent):
display_name: str = "Hugging Face Inference API models"
display_name: str = "Hugging Face Inference API"
description: str = "Generate text using LLM model from Hugging Face Inference API."
icon = "HuggingFace"

View file

@@ -7,7 +7,7 @@ from langflow.field_typing import Text
class LlamaCppComponent(LCModelComponent):
display_name = "LlamaCppModel"
display_name = "LlamaCpp"
description = "Generate text using llama.cpp model."
documentation = "https://python.langchain.com/docs/modules/model_io/models/llms/integrations/llamacpp"

View file

@@ -13,7 +13,7 @@ from langflow.field_typing import Text
class ChatOllamaComponent(LCModelComponent):
display_name = "ChatOllamaModel"
display_name = "ChatOllama"
description = "Generate text using Local LLM for chat with Ollama."
icon = "Ollama"

View file

@@ -8,7 +8,7 @@ from langflow.field_typing import NestedDict, Text
class OpenAIModelComponent(LCModelComponent):
display_name = "OpenAI Model"
display_name = "OpenAI"
description = "Generates text using OpenAI's models."
icon = "OpenAI"

View file

@@ -7,7 +7,7 @@ from langflow.field_typing import Text
class ChatVertexAIComponent(LCModelComponent):
display_name = "ChatVertexAIModel"
display_name = "ChatVertexAI"
description = "Generate text using Vertex AI Chat large language models API."
icon = "VertexAI"

View file

@@ -1,6 +1,6 @@
from typing import Optional, Union
from langflow.components.io.base.chat import ChatComponent
from langflow.base.io.chat import ChatComponent
from langflow.field_typing import Text
from langflow.schema import Record

View file

@@ -1,6 +1,6 @@
from typing import Optional
from langflow.components.io.base.text import TextComponent
from langflow.base.io.text import TextComponent
from langflow.field_typing import Text

View file

@@ -1,8 +1,9 @@
from typing import List
from langchain.text_splitter import CharacterTextSplitter
from langchain_core.documents.base import Document
from langflow import CustomComponent
from langflow.schema.schema import Record
class CharacterTextSplitterComponent(CustomComponent):
@@ -11,7 +12,7 @@ class CharacterTextSplitterComponent(CustomComponent):
def build_config(self):
return {
"documents": {"display_name": "Documents"},
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"chunk_overlap": {"display_name": "Chunk Overlap", "default": 200},
"chunk_size": {"display_name": "Chunk Size", "default": 1000},
"separator": {"display_name": "Separator", "default": "\n"},
@@ -19,17 +20,24 @@ class CharacterTextSplitterComponent(CustomComponent):
def build(
self,
documents: List[Document],
inputs: List[Record],
chunk_overlap: int = 200,
chunk_size: int = 1000,
separator: str = "\n",
) -> List[Document]:
) -> List[Record]:
# separator may come escaped from the frontend
separator = separator.encode().decode("unicode_escape")
documents = []
for _input in inputs:
if isinstance(_input, Record):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
docs = CharacterTextSplitter(
chunk_overlap=chunk_overlap,
chunk_size=chunk_size,
separator=separator,
).split_documents(documents)
self.status = docs
return docs
records = self.to_records(docs)
self.status = records
return records
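The hunk above introduces a coercion idiom that recurs throughout this commit: mixed `Document`/`Record` inputs are normalized to LangChain `Document`s before processing. A minimal self-contained sketch of that idiom, using stand-in dataclasses instead of the real `langflow`/`langchain` imports (an assumption for illustration, not the actual classes):

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    # Stand-in for langchain_core.documents.Document
    page_content: str
    metadata: dict = field(default_factory=dict)


@dataclass
class Record:
    # Stand-in for langflow.schema.Record
    text: str
    data: dict = field(default_factory=dict)

    def to_lc_document(self) -> Document:
        return Document(page_content=self.text, metadata=self.data)


def to_documents(inputs):
    """Coerce a mixed list of Record/Document inputs into Documents."""
    documents = []
    for _input in inputs:
        if isinstance(_input, Record):
            documents.append(_input.to_lc_document())
        else:
            documents.append(_input)
    return documents
```

The same loop reappears in the text-splitter, Chroma, FAISS, MongoDB Atlas, Pinecone, Qdrant, and Redis hunks in this diff; factoring it into a shared helper would avoid the duplication.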


@@ -1,9 +1,9 @@
from typing import Optional
from typing import List, Optional
from langchain.text_splitter import Language
from langchain_core.documents import Document
from langflow import CustomComponent
from langflow.schema.schema import Record
class LanguageRecursiveTextSplitterComponent(CustomComponent):
@@ -14,10 +14,7 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent):
def build_config(self):
options = [x.value for x in Language]
return {
"documents": {
"display_name": "Documents",
"info": "The documents to split.",
},
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"separator_type": {
"display_name": "Separator Type",
"info": "The type of separator to use.",
@@ -47,11 +44,11 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent):
def build(
self,
documents: list[Document],
inputs: List[Record],
chunk_size: Optional[int] = 1000,
chunk_overlap: Optional[int] = 200,
separator_type: str = "Python",
) -> list[Document]:
) -> list[Record]:
"""
Split text into chunks of a specified length.
@@ -77,6 +74,12 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent):
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
)
documents = []
for _input in inputs:
if isinstance(_input, Record):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
docs = splitter.split_documents(documents)
return docs
records = self.to_records(docs)
return records


@@ -1,10 +1,11 @@
from typing import Optional
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.documents import Document
from langflow import CustomComponent
from langflow.utils.util import build_loader_repr_from_documents
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langflow.schema import Record
from langflow.utils.util import build_loader_repr_from_records
class RecursiveCharacterTextSplitterComponent(CustomComponent):
@@ -14,9 +15,10 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent):
def build_config(self):
return {
"documents": {
"display_name": "Documents",
"info": "The documents to split.",
"inputs": {
"display_name": "Input",
"info": "The texts to split.",
"input_types": ["Document", "Record"],
},
"separators": {
"display_name": "Separators",
@@ -40,11 +42,11 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent):
def build(
self,
documents: list[Document],
inputs: list[Record],
separators: Optional[list[str]] = None,
chunk_size: Optional[int] = 1000,
chunk_overlap: Optional[int] = 200,
) -> list[Document]:
) -> list[Record]:
"""
Split text into chunks of a specified length.
@@ -75,7 +77,13 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent):
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
)
documents = []
for _input in inputs:
if isinstance(_input, Record):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
docs = splitter.split_documents(documents)
self.repr_value = build_loader_repr_from_documents(docs)
return docs
records = self.to_records(docs)
self.repr_value = build_loader_repr_from_records(records)
return records


@@ -1,75 +0,0 @@
from typing import Optional, Text
import requests
from langchain_core.documents import Document
from langflow import CustomComponent
from langflow.services.database.models.base import orjson_dumps
class GetRequest(CustomComponent):
display_name: str = "GET Request"
description: str = "Make a GET request to the given URL."
output_types: list[str] = ["Document"]
documentation: str = "https://docs.langflow.org/components/utilities#get-request"
beta: bool = True
field_config = {
"url": {
"display_name": "URL",
"info": "The URL to make the request to",
"is_list": True,
},
"headers": {
"display_name": "Headers",
"info": "The headers to send with the request.",
},
"code": {"show": False},
"timeout": {
"display_name": "Timeout",
"field_type": "int",
"info": "The timeout to use for the request.",
"value": 5,
},
}
def get_document(self, session: requests.Session, url: str, headers: Optional[dict], timeout: int) -> Document:
try:
response = session.get(url, headers=headers, timeout=int(timeout))
try:
response_json = response.json()
result = orjson_dumps(response_json, indent_2=False)
except Exception:
result = response.text
self.repr_value = result
return Document(
page_content=result,
metadata={
"source": url,
"headers": headers,
"status_code": response.status_code,
},
)
except requests.Timeout:
return Document(
page_content="Request Timed Out",
metadata={"source": url, "headers": headers, "status_code": 408},
)
except Exception as exc:
return Document(
page_content=Text(exc),
metadata={"source": url, "headers": headers, "status_code": 500},
)
def build(
self,
url: str,
headers: Optional[dict] = None,
timeout: int = 5,
) -> list[Document]:
if headers is None:
headers = {}
urls = url if isinstance(url, list) else [url]
with requests.Session() as session:
documents = [self.get_document(session, u, headers, timeout) for u in urls]
self.repr_value = documents
return documents
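The deleted `GetRequest` component's body handling can be sketched with the standard library alone (the real component uses `requests` and langflow's `orjson_dumps`; this stdlib approximation only illustrates the try/except fallback pattern):

```python
import json


def summarize_response_body(body: str) -> str:
    # Pretty-print the body if it parses as JSON; otherwise return it verbatim,
    # mirroring the nested try/except in GetRequest.get_document.
    try:
        return json.dumps(json.loads(body), indent=2)
    except json.JSONDecodeError:
        return body
```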


@@ -1,19 +0,0 @@
import uuid
from typing import Text
from langflow import CustomComponent
class UUIDGeneratorComponent(CustomComponent):
documentation: str = "http://docs.langflow.org/components/custom"
display_name = "Unique ID Generator"
description = "Generates a unique ID."
def generate(self, *args, **kwargs):
return Text(uuid.uuid4().hex)
def build_config(self):
return {"unique_id": {"display_name": "Value", "value": self.generate}}
def build(self, unique_id: str) -> str:
return unique_id
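The deleted UUID generator illustrates the callable-`value` pattern: `value` in the field config can hold a function, so a refresh regenerates the default on demand. A hedged stdlib sketch (the `field_config` dict here is illustrative, not the real langflow schema):

```python
import uuid


def generate_unique_id() -> str:
    # Equivalent of UUIDGeneratorComponent.generate: hex form of a UUID4.
    return uuid.uuid4().hex


# "value" holds the callable itself; the UI invokes it on refresh.
field_config = {"unique_id": {"display_name": "Value", "value": generate_unique_id}}
fresh = field_config["unique_id"]["value"]()
```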


@@ -1,78 +0,0 @@
from typing import Optional, Text
import requests
from langchain_core.documents import Document
from langflow import CustomComponent
from langflow.services.database.models.base import orjson_dumps
class PostRequest(CustomComponent):
display_name: str = "POST Request"
description: str = "Make a POST request to the given URL."
output_types: list[str] = ["Document"]
documentation: str = "https://docs.langflow.org/components/utilities#post-request"
beta: bool = True
field_config = {
"url": {"display_name": "URL", "info": "The URL to make the request to."},
"headers": {
"display_name": "Headers",
"info": "The headers to send with the request.",
},
"code": {"show": False},
"document": {"display_name": "Document"},
}
def post_document(
self,
session: requests.Session,
document: Document,
url: str,
headers: Optional[dict] = None,
) -> Document:
try:
response = session.post(url, headers=headers, data=document.page_content)
try:
response_json = response.json()
result = orjson_dumps(response_json, indent_2=False)
except Exception:
result = response.text
self.repr_value = result
return Document(
page_content=result,
metadata={
"source": url,
"headers": headers,
"status_code": response.status_code,
},
)
except Exception as exc:
return Document(
page_content=Text(exc),
metadata={
"source": url,
"headers": headers,
"status_code": 500,
},
)
def build(
self,
document: Document,
url: str,
headers: Optional[dict] = None,
) -> list[Document]:
if headers is None:
headers = {}
if not isinstance(document, list) and isinstance(document, Document):
documents: list[Document] = [document]
elif isinstance(document, list) and all(isinstance(doc, Document) for doc in document):
documents = document
else:
raise ValueError("document must be a Document or a list of Documents")
with requests.Session() as session:
documents = [self.post_document(session, doc, url, headers) for doc in documents]
self.repr_value = documents
return documents


@@ -1,38 +0,0 @@
from typing import Union
from langflow import CustomComponent
from langflow.field_typing import Text
from langflow.schema import Record
class SharedState(CustomComponent):
display_name = "Shared State"
description = "A component to share state between components."
def build_config(self):
return {
"name": {"display_name": "Name", "info": "The name of the state."},
"record": {"display_name": "Record", "info": "The record to store."},
"append": {
"display_name": "Append",
"info": "If True, the record will be appended to the state.",
},
}
def build(
self, name: str, record: Union[Text, Record], append: bool = False
) -> Record:
if append:
self.append_state(name, record)
else:
self.update_state(name, record)
state = self.get_state(name)
if not isinstance(state, Record):
if isinstance(state, str):
state = Record(text=state)
elif isinstance(state, dict):
state = Record(data=state)
else:
state = Record(text=str(state))
return state
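SharedState's fallback coercion (string to `text`, dict to `data`, anything else to `str(...)`) reduces to a small helper. `Record` below is a stand-in dataclass for illustration, not the real langflow class:

```python
from dataclasses import dataclass, field


@dataclass
class Record:
    # Stand-in for langflow.schema.Record
    text: str = ""
    data: dict = field(default_factory=dict)


def coerce_to_record(state) -> Record:
    """Normalize arbitrary shared state into a Record, as SharedState.build does."""
    if isinstance(state, Record):
        return state
    if isinstance(state, str):
        return Record(text=state)
    if isinstance(state, dict):
        return Record(data=state)
    return Record(text=str(state))
```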


@@ -1,49 +0,0 @@
# Implement ShouldRunNext component
from typing import Text
from langchain_core.prompts import PromptTemplate
from langflow import CustomComponent
from langflow.field_typing import BaseLanguageModel, Prompt
class ShouldRunNext(CustomComponent):
display_name = "Should Run Next"
description = "Decides whether to run the next component."
def build_config(self):
return {
"prompt": {
"display_name": "Prompt",
"info": "The prompt to use for the decision. It should generate a boolean response (True or False).",
},
"llm": {
"display_name": "LLM",
"info": "The language model to use for the decision.",
},
}
def build(self, template: Prompt, llm: BaseLanguageModel, **kwargs) -> dict:
# This is a simple component that always returns True
prompt_template = PromptTemplate.from_template(Text(template))
attributes_to_check = ["text", "page_content"]
for key, value in kwargs.items():
for attribute in attributes_to_check:
if hasattr(value, attribute):
kwargs[key] = getattr(value, attribute)
chain = prompt_template | llm
result = chain.invoke(kwargs)
if hasattr(result, "content") and isinstance(result.content, str):
result = result.content
elif isinstance(result, str):
result = result
else:
result = result.get("response")
if result.lower() not in ["true", "false"]:
raise ValueError("The prompt should generate a boolean response (True or False).")
# The string should be the words true or false
# if not raise an error
bool_result = result.lower() == "true"
return {"condition": bool_result, "result": kwargs}
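ShouldRunNext's response validation reduces to a small pure function, sketched here without the LLM plumbing (note: this sketch adds a `strip()` the original omits, so surrounding whitespace is tolerated):

```python
def parse_boolean_response(result: str) -> bool:
    # The prompt is expected to yield literally "True" or "False" (any casing).
    normalized = result.strip().lower()
    if normalized not in ("true", "false"):
        raise ValueError("The prompt should generate a boolean response (True or False).")
    return normalized == "true"
```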


@@ -1,89 +0,0 @@
from typing import List, Optional, Text
import requests
from langchain_core.documents import Document
from langflow import CustomComponent
from langflow.services.database.models.base import orjson_dumps
class UpdateRequest(CustomComponent):
display_name: str = "Update Request"
description: str = "Make a PATCH request to the given URL."
output_types: list[str] = ["Document"]
documentation: str = "https://docs.langflow.org/components/utilities#update-request"
beta: bool = True
field_config = {
"url": {"display_name": "URL", "info": "The URL to make the request to."},
"headers": {
"display_name": "Headers",
"field_type": "NestedDict",
"info": "The headers to send with the request.",
},
"code": {"show": False},
"document": {"display_name": "Document"},
"method": {
"display_name": "Method",
"field_type": "str",
"info": "The HTTP method to use.",
"options": ["PATCH", "PUT"],
"value": "PATCH",
},
}
def update_document(
self,
session: requests.Session,
document: Document,
url: str,
headers: Optional[dict] = None,
method: str = "PATCH",
) -> Document:
try:
if method == "PATCH":
response = session.patch(url, headers=headers, data=document.page_content)
elif method == "PUT":
response = session.put(url, headers=headers, data=document.page_content)
else:
raise ValueError(f"Unsupported method: {method}")
try:
response_json = response.json()
result = orjson_dumps(response_json, indent_2=False)
except Exception:
result = response.text
self.repr_value = result
return Document(
page_content=result,
metadata={
"source": url,
"headers": headers,
"status_code": response.status_code,
},
)
except Exception as exc:
return Document(
page_content=Text(exc),
metadata={"source": url, "headers": headers, "status_code": 500},
)
def build(
self,
method: str,
document: Document,
url: str,
headers: Optional[dict] = None,
) -> List[Document]:
if headers is None:
headers = {}
if not isinstance(document, list) and isinstance(document, Document):
documents: list[Document] = [document]
elif isinstance(document, list) and all(isinstance(doc, Document) for doc in document):
documents = document
else:
raise ValueError("document must be a Document or a list of Documents")
with requests.Session() as session:
documents = [self.update_document(session, doc, url, headers, method) for doc in documents]
self.repr_value = documents
return documents


@@ -2,11 +2,12 @@ from typing import List, Optional, Union
import chromadb # type: ignore
from langchain.embeddings.base import Embeddings
from langchain.schema import BaseRetriever, Document
from langchain.schema import BaseRetriever
from langchain_community.vectorstores import VectorStore
from langchain_community.vectorstores.chroma import Chroma
from langflow import CustomComponent
from langflow.schema.schema import Record
class ChromaComponent(CustomComponent):
@@ -31,7 +32,7 @@ class ChromaComponent(CustomComponent):
"collection_name": {"display_name": "Collection Name", "value": "langflow"},
"index_directory": {"display_name": "Persist Directory"},
"code": {"advanced": True, "display_name": "Code"},
"documents": {"display_name": "Documents", "is_list": True},
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"embedding": {"display_name": "Embedding"},
"chroma_server_cors_allow_origins": {
"display_name": "Server CORS Allow Origins",
@@ -55,7 +56,7 @@ class ChromaComponent(CustomComponent):
embedding: Embeddings,
chroma_server_ssl_enabled: bool,
index_directory: Optional[str] = None,
documents: Optional[List[Document]] = None,
inputs: Optional[List[Record]] = None,
chroma_server_cors_allow_origins: Optional[str] = None,
chroma_server_host: Optional[str] = None,
chroma_server_port: Optional[int] = None,
@@ -97,6 +98,12 @@ class ChromaComponent(CustomComponent):
if index_directory is not None:
index_directory = self.resolve_path(index_directory)
documents = []
for _input in inputs:
if isinstance(_input, Record):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
if documents is not None and embedding is not None:
if len(documents) == 0:
raise ValueError("If documents are provided, there must be at least one document.")


@@ -35,7 +35,6 @@ class ChromaSearchComponent(LCVectorStoreComponent):
# "persist": {"display_name": "Persist"},
"index_directory": {"display_name": "Index Directory"},
"code": {"show": False, "display_name": "Code"},
"documents": {"display_name": "Documents", "is_list": True},
"embedding": {
"display_name": "Embedding",
"info": "Embedding model to vectorize inputs (make sure to use same as index)",
@@ -93,8 +92,7 @@ class ChromaSearchComponent(LCVectorStoreComponent):
if chroma_server_host is not None:
chroma_settings = chromadb.config.Settings(
chroma_server_cors_allow_origins=chroma_server_cors_allow_origins
or None,
chroma_server_cors_allow_origins=chroma_server_cors_allow_origins or None,
chroma_server_host=chroma_server_host,
chroma_server_port=chroma_server_port or None,
chroma_server_grpc_port=chroma_server_grpc_port or None,


@@ -5,7 +5,8 @@ from langchain_community.vectorstores import VectorStore
from langchain_community.vectorstores.faiss import FAISS
from langflow import CustomComponent
from langflow.field_typing import Document, Embeddings
from langflow.field_typing import Embeddings
from langflow.schema.schema import Record
class FAISSComponent(CustomComponent):
@@ -15,7 +16,7 @@ class FAISSComponent(CustomComponent):
def build_config(self):
return {
"documents": {"display_name": "Documents"},
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"embedding": {"display_name": "Embedding"},
"folder_path": {
"display_name": "Folder Path",
@@ -27,10 +28,16 @@ class FAISSComponent(CustomComponent):
def build(
self,
embedding: Embeddings,
documents: List[Document],
inputs: List[Record],
folder_path: str,
index_name: str = "langflow_index",
) -> Union[VectorStore, FAISS, BaseRetriever]:
documents = []
for _input in inputs:
if isinstance(_input, Record):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
vector_store = FAISS.from_documents(documents=documents, embedding=embedding)
if not folder_path:
raise ValueError("Folder path is required to save the FAISS index.")


@@ -14,7 +14,6 @@ class FAISSSearchComponent(LCVectorStoreComponent):
def build_config(self):
return {
"documents": {"display_name": "Documents"},
"embedding": {"display_name": "Embedding"},
"folder_path": {
"display_name": "Folder Path",
@@ -34,9 +33,7 @@ class FAISSSearchComponent(LCVectorStoreComponent):
if not folder_path:
raise ValueError("Folder path is required to save the FAISS index.")
path = self.resolve_path(folder_path)
vector_store = FAISS.load_local(
folder_path=Text(path), embeddings=embedding, index_name=index_name
)
vector_store = FAISS.load_local(folder_path=Text(path), embeddings=embedding, index_name=index_name)
if not vector_store:
raise ValueError("Failed to load the FAISS index.")


@@ -3,7 +3,8 @@ from typing import List, Optional
from langchain_community.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch
from langflow import CustomComponent
from langflow.field_typing import Document, Embeddings, NestedDict
from langflow.field_typing import Embeddings, NestedDict
from langflow.schema.schema import Record
class MongoDBAtlasComponent(CustomComponent):
@@ -13,7 +14,7 @@ class MongoDBAtlasComponent(CustomComponent):
def build_config(self):
return {
"documents": {"display_name": "Documents"},
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"embedding": {"display_name": "Embedding"},
"collection_name": {"display_name": "Collection Name"},
"db_name": {"display_name": "Database Name"},
@@ -25,7 +26,7 @@ class MongoDBAtlasComponent(CustomComponent):
def build(
self,
embedding: Embeddings,
documents: List[Document],
inputs: List[Record],
collection_name: str = "",
db_name: str = "",
index_name: str = "",
@@ -42,6 +43,12 @@ class MongoDBAtlasComponent(CustomComponent):
collection = mongo_client[db_name][collection_name]
except Exception as e:
raise ValueError(f"Failed to connect to MongoDB Atlas: {e}")
documents = []
for _input in inputs:
if isinstance(_input, Record):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
if documents:
vector_store = MongoDBAtlasVectorSearch.from_documents(
documents=documents,


@@ -7,7 +7,8 @@ from langchain_community.vectorstores import VectorStore
from langchain_community.vectorstores.pinecone import Pinecone
from langflow import CustomComponent
from langflow.field_typing import Document, Embeddings
from langflow.field_typing import Embeddings
from langflow.schema.schema import Record
class PineconeComponent(CustomComponent):
@@ -17,7 +18,7 @@ class PineconeComponent(CustomComponent):
def build_config(self):
return {
"documents": {"display_name": "Documents"},
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"embedding": {"display_name": "Embedding"},
"index_name": {"display_name": "Index Name"},
"namespace": {"display_name": "Namespace"},
@@ -44,7 +45,7 @@ class PineconeComponent(CustomComponent):
self,
embedding: Embeddings,
pinecone_env: str,
documents: List[Document],
inputs: List[Record],
text_key: str = "text",
pool_threads: int = 4,
index_name: Optional[str] = None,
@@ -59,6 +60,12 @@ class PineconeComponent(CustomComponent):
pinecone.init(api_key=pinecone_api_key, environment=pinecone_env) # type: ignore
if not index_name:
raise ValueError("Index Name is required.")
documents = []
for _input in inputs:
if isinstance(_input, Record):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
if documents:
return Pinecone.from_documents(
documents=documents,


@@ -3,8 +3,10 @@ from typing import Optional, Union
from langchain.schema import BaseRetriever
from langchain_community.vectorstores import VectorStore
from langchain_community.vectorstores.qdrant import Qdrant
from langflow import CustomComponent
from langflow.field_typing import Document, Embeddings, NestedDict
from langflow.field_typing import Embeddings, NestedDict
from langflow.schema.schema import Record
class QdrantComponent(CustomComponent):
@@ -14,17 +16,23 @@ class QdrantComponent(CustomComponent):
def build_config(self):
return {
"documents": {"display_name": "Documents"},
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"embedding": {"display_name": "Embedding"},
"api_key": {"display_name": "API Key", "password": True, "advanced": True},
"collection_name": {"display_name": "Collection Name"},
"content_payload_key": {"display_name": "Content Payload Key", "advanced": True},
"content_payload_key": {
"display_name": "Content Payload Key",
"advanced": True,
},
"distance_func": {"display_name": "Distance Function", "advanced": True},
"grpc_port": {"display_name": "gRPC Port", "advanced": True},
"host": {"display_name": "Host", "advanced": True},
"https": {"display_name": "HTTPS", "advanced": True},
"location": {"display_name": "Location", "advanced": True},
"metadata_payload_key": {"display_name": "Metadata Payload Key", "advanced": True},
"metadata_payload_key": {
"display_name": "Metadata Payload Key",
"advanced": True,
},
"path": {"display_name": "Path", "advanced": True},
"port": {"display_name": "Port", "advanced": True},
"prefer_grpc": {"display_name": "Prefer gRPC", "advanced": True},
@@ -38,7 +46,7 @@ class QdrantComponent(CustomComponent):
self,
embedding: Embeddings,
collection_name: str,
documents: Optional[Document] = None,
inputs: Optional[Record] = None,
api_key: Optional[str] = None,
content_payload_key: str = "page_content",
distance_func: str = "Cosine",
@@ -55,6 +63,12 @@ class QdrantComponent(CustomComponent):
timeout: Optional[int] = None,
url: Optional[str] = None,
) -> Union[VectorStore, Qdrant, BaseRetriever]:
documents = []
for _input in inputs:
if isinstance(_input, Record):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
if documents is None:
from qdrant_client import QdrantClient


@@ -3,9 +3,10 @@ from typing import Optional, Union
from langchain.embeddings.base import Embeddings
from langchain_community.vectorstores import VectorStore
from langchain_community.vectorstores.redis import Redis
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from langflow import CustomComponent
from langflow.schema.schema import Record
class RedisComponent(CustomComponent):
@@ -28,7 +29,7 @@ class RedisComponent(CustomComponent):
return {
"index_name": {"display_name": "Index Name", "value": "your_index"},
"code": {"show": False, "display_name": "Code"},
"documents": {"display_name": "Documents", "is_list": True},
"inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
"embedding": {"display_name": "Embedding"},
"schema": {"display_name": "Schema", "file_types": [".yaml"]},
"redis_server_url": {
@@ -44,7 +45,7 @@ class RedisComponent(CustomComponent):
redis_server_url: str,
redis_index_name: str,
schema: Optional[str] = None,
documents: Optional[Document] = None,
inputs: Optional[Record] = None,
) -> Union[VectorStore, BaseRetriever]:
"""
Builds the Vector Store or BaseRetriever object.
@@ -58,7 +59,13 @@ class RedisComponent(CustomComponent):
Returns:
- VectorStore: The Vector Store object.
"""
if documents is None:
documents = []
for _input in inputs:
if isinstance(_input, Record):
documents.append(_input.to_lc_document())
else:
documents.append(_input)
if not documents:
if schema is None:
raise ValueError("If no documents are provided, a schema must be provided.")
redis_vs = Redis.from_existing_index(


@@ -33,7 +33,6 @@ class RedisSearchComponent(RedisComponent, LCVectorStoreComponent):
"input_value": {"display_name": "Input"},
"index_name": {"display_name": "Index Name", "value": "your_index"},
"code": {"show": False, "display_name": "Code"},
"documents": {"display_name": "Documents", "is_list": True},
"embedding": {"display_name": "Embedding"},
"schema": {"display_name": "Schema", "file_types": [".yaml"]},
"redis_server_url": {
