diff --git a/docs/docs/components/custom.mdx b/docs/docs/components/custom.mdx index d8c6ff2f5..43eb336fc 100644 --- a/docs/docs/components/custom.mdx +++ b/docs/docs/components/custom.mdx @@ -83,15 +83,14 @@ The CustomComponent class serves as the foundation for creating custom component | _`file_types: List[str]`_ | This is a requirement if the _`field_type`_ is _file_. Defines which file types will be accepted. For example, _json_, _yaml_ or _yml_. | | _`range_spec: langflow.field_typing.RangeSpec`_ | This is a requirement if the _`field_type`_ is _`float`_. Defines the range of values accepted and the step size. If none is defined, the default is _`[-1, 1, 0.1]`_. | | _`title_case: bool`_ | Formats the name of the field when _`display_name`_ is not defined. Set it to False to keep the name as you set it in the _`build`_ method. | + | _`refresh_button: bool`_ | If set to True, a button will appear to the right of the field. When clicked, it calls the _`update_build_config`_ method, which takes in the _`build_config`_, the name of the field (_`field_name`_), and the latest value of the field (_`field_value`_). This is useful when you want to update the _`build_config`_ based on the value of the field. | + | _`real_time_refresh: bool`_ | If set to True, the _`update_build_config`_ method will be called every time the field value changes. | - - - Keys _`options`_ and _`value`_ can receive a method or function that returns a list of strings or a string, respectively. This is useful when you want to dynamically generate the options or the default value of a field. A refresh button will appear next to the field in the component, allowing the user to update the options or the default value. - - - + +By using the _`update_build_config`_ method, you can update the _`build_config`_ in any way you need, whether or not the update depends on the new value of the field. 
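The refresh mechanism described in the table above can be sketched in plain Python. This is a minimal sketch assuming `build_config` is a dict mapping field names to field-config dicts, as the table suggests; the `provider` and `model` field names are hypothetical examples, not part of Langflow's API:

```python
# Hypothetical sketch of the refresh_button / update_build_config pattern.
# When the refresh button is clicked (or, with real_time_refresh, on every
# change), the hook receives the current build_config, the field's name,
# and its latest value, and returns the updated build_config.

def update_build_config(build_config: dict, field_name: str, field_value) -> dict:
    # Example: when the (hypothetical) "provider" field changes, repopulate
    # the options of a dependent "model" field.
    if field_name == "provider":
        models = {
            "openai": ["gpt-3.5-turbo", "gpt-4"],
            "anthropic": ["claude-2"],
        }
        build_config["model"]["options"] = models.get(field_value, [])
        build_config["model"]["value"] = None  # force the user to re-pick
    return build_config


config = {"provider": {"value": "openai"}, "model": {"options": [], "value": None}}
config = update_build_config(config, "provider", "openai")
print(config["model"]["options"])  # → ['gpt-3.5-turbo', 'gpt-4']
```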
+ - The CustomComponent class also provides helpful methods for specific tasks (e.g., to load and use other flows from the Langflow platform): diff --git a/src/backend/langflow/components/custom_components/__init__.py b/docs/docs/components/data.mdx similarity index 100% rename from src/backend/langflow/components/custom_components/__init__.py rename to docs/docs/components/data.mdx diff --git a/src/backend/langflow/components/io/__init__.py b/docs/docs/components/experimental.mdx similarity index 100% rename from src/backend/langflow/components/io/__init__.py rename to docs/docs/components/experimental.mdx diff --git a/src/backend/langflow/components/io/base/__init__.py b/docs/docs/components/helpers.mdx similarity index 100% rename from src/backend/langflow/components/io/base/__init__.py rename to docs/docs/components/helpers.mdx diff --git a/docs/docs/components/io.mdx b/docs/docs/components/inputs.mdx similarity index 51% rename from docs/docs/components/io.mdx rename to docs/docs/components/inputs.mdx index 0ef93d684..e4a4a8b0f 100644 --- a/docs/docs/components/io.mdx +++ b/docs/docs/components/inputs.mdx @@ -1,6 +1,6 @@ -import Admonition from "@theme/Admonition"; +import Admonition from '@theme/Admonition'; -# I/O +# Inputs ### ChatInput @@ -22,27 +22,6 @@ This component is designed to get user input from the chat.

-### ChatOutput - -This component is designed to send a message to the chat. - -**Params** - -- **Sender Type:** specifies the sender type. Defaults to _`"Machine"`_. Options are _`"Machine"`_ and _`"User"`_. - -- **Sender Name:** specifies the name of the sender. Defaults to _`"AI"`_. - -- **Session ID:** specifies the session ID of the chat history. If provided, the message will be saved in the Message History. - -- **Message:** specifies the message text. - - -

- If _`As Record`_ is _`true`_ and the _`Message`_ is a _`Record`_, the data of the _`Record`_ will be updated with the _`Sender`_, _`Sender Name`_, and _`Session ID`_. -

-
- - ### TextInput This component is designed for simple text input, allowing users to pass textual data to subsequent components in the workflow. It's particularly useful for scenarios where a brief user input is required to initiate or influence the flow. @@ -58,16 +37,3 @@ This component is designed for simple text input, allowing users to pass textual

-### TextOutput - -This component is designed to display text data to the user. It's particularly useful for scenarios where you don't want to send the text data to the chat, but still want to display it. - -**Params** - -- **Value:** Specifies the text data to be displayed. This is where the text data to be displayed is provided. If no value is provided, it defaults to an empty string. - - -

- The `TextOutput` component serves as a straightforward means for displaying text data. It ensures that textual data can be seamlessly observed in the chat window throughout your flow. -

-
\ No newline at end of file diff --git a/docs/docs/components/llms.mdx b/docs/docs/components/model_specs.mdx similarity index 100% rename from docs/docs/components/llms.mdx rename to docs/docs/components/model_specs.mdx diff --git a/src/backend/langflow/components/prompts/__init_.py b/docs/docs/components/models.mdx similarity index 100% rename from src/backend/langflow/components/prompts/__init_.py rename to docs/docs/components/models.mdx diff --git a/docs/docs/components/outputs.mdx b/docs/docs/components/outputs.mdx new file mode 100644 index 000000000..3533dfdb6 --- /dev/null +++ b/docs/docs/components/outputs.mdx @@ -0,0 +1,37 @@ +import Admonition from '@theme/Admonition'; + +# Outputs + +### ChatOutput + +This component is designed to send a message to the chat. + +**Params** + +- **Sender Type:** specifies the sender type. Defaults to _`"Machine"`_. Options are _`"Machine"`_ and _`"User"`_. + +- **Sender Name:** specifies the name of the sender. Defaults to _`"AI"`_. + +- **Session ID:** specifies the session ID of the chat history. If provided, the message will be saved in the Message History. + +- **Message:** specifies the message text. + + +

+ If _`As Record`_ is _`true`_ and the _`Message`_ is a _`Record`_, the data of the _`Record`_ will be updated with the _`Sender`_, _`Sender Name`_, and _`Session ID`_. +

+
+ +### TextOutput + +This component is designed to display text data to the user. It's particularly useful for scenarios where you don't want to send the text data to the chat, but still want to display it. + +**Params** + +- **Value:** Specifies the text data to be displayed. If no value is provided, it defaults to an empty string. + + +

+ The `TextOutput` component serves as a straightforward way to display text data. It lets you inspect textual data at any point in your flow without sending it to the chat. +

+
\ No newline at end of file diff --git a/docs/docs/getting-started/creating-flows.mdx b/docs/docs/getting-started/creating-flows.mdx index aecc3ea16..9c16d225f 100644 --- a/docs/docs/getting-started/creating-flows.mdx +++ b/docs/docs/getting-started/creating-flows.mdx @@ -7,7 +7,8 @@ import ReactPlayer from "react-player"; ## Compose -Creating flows with Langflow is easy. Drag sidebar components onto the canvas and connect them together to create your pipeline. Langflow provides a range of [LangChain components](https://python.langchain.com/docs/modules/) to choose from, including LLMs, prompt serializers, agents, and chains. +Creating flows with Langflow is easy. Drag sidebar components onto the canvas and connect them together to create your pipeline. +Langflow provides a range of Components to choose from, including **Chat Input**, **Chat Output**, **API Request** and **Prompt**. -## Fork +## Starter Flows -The easiest way to start with Langflow is by forking a **community example**. Forking an example stores a copy in your project collection, allowing you to edit and save the modified version as a new flow. +Langflow provides a range of starter flows to help you get started. These flows are pre-built and can be used as a starting point for your own flows.
-## Build +## Defining Inputs and Outputs + +Each flow can have multiple inputs and outputs. These can be defined by placing **Inputs** and **Outputs** components on the canvas. + +The **Inputs** components define the inputs to the flow. +Whenever you place an Input component on the canvas, you can interactively change its value +from the Interactive Panel. + +The **Text Input** component allows you to define a text input, and the **Chat Input** component allows you to use the chat input from the Interactive Panel. + +The **Outputs** components define the outputs of the flow and work similarly to the Inputs components. + +Both Inputs and Outputs components can be connected to other components on the canvas, and they also define how the flow's API works. + -Building a flow means validating if the components have prerequisites fulfilled and are properly instantiated. When a chat message is sent, the flow will run for the first time, executing the pipeline.
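Since the Inputs and Outputs components also shape the flow's API, a run request can target them by name. A minimal sketch of building such a payload (the component names here, and the exact endpoint path, are illustrative assumptions; check your Langflow version's API docs):

```python
import json

# Hypothetical payload for running a flow through the API.
# Each entry in "inputs" targets one or more components by name or id and
# supplies a value; "outputs" selects which components' results to return.
payload = {
    "inputs": [
        {"components": ["Chat Input"], "input_value": "Hello, flow!"},
    ],
    "outputs": ["Chat Output"],
    "stream": False,
}

# Serialize for an HTTP POST (e.g. to POST /run/{flow_id}).
body = json.dumps(payload)
print(body)
```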
None: and associate a connection with the context. """ - from langflow.services.deps import get_db_service try: + from langflow.services.database.factory import DatabaseServiceFactory + from langflow.services.deps import get_db_service + from langflow.services.manager import ( + initialize_settings_service, + service_manager, + ) + from langflow.services.schema import ServiceType + + initialize_settings_service() + service_manager.register_factory( + DatabaseServiceFactory(), [ServiceType.SETTINGS_SERVICE] + ) connectable = get_db_service().engine except Exception as e: logger.error(f"Error getting database engine: {e}") + url = os.getenv("LANGFLOW_DATABASE_URL") + url = url or config.get_main_option("sqlalchemy.url") + config.set_main_option("sqlalchemy.url", url) connectable = engine_from_config( config.get_section(config.config_ini_section, {}), prefix="sqlalchemy.", diff --git a/src/backend/langflow/alembic/script.py.mako b/src/backend/langflow/alembic/script.py.mako index 2fbdc930d..bc9bca83a 100644 --- a/src/backend/langflow/alembic/script.py.mako +++ b/src/backend/langflow/alembic/script.py.mako @@ -23,10 +23,12 @@ depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)} def upgrade() -> None: conn = op.get_bind() inspector = Inspector.from_engine(conn) # type: ignore + table_names = inspector.get_table_names() ${upgrades if upgrades else "pass"} def downgrade() -> None: conn = op.get_bind() inspector = Inspector.from_engine(conn) # type: ignore + table_names = inspector.get_table_names() ${downgrades if downgrades else "pass"} diff --git a/src/backend/langflow/alembic/versions/63b9c451fd30_add_icon_and_icon_bg_color_to_flow.py b/src/backend/langflow/alembic/versions/63b9c451fd30_add_icon_and_icon_bg_color_to_flow.py new file mode 100644 index 000000000..3deb66346 --- /dev/null +++ b/src/backend/langflow/alembic/versions/63b9c451fd30_add_icon_and_icon_bg_color_to_flow.py @@ -0,0 +1,56 @@ +"""Add icon and icon_bg_color to Flow + +Revision ID: 
63b9c451fd30 +Revises: bc2f01c40e4a +Create Date: 2024-03-06 10:53:47.148658 + +""" + +from typing import Sequence, Union + +import sqlalchemy as sa +import sqlmodel +from alembic import op +from sqlalchemy.engine.reflection import Inspector + +# revision identifiers, used by Alembic. +revision: str = "63b9c451fd30" +down_revision: Union[str, None] = "bc2f01c40e4a" +branch_labels: Union[str, Sequence[str], None] = None +depends_on: Union[str, Sequence[str], None] = None + + +def upgrade() -> None: + conn = op.get_bind() + inspector = Inspector.from_engine(conn) # type: ignore + table_names = inspector.get_table_names() + column_names = [column["name"] for column in inspector.get_columns("flow")] + # ### commands auto generated by Alembic - please adjust! ### + with op.batch_alter_table("flow", schema=None) as batch_op: + if "icon" not in column_names: + batch_op.add_column( + sa.Column("icon", sqlmodel.sql.sqltypes.AutoString(), nullable=True) + ) + if "icon_bg_color" not in column_names: + batch_op.add_column( + sa.Column( + "icon_bg_color", sqlmodel.sql.sqltypes.AutoString(), nullable=True + ) + ) + + # ### end Alembic commands ### + + +def downgrade() -> None: + conn = op.get_bind() + inspector = Inspector.from_engine(conn) # type: ignore + table_names = inspector.get_table_names() + column_names = [column["name"] for column in inspector.get_columns("flow")] + # ### commands auto generated by Alembic - please adjust! 
### + with op.batch_alter_table("flow", schema=None) as batch_op: + if "icon" in column_names: + batch_op.drop_column("icon") + if "icon_bg_color" in column_names: + batch_op.drop_column("icon_bg_color") + + # ### end Alembic commands ### diff --git a/src/backend/langflow/api/v1/base.py b/src/backend/langflow/api/v1/base.py index 2380b019e..bad43c437 100644 --- a/src/backend/langflow/api/v1/base.py +++ b/src/backend/langflow/api/v1/base.py @@ -1,9 +1,7 @@ from typing import Optional -from langchain.prompts import PromptTemplate from pydantic import BaseModel, field_validator, model_serializer -from langflow.interface.utils import extract_input_variables_from_prompt from langflow.template.frontend_node.base import FrontendNode @@ -80,24 +78,6 @@ INVALID_NAMES = { } -def validate_prompt(template: str): - input_variables = extract_input_variables_from_prompt(template) - - # Check if there are invalid characters in the input_variables - input_variables = check_input_variables(input_variables) - if any(var in INVALID_NAMES for var in input_variables): - raise ValueError( - f"Invalid input variables. None of the variables can be named {', '.join(input_variables)}. 
" - ) - - try: - PromptTemplate(template=template, input_variables=input_variables) - except Exception as exc: - raise ValueError(f"Invalid prompt: {exc}") from exc - - return input_variables - - def is_json_like(var): if var.startswith("{{") and var.endswith("}}"): # If it is a double brance variable diff --git a/src/backend/langflow/api/v1/chat.py b/src/backend/langflow/api/v1/chat.py index f4227ffb1..c05e76325 100644 --- a/src/backend/langflow/api/v1/chat.py +++ b/src/backend/langflow/api/v1/chat.py @@ -93,7 +93,7 @@ async def build_vertex( current_user=Depends(get_current_active_user), ): """Build a vertex instead of the entire graph.""" - {"inputs": {"input_value": "some value"}} + start_time = time.perf_counter() next_vertices_ids = [] try: @@ -110,7 +110,7 @@ async def build_vertex( vertex = graph.get_vertex(vertex_id) try: - if not vertex.pinned or not vertex._built: + if not vertex.frozen or not vertex._built: inputs_dict = inputs.model_dump() if inputs else {} await vertex.build(user_id=current_user.id, inputs=inputs_dict) @@ -155,10 +155,10 @@ async def build_vertex( result_data_response.duration = duration result_data_response.timedelta = timedelta vertex.add_build_time(timedelta) - inactive_vertices = None - if graph.inactive_vertices: - inactive_vertices = list(graph.inactive_vertices) - graph.reset_inactive_vertices() + inactivated_vertices = None + inactivated_vertices = list(graph.inactivated_vertices) + graph.reset_inactivated_vertices() + graph.reset_activated_vertices() chat_service.set_cache(flow_id, graph) # graph.stop_vertex tells us if the user asked @@ -169,8 +169,8 @@ async def build_vertex( next_vertices_ids = [graph.stop_vertex] build_response = VertexBuildResponse( + inactivated_vertices=inactivated_vertices, next_vertices_ids=next_vertices_ids, - inactive_vertices=inactive_vertices, valid=valid, params=params, id=vertex.id, @@ -227,7 +227,7 @@ async def build_vertex_stream( ) yield str(stream_data) - elif not vertex.pinned or not 
vertex._built: + elif not vertex.frozen or not vertex._built: logger.debug(f"Streaming vertex {vertex_id}") stream_data = StreamData( event="message", diff --git a/src/backend/langflow/api/v1/endpoints.py b/src/backend/langflow/api/v1/endpoints.py index bd07d38b2..b2d175f77 100644 --- a/src/backend/langflow/api/v1/endpoints.py +++ b/src/backend/langflow/api/v1/endpoints.py @@ -13,6 +13,7 @@ from langflow.api.v1.schemas import ( ProcessResponse, RunResponse, TaskStatusResponse, + Tweaks, UploadFileResponse, ) from langflow.interface.custom.custom_component import CustomComponent @@ -44,9 +45,10 @@ def get_all( logger.debug("Building langchain types dict") try: - all_types_dict = get_all_types_dict(settings_service) + all_types_dict = get_all_types_dict(settings_service.settings.COMPONENTS_PATH) return all_types_dict except Exception as exc: + logger.exception(exc) raise HTTPException(status_code=500, detail=str(exc)) from exc @@ -56,19 +58,60 @@ def get_all( async def run_flow_with_caching( session: Annotated[Session, Depends(get_session)], flow_id: str, - inputs: Optional[InputValueRequest] = None, - tweaks: Optional[dict] = None, + inputs: Optional[List[InputValueRequest]] = None, + outputs: Optional[List[str]] = None, + tweaks: Annotated[Optional[Tweaks], Body(embed=True)] = None, # noqa: F821 stream: Annotated[bool, Body(embed=True)] = False, # noqa: F821 session_id: Annotated[Union[None, str], Body(embed=True)] = None, # noqa: F821 api_key_user: User = Depends(api_key_security), session_service: SessionService = Depends(get_session_service), ): + """ + Executes a specified flow by ID with optional input values, output selection, tweaks, and streaming capability. + This endpoint supports running flows with caching to enhance performance and efficiency. + + ### Parameters: + - `flow_id` (str): The unique identifier of the flow to be executed. 
+ - `inputs` (List[InputValueRequest], optional): A list of inputs specifying the input values and components for the flow. Each input can target specific components and provide custom values. + - `outputs` (List[str], optional): A list of output names to retrieve from the executed flow. If not provided, all outputs are returned. + - `tweaks` (Optional[Tweaks], optional): A dictionary of tweaks to customize the flow execution. The tweaks can be used to modify the flow's parameters and components. Tweaks can be overridden by the input values. + - `stream` (bool, optional): Specifies whether the results should be streamed. Defaults to False. + - `session_id` (Union[None, str], optional): An optional session ID to utilize existing session data for the flow execution. + - `api_key_user` (User): The user associated with the current API key. Automatically resolved from the API key. + - `session_service` (SessionService): The session service object for managing flow sessions. + + ### Returns: + A `RunResponse` object containing the selected outputs (or all if not specified) of the executed flow and the session ID. The structure of the response accommodates multiple inputs, providing a nested list of outputs for each input. + + ### Raises: + HTTPException: Indicates issues with finding the specified flow, invalid input formats, or internal errors during flow execution. + + ### Example usage: + ```json + POST /run/{flow_id} + Payload: + { + "inputs": [ + {"components": ["component1"], "input_value": "value1"}, + {"components": ["component3"], "input_value": "value2"} + ], + "outputs": ["Component Name", "component_id"], + "tweaks": {"parameter_name": "value", "Component Name": {"parameter_name": "value"}, "component_id": {"parameter_name": "value"}}, + "stream": false + } + ``` + + This endpoint facilitates complex flow executions with customized inputs, outputs, and configurations, catering to diverse application requirements. 
+ """ try: if inputs is not None: input_values_dict: dict[str, Union[str, list[str]]] = inputs.model_dump() else: input_values_dict = {} + if outputs is None: + outputs = [] + if session_id: session_data = await session_service.load_session( session_id, flow_id=flow_id @@ -82,6 +125,7 @@ async def run_flow_with_caching( flow_id=flow_id, session_id=session_id, inputs=input_values_dict, + outputs=outputs, artifacts=artifacts, session_service=session_service, stream=stream, @@ -107,6 +151,7 @@ async def run_flow_with_caching( flow_id=flow_id, session_id=session_id, inputs=input_values_dict, + outputs=outputs, artifacts={}, session_service=session_service, stream=stream, @@ -262,7 +307,10 @@ async def custom_component_update( component = CustomComponent(code=raw_code.code) component_node = build_custom_component_template( - component, user_id=user.id, update_field=raw_code.field + component, + user_id=user.id, + update_field=raw_code.field, + update_field_value=raw_code.field_value, ) # Update the field return component_node diff --git a/src/backend/langflow/api/v1/flows.py b/src/backend/langflow/api/v1/flows.py index 517ff33c1..dd60d5fed 100644 --- a/src/backend/langflow/api/v1/flows.py +++ b/src/backend/langflow/api/v1/flows.py @@ -5,14 +5,22 @@ from uuid import UUID import orjson from fastapi import APIRouter, Depends, File, HTTPException, UploadFile from fastapi.encoders import jsonable_encoder +from loguru import logger from sqlmodel import Session, select from langflow.api.utils import remove_api_keys, validate_is_component from langflow.api.v1.schemas import FlowListCreate, FlowListRead +from langflow.initial_setup.setup import STARTER_FOLDER_NAME from langflow.services.auth.utils import get_current_active_user -from langflow.services.database.models.flow import Flow, FlowCreate, FlowRead, FlowUpdate +from langflow.services.database.models.flow import ( + Flow, + FlowCreate, + FlowRead, + FlowUpdate, +) from langflow.services.database.models.user.model import 
User from langflow.services.deps import get_session, get_settings_service +from langflow.services.settings.service import SettingsService # build router router = APIRouter(prefix="/flows", tags=["Flows"]) @@ -42,11 +50,36 @@ def create_flow( def read_flows( *, current_user: User = Depends(get_current_active_user), + session: Session = Depends(get_session), + settings_service: "SettingsService" = Depends(get_settings_service), ): """Read all flows.""" try: - flows = current_user.flows + auth_settings = settings_service.auth_settings + if auth_settings.AUTO_LOGIN: + flows = session.exec( + select(Flow).where( + (Flow.user_id == None) | (Flow.user_id == current_user.id) # noqa + ) + ).all() + else: + flows = current_user.flows + flows = validate_is_component(flows) + flow_ids = [flow.id for flow in flows] + # with the session get the flows that DO NOT have a user_id + try: + example_flows = session.exec( + select(Flow).where( + Flow.user_id == None, # noqa + Flow.folder == STARTER_FOLDER_NAME, + ) + ).all() + for example_flow in example_flows: + if example_flow.id not in flow_ids: + flows.append(example_flow) + except Exception as e: + logger.error(e) except Exception as e: raise HTTPException(status_code=500, detail=str(e)) from e return [jsonable_encoder(flow) for flow in flows] @@ -58,9 +91,18 @@ def read_flow( session: Session = Depends(get_session), flow_id: UUID, current_user: User = Depends(get_current_active_user), + settings_service: "SettingsService" = Depends(get_settings_service), ): """Read a flow.""" - if user_flow := (session.exec(select(Flow).where(Flow.id == flow_id, Flow.user_id == current_user.id)).first()): + auth_settings = settings_service.auth_settings + stmt = select(Flow).where(Flow.id == flow_id) + if auth_settings.AUTO_LOGIN: + # If auto login is enable user_id can be current_user.id or None + # so write an OR + stmt = stmt.where( + (Flow.user_id == current_user.id) | (Flow.user_id == None) # noqa + ) # noqa + if user_flow := 
session.exec(stmt).first(): return user_flow else: raise HTTPException(status_code=404, detail="Flow not found") @@ -77,7 +119,12 @@ def update_flow( ): """Update a flow.""" - db_flow = read_flow(session=session, flow_id=flow_id, current_user=current_user) + db_flow = read_flow( + session=session, + flow_id=flow_id, + current_user=current_user, + settings_service=settings_service, + ) if not db_flow: raise HTTPException(status_code=404, detail="Flow not found") flow_data = flow.model_dump(exclude_unset=True) @@ -99,9 +146,15 @@ def delete_flow( session: Session = Depends(get_session), flow_id: UUID, current_user: User = Depends(get_current_active_user), + settings_service=Depends(get_settings_service), ): """Delete a flow.""" - flow = read_flow(session=session, flow_id=flow_id, current_user=current_user) + flow = read_flow( + session=session, + flow_id=flow_id, + current_user=current_user, + settings_service=settings_service, + ) if not flow: raise HTTPException(status_code=404, detail="Flow not found") session.delete(flow) @@ -109,9 +162,6 @@ def delete_flow( return {"message": "Flow deleted successfully"} -# Define a new model to handle multiple flows - - @router.post("/batch/", response_model=List[FlowRead], status_code=201) def create_flows( *, diff --git a/src/backend/langflow/api/v1/schemas.py b/src/backend/langflow/api/v1/schemas.py index 7a91473e6..70a60de5b 100644 --- a/src/backend/langflow/api/v1/schemas.py +++ b/src/backend/langflow/api/v1/schemas.py @@ -4,7 +4,7 @@ from pathlib import Path from typing import Any, Dict, List, Optional, Union from uuid import UUID -from pydantic import BaseModel, Field, field_validator, model_serializer +from pydantic import BaseModel, Field, RootModel, field_validator, model_serializer from langflow.services.database.models.api_key.model import ApiKeyRead from langflow.services.database.models.base import orjson_dumps @@ -158,14 +158,13 @@ class StreamData(BaseModel): data: dict def __str__(self) -> str: - return ( - 
f"event: {self.event}\ndata: {orjson_dumps(self.data, indent_2=False)}\n\n" - ) + return f"event: {self.event}\ndata: {orjson_dumps(self.data, indent_2=False)}\n\n" class CustomComponentCode(BaseModel): code: str field: Optional[str] = None + field_value: Optional[Any] = None frontend_node: Optional[dict] = None @@ -229,8 +228,8 @@ class ResultDataResponse(BaseModel): class VertexBuildResponse(BaseModel): id: Optional[str] = None + inactivated_vertices: Optional[List[str]] = None next_vertices_ids: Optional[List[str]] = None - inactive_vertices: Optional[List[str]] = None valid: bool params: Optional[Any] = Field(default_factory=dict) """JSON string of the params.""" @@ -245,4 +244,49 @@ class VerticesBuiltResponse(BaseModel): class InputValueRequest(BaseModel): - input_value: str + components: Optional[List[str]] = None + input_value: Optional[str] = None + + # add an example + model_config = { + "json_schema_extra": { + "examples": [ + { + "components": ["components_id", "Component Name"], + "input_value": "input_value", + }, + {"components": ["Component Name"], "input_value": "input_value"}, + {"input_value": "input_value"}, + ] + } + } + + +class Tweaks(RootModel): + root: dict[str, Union[str, dict[str, str]]] = Field( + description="A dictionary of tweaks to adjust the flow's execution. Allows customizing flow behavior dynamically. 
All tweaks are overridden by the input values.", + ) + model_config = { + "json_schema_extra": { + "examples": [ + { + "parameter_name": "value", + "Component Name": {"parameter_name": "value"}, + "component_id": {"parameter_name": "value"}, + } + ] + } + } + + # This should behave like a dict + def __getitem__(self, key): + return self.root[key] + + def __setitem__(self, key, value): + self.root[key] = value + + def __delitem__(self, key): + del self.root[key] + + def items(self): + return self.root.items() diff --git a/src/backend/langflow/api/v1/validate.py b/src/backend/langflow/api/v1/validate.py index b062c1329..b7b43c376 100644 --- a/src/backend/langflow/api/v1/validate.py +++ b/src/backend/langflow/api/v1/validate.py @@ -1,14 +1,20 @@ from fastapi import APIRouter, HTTPException +from loguru import logger + from langflow.api.v1.base import ( Code, CodeValidationResponse, PromptValidationResponse, ValidatePromptRequest, +) +from langflow.base.prompts.utils import ( + add_new_variables_to_template, + get_old_custom_fields, + remove_old_variables_from_template, + update_input_variables_field, validate_prompt, ) -from langflow.template.field.base import TemplateField -from langflow.utils.validate import PROMPT_INPUT_TYPES, validate_code -from loguru import logger +from langflow.utils.validate import validate_code # build router router = APIRouter(prefix="/validate", tags=["Validate"]) @@ -36,13 +42,28 @@ def post_validate_prompt(prompt_request: ValidatePromptRequest): input_variables=input_variables, frontend_node=None, ) - old_custom_fields = get_old_custom_fields(prompt_request) + old_custom_fields = get_old_custom_fields( + prompt_request.custom_fields, prompt_request.name + ) - add_new_variables_to_template(input_variables, prompt_request) + add_new_variables_to_template( + input_variables, + prompt_request.custom_fields, + prompt_request.frontend_node.template, + prompt_request.name, + ) - remove_old_variables_from_template(old_custom_fields, 
input_variables, prompt_request) + remove_old_variables_from_template( + old_custom_fields, + input_variables, + prompt_request.custom_fields, + prompt_request.frontend_node.template, + prompt_request.name, + ) - update_input_variables_field(input_variables, prompt_request) + update_input_variables_field( + input_variables, prompt_request.frontend_node.template + ) return PromptValidationResponse( input_variables=input_variables, @@ -51,70 +72,3 @@ def post_validate_prompt(prompt_request: ValidatePromptRequest): except Exception as e: logger.exception(e) raise HTTPException(status_code=500, detail=str(e)) from e - - -def get_old_custom_fields(prompt_request): - try: - if len(prompt_request.frontend_node.custom_fields) == 1 and prompt_request.name == "": - # If there is only one custom field and the name is empty string - # then we are dealing with the first prompt request after the node was created - prompt_request.name = list(prompt_request.frontend_node.custom_fields.keys())[0] - - old_custom_fields = prompt_request.frontend_node.custom_fields[prompt_request.name] - if old_custom_fields is None: - old_custom_fields = [] - - old_custom_fields = old_custom_fields.copy() - except KeyError: - old_custom_fields = [] - prompt_request.frontend_node.custom_fields[prompt_request.name] = [] - return old_custom_fields - - -def add_new_variables_to_template(input_variables, prompt_request): - for variable in input_variables: - try: - template_field = TemplateField( - name=variable, - display_name=variable, - field_type="str", - show=True, - advanced=False, - multiline=True, - input_types=PROMPT_INPUT_TYPES, - value="", # Set the value to empty string - ) - if variable in prompt_request.frontend_node.template: - # Set the new field with the old value - template_field.value = prompt_request.frontend_node.template[variable]["value"] - - prompt_request.frontend_node.template[variable] = template_field.to_dict() - - # Check if variable is not already in the list before appending 
- if variable not in prompt_request.frontend_node.custom_fields[prompt_request.name]: - prompt_request.frontend_node.custom_fields[prompt_request.name].append(variable) - - except Exception as exc: - logger.exception(exc) - raise HTTPException(status_code=500, detail=str(exc)) from exc - - -def remove_old_variables_from_template(old_custom_fields, input_variables, prompt_request): - for variable in old_custom_fields: - if variable not in input_variables: - try: - # Remove the variable from custom_fields associated with the given name - if variable in prompt_request.frontend_node.custom_fields[prompt_request.name]: - prompt_request.frontend_node.custom_fields[prompt_request.name].remove(variable) - - # Remove the variable from the template - prompt_request.frontend_node.template.pop(variable, None) - - except Exception as exc: - logger.exception(exc) - raise HTTPException(status_code=500, detail=str(exc)) from exc - - -def update_input_variables_field(input_variables, prompt_request): - if "input_variables" in prompt_request.frontend_node.template: - prompt_request.frontend_node.template["input_variables"]["value"] = input_variables diff --git a/src/backend/langflow/base/__init__.py b/src/backend/langflow/base/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/src/backend/langflow/base/data/__init__.py b/src/backend/langflow/base/data/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/src/backend/langflow/base/data/utils.py b/src/backend/langflow/base/data/utils.py new file mode 100644 index 000000000..03f6de046 --- /dev/null +++ b/src/backend/langflow/base/data/utils.py @@ -0,0 +1,83 @@ +from concurrent import futures +from pathlib import Path +from typing import List, Optional, Text + +from langflow.schema.schema import Record + + +def is_hidden(path: Path) -> bool: + return path.name.startswith(".") + + +def retrieve_file_paths( + path: str, + types: List[str], + load_hidden: bool, + recursive: bool, + depth: int, +) -> 
List[str]: + path_obj = Path(path) + if not path_obj.exists() or not path_obj.is_dir(): + raise ValueError(f"Path {path} must exist and be a directory.") + + def match_types(p: Path) -> bool: + return any(p.suffix == f".{t}" for t in types) if types else True + + def is_not_hidden(p: Path) -> bool: + return not is_hidden(p) or load_hidden + + def walk_level(directory: Path, max_depth: int): + directory = directory.resolve() + prefix_length = len(directory.parts) + for p in directory.rglob("*" if recursive else "[!.]*"): + if len(p.parts) - prefix_length <= max_depth: + yield p + + glob = "**/*" if recursive else "*" + paths = walk_level(path_obj, depth) if depth else path_obj.glob(glob) + file_paths = [Text(p) for p in paths if p.is_file() and match_types(p) and is_not_hidden(p)] + + return file_paths + + +def parse_file_to_record(file_path: str, silent_errors: bool) -> Optional[Record]: + # Use the partition function to load the file + from unstructured.partition.auto import partition # type: ignore + + try: + elements = partition(file_path) + except Exception as e: + if not silent_errors: + raise ValueError(f"Error loading file {file_path}: {e}") from e + return None + + # Create a Record + text = "\n\n".join([Text(el) for el in elements]) + metadata = elements.metadata if hasattr(elements, "metadata") else {} + metadata["file_path"] = file_path + record = Record(text=text, data=metadata) + return record + + +def get_elements( + file_paths: List[str], + silent_errors: bool, + max_concurrency: int, + use_multithreading: bool, +) -> List[Optional[Record]]: + if use_multithreading: + records = parallel_load_records(file_paths, silent_errors, max_concurrency) + else: + records = [parse_file_to_record(file_path, silent_errors) for file_path in file_paths] + records = list(filter(None, records)) + return records + + +def parallel_load_records(file_paths: List[str], silent_errors: bool, max_concurrency: int) -> List[Optional[Record]]: + with 
futures.ThreadPoolExecutor(max_workers=max_concurrency) as executor: + loaded_files = executor.map( + lambda file_path: parse_file_to_record(file_path, silent_errors), + file_paths, + ) + # loaded_files is an iterator, so we need to convert it to a list + return list(loaded_files) diff --git a/src/backend/langflow/base/io/__init__.py b/src/backend/langflow/base/io/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/src/backend/langflow/components/io/base/chat.py b/src/backend/langflow/base/io/chat.py similarity index 98% rename from src/backend/langflow/components/io/base/chat.py rename to src/backend/langflow/base/io/chat.py index 8695dbcbc..b9c721190 100644 --- a/src/backend/langflow/components/io/base/chat.py +++ b/src/backend/langflow/base/io/chat.py @@ -59,8 +59,8 @@ class ChatComponent(CustomComponent): ) else: record = Record( - text=message, data={ + "text": message, "session_id": session_id, "sender": sender, "sender_name": sender_name, diff --git a/src/backend/langflow/components/io/base/text.py b/src/backend/langflow/base/io/text.py similarity index 100% rename from src/backend/langflow/components/io/base/text.py rename to src/backend/langflow/base/io/text.py diff --git a/src/backend/langflow/base/prompts/__init__.py b/src/backend/langflow/base/prompts/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/src/backend/langflow/base/prompts/utils.py b/src/backend/langflow/base/prompts/utils.py new file mode 100644 index 000000000..c30d2d6a2 --- /dev/null +++ b/src/backend/langflow/base/prompts/utils.py @@ -0,0 +1,141 @@ +from fastapi import HTTPException +from langchain.prompts import PromptTemplate +from langchain_core.documents import Document +from loguru import logger + +from langflow.api.v1.base import INVALID_NAMES, check_input_variables +from langflow.interface.utils import extract_input_variables_from_prompt +from langflow.schema import Record +from langflow.template.field.prompt import DefaultPromptField + + 
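The `parallel_load_records` helper added in `base/data/utils.py` above follows the standard `ThreadPoolExecutor.map` pattern: map a parser over file paths, collect results in input order, then drop `None` entries for files that failed to parse. A minimal standalone sketch of the same pattern — `parse_file` here is a stand-in for the real `parse_file_to_record`, not langflow code:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List, Optional


def parse_file(path: str, silent_errors: bool) -> Optional[str]:
    # Stand-in parser: returns None on failure instead of raising
    # when silent_errors is True, mirroring parse_file_to_record.
    if path.endswith(".bad"):
        if not silent_errors:
            raise ValueError(f"Error loading file {path}")
        return None
    return f"contents of {path}"


def parallel_load(paths: List[str], silent_errors: bool, max_concurrency: int) -> List[str]:
    with ThreadPoolExecutor(max_workers=max_concurrency) as executor:
        # executor.map yields results in input order; converting the
        # iterator to a list forces all futures to resolve here.
        results = list(executor.map(lambda p: parse_file(p, silent_errors), paths))
    # Drop files that failed to parse, mirroring filter(None, records).
    return [r for r in results if r is not None]


loaded = parallel_load(["a.txt", "b.bad", "c.txt"], silent_errors=True, max_concurrency=2)
print(loaded)  # ['contents of a.txt', 'contents of c.txt']
```

Note that `map` preserves input order regardless of which worker finishes first, which keeps loaded records aligned with the original path list.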
+def dict_values_to_string(d: dict) -> dict:
+    """
+    Converts the values of a dictionary to strings.
+
+    Args:
+        d (dict): The dictionary whose values need to be converted.
+
+    Returns:
+        dict: The dictionary with values converted to strings.
+    """
+    # Walk the dict and convert Record/Document values to plain strings
+    for key, value in d.items():
+        # it could be a list of records or documents or strings
+        if isinstance(value, list):
+            for i, item in enumerate(value):
+                if isinstance(item, Record):
+                    d[key][i] = record_to_string(item)
+                elif isinstance(item, Document):
+                    d[key][i] = document_to_string(item)
+        elif isinstance(value, Record):
+            d[key] = record_to_string(value)
+        elif isinstance(value, Document):
+            d[key] = document_to_string(value)
+    return d
+
+
+def record_to_string(record: Record) -> str:
+    """
+    Convert a record to a string.
+
+    Args:
+        record (Record): The record to convert.
+
+    Returns:
+        str: The record as a string.
+    """
+    return record.text
+
+
+def document_to_string(document: Document) -> str:
+    """
+    Convert a document to a string.
+
+    Args:
+        document (Document): The document to convert.
+
+    Returns:
+        str: The document as a string.
+    """
+    return document.page_content
+
+
+def validate_prompt(prompt_template: str, silent_errors: bool = False) -> list[str]:
+    input_variables = extract_input_variables_from_prompt(prompt_template)
+
+    # Check if there are invalid characters in the input_variables
+    input_variables = check_input_variables(input_variables)
+    if any(var in INVALID_NAMES for var in input_variables):
+        raise ValueError(
+            f"Invalid input variables. None of the variables can be named {', '.join(INVALID_NAMES)}. 
" + ) + + try: + PromptTemplate(template=prompt_template, input_variables=input_variables) + except Exception as exc: + logger.error(f"Invalid prompt: {exc}") + if not silent_errors: + raise ValueError(f"Invalid prompt: {exc}") from exc + + return input_variables + + +def get_old_custom_fields(custom_fields, name): + try: + if len(custom_fields) == 1 and name == "": + # If there is only one custom field and the name is empty string + # then we are dealing with the first prompt request after the node was created + name = list(custom_fields.keys())[0] + + old_custom_fields = custom_fields[name] + if not old_custom_fields: + old_custom_fields = [] + + old_custom_fields = old_custom_fields.copy() + except KeyError: + old_custom_fields = [] + custom_fields[name] = [] + return old_custom_fields + + +def add_new_variables_to_template(input_variables, custom_fields, template, name): + for variable in input_variables: + try: + template_field = DefaultPromptField(name=variable, display_name=variable) + if variable in template: + # Set the new field with the old value + template_field.value = template[variable]["value"] + + template[variable] = template_field.to_dict() + + # Check if variable is not already in the list before appending + if variable not in custom_fields[name]: + custom_fields[name].append(variable) + + except Exception as exc: + logger.exception(exc) + raise HTTPException(status_code=500, detail=str(exc)) from exc + + +def remove_old_variables_from_template( + old_custom_fields, input_variables, custom_fields, template, name +): + for variable in old_custom_fields: + if variable not in input_variables: + try: + # Remove the variable from custom_fields associated with the given name + if variable in custom_fields[name]: + custom_fields[name].remove(variable) + + # Remove the variable from the template + template.pop(variable, None) + + except Exception as exc: + logger.exception(exc) + raise HTTPException(status_code=500, detail=str(exc)) from exc + + +def 
update_input_variables_field(input_variables, template): + if "input_variables" in template: + template["input_variables"]["value"] = input_variables diff --git a/src/backend/langflow/components/chains/ConversationChain.py b/src/backend/langflow/components/chains/ConversationChain.py index 6e1e319d6..774632412 100644 --- a/src/backend/langflow/components/chains/ConversationChain.py +++ b/src/backend/langflow/components/chains/ConversationChain.py @@ -31,7 +31,7 @@ class ConversationChainComponent(CustomComponent): chain = ConversationChain(llm=llm) else: chain = ConversationChain(llm=llm, memory=memory) - result = chain.invoke(inputs) + result = chain.invoke({"input": input_value}) if hasattr(result, "content") and isinstance(result.content, str): result = result.content elif isinstance(result, str): diff --git a/src/backend/langflow/components/data/APIRequest.py b/src/backend/langflow/components/data/APIRequest.py new file mode 100644 index 000000000..1b8655369 --- /dev/null +++ b/src/backend/langflow/components/data/APIRequest.py @@ -0,0 +1,118 @@ +import asyncio +import json +from typing import List, Optional + +import httpx + +from langflow import CustomComponent +from langflow.schema import Record + + +class APIRequest(CustomComponent): + display_name: str = "API Request" + description: str = "Make an HTTP request to the given URL." 
+    output_types: list[str] = ["Record"]
+    documentation: str = "https://docs.langflow.org/components/utilities#api-request"
+    beta: bool = True
+    field_config = {
+        "url": {"display_name": "URL", "info": "The URL to make the request to."},
+        "method": {
+            "display_name": "Method",
+            "info": "The HTTP method to use.",
+            "field_type": "str",
+            "options": ["GET", "POST", "PATCH", "PUT", "DELETE"],
+            "value": "GET",
+        },
+        "headers": {
+            "display_name": "Headers",
+            "info": "The headers to send with the request.",
+            "input_types": ["dict"],
+        },
+        "body": {
+            "display_name": "Body",
+            "info": "The body to send with the request (for POST, PATCH, PUT).",
+            "input_types": ["dict"],
+        },
+        "timeout": {
+            "display_name": "Timeout",
+            "field_type": "int",
+            "info": "The timeout to use for the request.",
+            "value": 5,
+        },
+    }
+
+    async def make_request(
+        self,
+        client: httpx.AsyncClient,
+        method: str,
+        url: str,
+        headers: Optional[dict] = None,
+        body: Optional[dict] = None,
+        timeout: int = 5,
+    ) -> Record:
+        method = method.upper()
+        if method not in ["GET", "POST", "PATCH", "PUT", "DELETE"]:
+            raise ValueError(f"Unsupported method: {method}")
+
+        # Only serialize when a body was provided; json.dumps(None) would send "null"
+        data = json.dumps(body) if body else None
+        try:
+            response = await client.request(
+                method, url, headers=headers, content=data, timeout=timeout
+            )
+            try:
+                result = response.json()
+            except Exception:
+                result = response.text
+            return Record(
+                data={
+                    "source": url,
+                    "headers": headers,
+                    "status_code": response.status_code,
+                    "result": result,
+                },
+            )
+        except httpx.TimeoutException:
+            return Record(
+                data={
+                    "source": url,
+                    "headers": headers,
+                    "status_code": 408,
+                    "error": "Request timed out",
+                },
+            )
+        except Exception as exc:
+            return Record(
+                data={
+                    "source": url,
+                    "headers": headers,
+                    "status_code": 500,
+                    "error": str(exc),
+                },
+            )
+
+    async def build(
+        self,
+        method: str,
+        url: List[str],
+        headers: Optional[dict] = None,
+        body: Optional[List[Record]] = None,
+        timeout: int = 5,
+    ) -> 
List[Record]:
+        if headers is None:
+            headers = {}
+        urls = url if isinstance(url, list) else [url]
+        if body:
+            bodies = [b.data for b in body] if isinstance(body, list) else [body.data]
+        else:
+            # Pad with None so zip still yields one request per URL
+            # when no body is provided.
+            bodies = [None] * len(urls)
+        async with httpx.AsyncClient() as client:
+            results = await asyncio.gather(
+                *[
+                    self.make_request(client, method, u, headers, rec, timeout)
+                    for u, rec in zip(urls, bodies)
+                ]
+            )
+        return results
diff --git a/src/backend/langflow/components/data/Directory.py b/src/backend/langflow/components/data/Directory.py
new file mode 100644
index 000000000..f05b11e2c
--- /dev/null
+++ b/src/backend/langflow/components/data/Directory.py
@@ -0,0 +1,69 @@
+from typing import Any, Dict, List, Optional
+
+from langflow import CustomComponent
+from langflow.base.data.utils import (
+    parallel_load_records,
+    parse_file_to_record,
+    retrieve_file_paths,
+)
+from langflow.schema import Record
+
+
+class DirectoryComponent(CustomComponent):
+    display_name = "Directory"
+    description = "Load files from a directory."
+
+    def build_config(self) -> Dict[str, Any]:
+        return {
+            "path": {"display_name": "Path"},
+            "types": {
+                "display_name": "Types",
+                "info": "File types to load. 
Leave empty to load all types.", + }, + "depth": {"display_name": "Depth", "info": "Depth to search for files."}, + "max_concurrency": {"display_name": "Max Concurrency", "advanced": True}, + "load_hidden": { + "display_name": "Load Hidden", + "advanced": True, + "info": "If true, hidden files will be loaded.", + }, + "recursive": { + "display_name": "Recursive", + "advanced": True, + "info": "If true, the search will be recursive.", + }, + "silent_errors": { + "display_name": "Silent Errors", + "advanced": True, + "info": "If true, errors will not raise an exception.", + }, + "use_multithreading": { + "display_name": "Use Multithreading", + "advanced": True, + }, + } + + def build( + self, + path: str, + types: Optional[List[str]] = None, + depth: int = 0, + max_concurrency: int = 2, + load_hidden: bool = False, + recursive: bool = True, + silent_errors: bool = False, + use_multithreading: bool = True, + ) -> List[Optional[Record]]: + if types is None: + types = [] + resolved_path = self.resolve_path(path) + file_paths = retrieve_file_paths(resolved_path, types, load_hidden, recursive, depth) + loaded_records = [] + + if use_multithreading: + loaded_records = parallel_load_records(file_paths, silent_errors, max_concurrency) + else: + loaded_records = [parse_file_to_record(file_path, silent_errors) for file_path in file_paths] + loaded_records = list(filter(None, loaded_records)) + self.status = loaded_records + return loaded_records diff --git a/src/backend/langflow/components/data/File.py b/src/backend/langflow/components/data/File.py new file mode 100644 index 000000000..dbc14abc4 --- /dev/null +++ b/src/backend/langflow/components/data/File.py @@ -0,0 +1,28 @@ +from typing import Any, Dict, Optional + +from langflow import CustomComponent +from langflow.base.data.utils import parse_file_to_record +from langflow.schema import Record + + +class FileComponent(CustomComponent): + display_name = "File" + description = "Load a file." 
+ + def build_config(self) -> Dict[str, Any]: + return { + "path": {"display_name": "Path"}, + "silent_errors": { + "display_name": "Silent Errors", + "advanced": True, + "info": "If true, errors will not raise an exception.", + }, + } + + def build( + self, + path: str, + silent_errors: bool = False, + ) -> Optional[Record]: + resolved_path = self.resolve_path(path) + return parse_file_to_record(resolved_path, silent_errors) diff --git a/src/backend/langflow/components/documentloaders/FileLoader.py b/src/backend/langflow/components/data/FileLoader.py similarity index 90% rename from src/backend/langflow/components/documentloaders/FileLoader.py rename to src/backend/langflow/components/data/FileLoader.py index 2f74d9d04..d513298c6 100644 --- a/src/backend/langflow/components/documentloaders/FileLoader.py +++ b/src/backend/langflow/components/data/FileLoader.py @@ -11,9 +11,7 @@ class FileLoaderComponent(CustomComponent): beta = True def build_config(self): - loader_options = ["Automatic"] + [ - loader_info["name"] for loader_info in LOADERS_INFO - ] + loader_options = ["Automatic"] + [loader_info["name"] for loader_info in LOADERS_INFO] file_types = [] suffixes = [] @@ -105,9 +103,7 @@ class FileLoaderComponent(CustomComponent): if isinstance(selected_loader_info, dict): loader_import: str = selected_loader_info["import"] else: - raise ValueError( - f"Loader info for {loader} is not a dict\nLoader info:\n{selected_loader_info}" - ) + raise ValueError(f"Loader info for {loader} is not a dict\nLoader info:\n{selected_loader_info}") module_name, class_name = loader_import.rsplit(".", 1) try: @@ -115,9 +111,7 @@ class FileLoaderComponent(CustomComponent): loader_module = __import__(module_name, fromlist=[class_name]) loader_instance = getattr(loader_module, class_name) except ImportError as e: - raise ValueError( - f"Loader {loader} could not be imported\nLoader info:\n{selected_loader_info}" - ) from e + raise ValueError(f"Loader {loader} could not be imported\nLoader 
info:\n{selected_loader_info}") from e
             result = loader_instance(file_path=file_path)
             docs = result.load()
diff --git a/src/backend/langflow/components/data/URL.py b/src/backend/langflow/components/data/URL.py
new file mode 100644
index 000000000..8368e72be
--- /dev/null
+++ b/src/backend/langflow/components/data/URL.py
@@ -0,0 +1,25 @@
+from typing import Any, Dict
+
+from langchain_community.document_loaders.web_base import WebBaseLoader
+
+from langflow import CustomComponent
+from langflow.schema import Record
+
+
+class URLComponent(CustomComponent):
+    display_name = "URL"
+    description = "Load URLs and convert them to records."
+
+    def build_config(self) -> Dict[str, Any]:
+        return {
+            "urls": {"display_name": "URLs"},
+        }
+
+    async def build(
+        self,
+        urls: list[str],
+    ) -> list[Record]:
+        loader = WebBaseLoader(web_paths=urls)
+        docs = loader.load()
+        records = self.to_records(docs)
+        return records
diff --git a/src/backend/langflow/components/data/__init__.py b/src/backend/langflow/components/data/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/src/backend/langflow/components/documentloaders/GatherRecords.py b/src/backend/langflow/components/documentloaders/GatherRecords.py
deleted file mode 100644
index ac298c092..000000000
--- a/src/backend/langflow/components/documentloaders/GatherRecords.py
+++ /dev/null
@@ -1,161 +0,0 @@
-from concurrent import futures
-from pathlib import Path
-from typing import Any, Dict, List, Optional, Text
-
-from langflow import CustomComponent
-from langflow.schema import Record
-
-
-class GatherRecordsComponent(CustomComponent):
-    display_name = "Gather Records"
-    description = "Gather records from a directory."
-
-    def build_config(self) -> Dict[str, Any]:
-        return {
-            "path": {"display_name": "Path"},
-            "types": {
-                "display_name": "Types",
-                "info": "File types to load. 
Leave empty to load all types.", - }, - "depth": {"display_name": "Depth", "info": "Depth to search for files."}, - "max_concurrency": {"display_name": "Max Concurrency", "advanced": True}, - "load_hidden": { - "display_name": "Load Hidden", - "advanced": True, - "info": "If true, hidden files will be loaded.", - }, - "recursive": { - "display_name": "Recursive", - "advanced": True, - "info": "If true, the search will be recursive.", - }, - "silent_errors": { - "display_name": "Silent Errors", - "advanced": True, - "info": "If true, errors will not raise an exception.", - }, - "use_multithreading": { - "display_name": "Use Multithreading", - "advanced": True, - }, - } - - def is_hidden(self, path: Path) -> bool: - return path.name.startswith(".") - - def retrieve_file_paths( - self, - path: str, - types: List[str], - load_hidden: bool, - recursive: bool, - depth: int, - ) -> List[str]: - path_obj = Path(path) - if not path_obj.exists() or not path_obj.is_dir(): - raise ValueError(f"Path {path} must exist and be a directory.") - - def match_types(p: Path) -> bool: - return any(p.suffix == f".{t}" for t in types) if types else True - - def is_not_hidden(p: Path) -> bool: - return not self.is_hidden(p) or load_hidden - - def walk_level(directory: Path, max_depth: int): - directory = directory.resolve() - prefix_length = len(directory.parts) - for p in directory.rglob("*" if recursive else "[!.]*"): - if len(p.parts) - prefix_length <= max_depth: - yield p - - glob = "**/*" if recursive else "*" - paths = walk_level(path_obj, depth) if depth else path_obj.glob(glob) - file_paths = [ - Text(p) - for p in paths - if p.is_file() and match_types(p) and is_not_hidden(p) - ] - - return file_paths - - def parse_file_to_record( - self, file_path: str, silent_errors: bool - ) -> Optional[Record]: - # Use the partition function to load the file - from unstructured.partition.auto import partition # type: ignore - - try: - elements = partition(file_path) - except Exception as e: - 
if not silent_errors: - raise ValueError(f"Error loading file {file_path}: {e}") from e - return None - - # Create a Record - text = "\n\n".join([Text(el) for el in elements]) - metadata = elements.metadata if hasattr(elements, "metadata") else {} - metadata["file_path"] = file_path - record = Record(text=text, data=metadata) - return record - - def get_elements( - self, - file_paths: List[str], - silent_errors: bool, - max_concurrency: int, - use_multithreading: bool, - ) -> List[Optional[Record]]: - if use_multithreading: - records = self.parallel_load_records( - file_paths, silent_errors, max_concurrency - ) - else: - records = [ - self.parse_file_to_record(file_path, silent_errors) - for file_path in file_paths - ] - records = list(filter(None, records)) - return records - - def parallel_load_records( - self, file_paths: List[str], silent_errors: bool, max_concurrency: int - ) -> List[Optional[Record]]: - with futures.ThreadPoolExecutor(max_workers=max_concurrency) as executor: - loaded_files = executor.map( - lambda file_path: self.parse_file_to_record(file_path, silent_errors), - file_paths, - ) - # loaded_files is an iterator, so we need to convert it to a list - return list(loaded_files) - - def build( - self, - path: str, - types: Optional[List[str]] = None, - depth: int = 0, - max_concurrency: int = 2, - load_hidden: bool = False, - recursive: bool = True, - silent_errors: bool = False, - use_multithreading: bool = True, - ) -> List[Optional[Record]]: - if types is None: - types = [] - resolved_path = self.resolve_path(path) - file_paths = self.retrieve_file_paths( - resolved_path, types, load_hidden, recursive, depth - ) - loaded_records = [] - - if use_multithreading: - loaded_records = self.parallel_load_records( - file_paths, silent_errors, max_concurrency - ) - else: - loaded_records = [ - self.parse_file_to_record(file_path, silent_errors) - for file_path in file_paths - ] - loaded_records = list(filter(None, loaded_records)) - self.status = 
loaded_records - return loaded_records diff --git a/src/backend/langflow/components/documentloaders/UrlLoader.py b/src/backend/langflow/components/documentloaders/UrlLoader.py deleted file mode 100644 index eb60ac572..000000000 --- a/src/backend/langflow/components/documentloaders/UrlLoader.py +++ /dev/null @@ -1,47 +0,0 @@ -from typing import List - -from langchain import document_loaders -from langchain_core.documents import Document - -from langflow import CustomComponent - - -class UrlLoaderComponent(CustomComponent): - display_name: str = "Url Loader" - description: str = "Generic Url Loader Component" - - def build_config(self): - return { - "web_path": { - "display_name": "Url", - "required": True, - }, - "loader": { - "display_name": "Loader", - "is_list": True, - "required": True, - "options": [ - "AZLyricsLoader", - "CollegeConfidentialLoader", - "GitbookLoader", - "HNLoader", - "IFixitLoader", - "IMSDbLoader", - "WebBaseLoader", - ], - "value": "WebBaseLoader", - }, - "code": {"show": False}, - } - - def build(self, web_path: str, loader: str) -> List[Document]: - try: - loader_instance = getattr(document_loaders, loader)(web_path=web_path) - except Exception as e: - raise ValueError(f"No loader found for: {web_path}") from e - docs = loader_instance.load() - avg_length = sum(len(doc.page_content) for doc in docs if hasattr(doc, "page_content")) / len(docs) - self.status = f"""{len(docs)} documents) - \nAvg. 
Document Length (characters): {int(avg_length)}
-            Documents: {docs[:3]}..."""
-        return docs
diff --git a/src/backend/langflow/components/experimental/ClearMessageHistory.py b/src/backend/langflow/components/experimental/ClearMessageHistory.py
new file mode 100644
index 000000000..6d264422f
--- /dev/null
+++ b/src/backend/langflow/components/experimental/ClearMessageHistory.py
@@ -0,0 +1,24 @@
+from langflow import CustomComponent
+from langflow.memory import delete_messages, get_messages
+
+
+class ClearMessageHistoryComponent(CustomComponent):
+    display_name = "Clear Message History"
+    description = "A component to clear the message history."
+
+    def build_config(self):
+        return {
+            "session_id": {
+                "display_name": "Session ID",
+                "info": "The session ID to clear the message history.",
+            }
+        }
+
+    def build(
+        self,
+        session_id: str,
+    ) -> list:
+        delete_messages(session_id=session_id)
+        records = get_messages(session_id=session_id)
+        self.status = records
+        return records
diff --git a/src/backend/langflow/components/experimental/ExtractDataFromRecord.py b/src/backend/langflow/components/experimental/ExtractDataFromRecord.py
new file mode 100644
index 000000000..2b28545b5
--- /dev/null
+++ b/src/backend/langflow/components/experimental/ExtractDataFromRecord.py
@@ -0,0 +1,16 @@
+from langflow import CustomComponent
+from langflow.schema import Record
+
+
+class ExtractKeyFromRecordComponent(CustomComponent):
+    display_name = "Extract Key From Record"
+    description = "Extracts a key from a record." 
+
+    field_config = {
+        "record": {"display_name": "Record"},
+    }
+
+    def build(self, record: Record, key: str, silent_error: bool = True) -> dict:
+        data = getattr(record, key, None) if silent_error else getattr(record, key)
+        self.status = data
+        return data
diff --git a/src/backend/langflow/components/experimental/GetNotified.py b/src/backend/langflow/components/experimental/GetNotified.py
new file mode 100644
index 000000000..d97c30df2
--- /dev/null
+++ b/src/backend/langflow/components/experimental/GetNotified.py
@@ -0,0 +1,20 @@
+from langflow import CustomComponent
+from langflow.schema import Record
+
+
+class GetNotifiedComponent(CustomComponent):
+    display_name = "Get Notified"
+    description = "A component to get notified by the Notify component."
+
+    def build_config(self):
+        return {
+            "name": {
+                "display_name": "Name",
+                "info": "The name of the notification to listen for.",
+            },
+        }
+
+    def build(self, name: str) -> Record:
+        state = self.get_state(name)
+        self.status = state
+        return state
diff --git a/src/backend/langflow/components/utilities/ListFlows.py b/src/backend/langflow/components/experimental/ListFlows.py
similarity index 100%
rename from src/backend/langflow/components/utilities/ListFlows.py
rename to src/backend/langflow/components/experimental/ListFlows.py
diff --git a/src/backend/langflow/components/experimental/MergeRecords.py b/src/backend/langflow/components/experimental/MergeRecords.py
new file mode 100644
index 000000000..9c280d12a
--- /dev/null
+++ b/src/backend/langflow/components/experimental/MergeRecords.py
@@ -0,0 +1,25 @@
+from langflow import CustomComponent
+from langflow.schema import Record
+
+
+class MergeRecordsComponent(CustomComponent):
+    display_name = "Merge Records"
+    description = "Merges records." 
+
+    field_config = {
+        "records": {"display_name": "Records"},
+    }
+
+    def build(self, records: list[Record]) -> Record:
+        if not records:
+            return Record(data={})
+        if len(records) == 1:
+            return records[0]
+        merged_record = None
+        for record in records:
+            if merged_record is None:
+                merged_record = record
+            else:
+                merged_record += record
+        self.status = merged_record
+        return merged_record
diff --git a/src/backend/langflow/components/memories/MessageHistory.py b/src/backend/langflow/components/experimental/MessageHistory.py
similarity index 100%
rename from src/backend/langflow/components/memories/MessageHistory.py
rename to src/backend/langflow/components/experimental/MessageHistory.py
diff --git a/src/backend/langflow/components/experimental/Notify.py b/src/backend/langflow/components/experimental/Notify.py
new file mode 100644
index 000000000..3b5662355
--- /dev/null
+++ b/src/backend/langflow/components/experimental/Notify.py
@@ -0,0 +1,39 @@
+from typing import Optional
+
+from langflow import CustomComponent
+from langflow.schema import Record
+
+
+class NotifyComponent(CustomComponent):
+    display_name = "Notify"
+    description = "A component to generate a notification for the Get Notified component." 
+ + def build_config(self): + return { + "name": {"display_name": "Name", "info": "The name of the notification."}, + "record": {"display_name": "Record", "info": "The record to store."}, + "append": { + "display_name": "Append", + "info": "If True, the record will be appended to the notification.", + }, + } + + def build(self, name: str, record: Optional[Record] = None, append: bool = False) -> Record: + if record and not isinstance(record, Record): + if isinstance(record, str): + record = Record(text=record) + elif isinstance(record, dict): + record = Record(data=record) + else: + record = Record(text=str(record)) + elif not record: + record = Record(text="") + if record: + if append: + self.append_state(name, record) + else: + self.update_state(name, record) + else: + self.status = "No record provided." + self.status = record + return record diff --git a/src/backend/langflow/components/utilities/RunFlow.py b/src/backend/langflow/components/experimental/RunFlow.py similarity index 94% rename from src/backend/langflow/components/utilities/RunFlow.py rename to src/backend/langflow/components/experimental/RunFlow.py index d0e49ac90..94ba88044 100644 --- a/src/backend/langflow/components/utilities/RunFlow.py +++ b/src/backend/langflow/components/experimental/RunFlow.py @@ -39,10 +39,7 @@ class RunFlowComponent(CustomComponent): records.append(record) return records - async def build( - self, input_value: Text, flow_name: str, tweaks: NestedDict - ) -> Record: - + async def build(self, input_value: Text, flow_name: str, tweaks: NestedDict) -> Record: results: List[Optional[ResultData]] = await self.run_flow( input_value=input_value, flow_name=flow_name, tweaks=tweaks ) diff --git a/src/backend/langflow/components/utilities/RunnableExecutor.py b/src/backend/langflow/components/experimental/RunnableExecutor.py similarity index 100% rename from src/backend/langflow/components/utilities/RunnableExecutor.py rename to 
src/backend/langflow/components/experimental/RunnableExecutor.py
diff --git a/src/backend/langflow/components/utilities/SQLExecutor.py b/src/backend/langflow/components/experimental/SQLExecutor.py
similarity index 76%
rename from src/backend/langflow/components/utilities/SQLExecutor.py
rename to src/backend/langflow/components/experimental/SQLExecutor.py
index 530391c31..e1b4e699f 100644
--- a/src/backend/langflow/components/utilities/SQLExecutor.py
+++ b/src/backend/langflow/components/experimental/SQLExecutor.py
@@ -11,7 +11,10 @@ class SQLExecutorComponent(CustomComponent):
 
     def build_config(self):
         return {
-            "database": {"display_name": "Database"},
+            "database_url": {
+                "display_name": "Database URL",
+                "info": "The URL of the database.",
+            },
             "include_columns": {
                 "display_name": "Include Columns",
                 "info": "Include columns in the result.",
@@ -26,15 +29,25 @@ class SQLExecutorComponent(CustomComponent):
             },
         }
 
+    def clean_up_uri(self, uri: str) -> str:
+        # SQLAlchemy no longer accepts the deprecated "postgres://" scheme
+        if uri.startswith("postgres://"):
+            uri = uri.replace("postgres://", "postgresql://", 1)
+        return uri.strip()
+
     def build(
         self,
         query: str,
-        database: SQLDatabase,
+        database_url: str,
         include_columns: bool = False,
         passthrough: bool = False,
         add_error: bool = False,
     ) -> Text:
         error = None
+        try:
+            database = SQLDatabase.from_uri(self.clean_up_uri(database_url))
+        except Exception as e:
+            raise ValueError(f"An error occurred while connecting to the database: {e}") from e
         try:
             tool = QuerySQLDataBaseTool(db=database)
             result = tool.run(query, include_columns=include_columns)
diff --git a/src/backend/langflow/components/experimental/TextToRecord.py b/src/backend/langflow/components/experimental/TextToRecord.py
new file mode 100644
index 000000000..fcb91fdee
--- /dev/null
+++ b/src/backend/langflow/components/experimental/TextToRecord.py
@@ -0,0 +1,99 @@
+from typing import Any
+
+from langflow import CustomComponent
+from langflow.schema import Record
+from langflow.template.field.base import TemplateField
+
+
+class 
TextToRecordComponent(CustomComponent): + display_name = "Text to Record" + description = "A component to create a record from key-value pairs." + field_order = ["mode", "keys", "n_keys"] + + def set_key_template(self, build_config, field_value): + keys_template = TemplateField( + name="n_keys" if field_value == "Number" else "keys", + field_type="dict" if field_value == "Number" else "str", + is_list=False if field_value == "Number" else True, + display_name="Keys", + info=( + "The Number of keys to use for the record." + if field_value == "Number" + else "The keys to use for the record." + ), + input_types=["Text"], + ) + build_config["keys"] = keys_template.to_dict() + + def set_n_keys(self, build_config, field_name, field_value): + if int(field_value) == 0: + keep = ["n_keys", "code"] + for key in build_config.copy(): + if key in keep: + continue + del build_config[key] + build_config[field_name]["value"] = int(field_value) + + # Add new fields depending on the field value + for i in range(int(field_value)): + field = TemplateField( + name=f"Key and Value {i}", + field_type="dict", + display_name="", + info="The key for the record.", + input_types=["Text"], + ) + build_config[field.name] = field.to_dict() + + def set_keys_template(self, build_config, field_value): + for key in build_config.copy(): + if key == "keys": + continue + del build_config[key] + for i in range(int(field_value)): + field = TemplateField( + name=f"Key and Value {i}", + field_type="dict", + display_name="", + info="The key for the record.", + input_types=["Text"], + ) + build_config[field.name] = field.to_dict() + + def update_build_config( + self, build_config: dict, field_name: str, field_value: Any + ): + if field_name == "mode": + build_config["mode"]["value"] = field_value + self.set_key_template(build_config, field_value) + if field_value is None: + return + if field_name == "n_keys": + self.set_n_keys(build_config, field_name, field_value) + elif field_name == "keys": + 
self.set_keys_template(build_config, field_value) + return build_config + + def build_config(self): + return { + "mode": { + "display_name": "Mode", + "options": ["Text", "Number"], + "info": "The mode to use for creating the record.", + "real_time_refresh": True, + "input_types": [], + }, + } + + def build(self, mode: str, **kwargs) -> Record: + if mode == "Text": + data = kwargs + else: + data = { + k: v + for key, d in kwargs.items() + for k, v in d.items() + if key not in ["mode", "n_keys", "keys"] + } + record = Record(data=data) + return record diff --git a/src/backend/langflow/components/experimental/__init__.py b/src/backend/langflow/components/experimental/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/src/backend/langflow/components/custom_components/CustomComponent.py b/src/backend/langflow/components/helpers/CustomComponent.py similarity index 91% rename from src/backend/langflow/components/custom_components/CustomComponent.py rename to src/backend/langflow/components/helpers/CustomComponent.py index 533ccb727..c45b5effd 100644 --- a/src/backend/langflow/components/custom_components/CustomComponent.py +++ b/src/backend/langflow/components/helpers/CustomComponent.py @@ -4,6 +4,7 @@ from langflow.field_typing import Data class Component(CustomComponent): documentation: str = "http://docs.langflow.org/components/custom" + icon = "custom_components" def build_config(self): return {"param": {"display_name": "Parameter"}} diff --git a/src/backend/langflow/components/utilities/DocumentToRecord.py b/src/backend/langflow/components/helpers/DocumentToRecord.py similarity index 100% rename from src/backend/langflow/components/utilities/DocumentToRecord.py rename to src/backend/langflow/components/helpers/DocumentToRecord.py diff --git a/src/backend/langflow/components/helpers/IDGenerator.py b/src/backend/langflow/components/helpers/IDGenerator.py new file mode 100644 index 000000000..73c56eb7b --- /dev/null +++ 
b/src/backend/langflow/components/helpers/IDGenerator.py @@ -0,0 +1,28 @@ +import uuid +from typing import Any, Text + +from langflow import CustomComponent + + +class UUIDGeneratorComponent(CustomComponent): + documentation: str = "http://docs.langflow.org/components/custom" + display_name = "Unique ID Generator" + description = "Generates a unique ID." + + def update_build_config( + self, build_config: dict, field_name: Text, field_value: Any + ): + if field_name == "unique_id": + build_config[field_name]["value"] = str(uuid.uuid4()) + return build_config + + def build_config(self): + return { + "unique_id": { + "display_name": "Value", + "real_time_refresh": True, + } + } + + def build(self, unique_id: str) -> str: + return unique_id diff --git a/src/backend/langflow/components/utilities/PythonFunction.py b/src/backend/langflow/components/helpers/PythonFunction.py similarity index 100% rename from src/backend/langflow/components/utilities/PythonFunction.py rename to src/backend/langflow/components/helpers/PythonFunction.py diff --git a/src/backend/langflow/components/utilities/RecordsAsText.py b/src/backend/langflow/components/helpers/RecordsAsText.py similarity index 83% rename from src/backend/langflow/components/utilities/RecordsAsText.py rename to src/backend/langflow/components/helpers/RecordsAsText.py index debf3eed2..f7750bdba 100644 --- a/src/backend/langflow/components/utilities/RecordsAsText.py +++ b/src/backend/langflow/components/helpers/RecordsAsText.py @@ -6,7 +6,7 @@ from langflow.schema import Record class RecordsAsTextComponent(CustomComponent): display_name = "Records to Text" - description = "Converts Records a list of Records to text using a template." + description = "Converts Records into a single piece of text using a template." def build_config(self): return { @@ -16,7 +16,7 @@ class RecordsAsTextComponent(CustomComponent): }, "template": { "display_name": "Template", - "info": "The template to use for formatting the records. 
It must contain the keys {text} and {data}.", + "info": "The template to use for formatting the records. It can contain the keys {text}, {data} or any other key in the Record.", }, } @@ -25,6 +25,8 @@ class RecordsAsTextComponent(CustomComponent): records: list[Record], template: str = "Text: {text}\nData: {data}", ) -> Text: + if not records: + return "" if isinstance(records, Record): records = [records] diff --git a/src/backend/langflow/components/helpers/__init__.py b/src/backend/langflow/components/helpers/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/src/backend/langflow/components/io/ChatInput.py b/src/backend/langflow/components/inputs/ChatInput.py similarity index 92% rename from src/backend/langflow/components/io/ChatInput.py rename to src/backend/langflow/components/inputs/ChatInput.py index de8ce14cb..e5867751c 100644 --- a/src/backend/langflow/components/io/ChatInput.py +++ b/src/backend/langflow/components/inputs/ChatInput.py @@ -1,6 +1,6 @@ from typing import Optional, Union -from langflow.components.io.base.chat import ChatComponent +from langflow.base.io.chat import ChatComponent from langflow.field_typing import Text from langflow.schema import Record diff --git a/src/backend/langflow/components/prompts/Prompt.py b/src/backend/langflow/components/inputs/Prompt.py similarity index 73% rename from src/backend/langflow/components/prompts/Prompt.py rename to src/backend/langflow/components/inputs/Prompt.py index 975998919..8ee6f0232 100644 --- a/src/backend/langflow/components/prompts/Prompt.py +++ b/src/backend/langflow/components/inputs/Prompt.py @@ -1,6 +1,7 @@ from langchain_core.prompts import PromptTemplate from langflow import CustomComponent +from langflow.base.prompts.utils import dict_values_to_string from langflow.field_typing import Prompt, TemplateField, Text @@ -8,6 +9,7 @@ class PromptComponent(CustomComponent): display_name: str = "Prompt" description: str = "A component for creating prompts using templates" 
beta = True + icon = "terminal-square" def build_config(self): return { @@ -21,16 +23,13 @@ class PromptComponent(CustomComponent): **kwargs, ) -> Text: prompt_template = PromptTemplate.from_template(Text(template)) - - attributes_to_check = ["text", "page_content"] - for key, value in kwargs.copy().items(): - for attribute in attributes_to_check: - if hasattr(value, attribute): - kwargs[key] = getattr(value, attribute) - + kwargs = dict_values_to_string(kwargs) + kwargs = { + k: "\n".join(v) if isinstance(v, list) else v for k, v in kwargs.items() + } try: formated_prompt = prompt_template.format(**kwargs) except Exception as exc: raise ValueError(f"Error formatting prompt: {exc}") from exc - self.status = f'Prompt: "{formated_prompt}"' + self.status = f'Prompt:\n"{formated_prompt}"' return formated_prompt diff --git a/src/backend/langflow/components/io/TextInput.py b/src/backend/langflow/components/inputs/TextInput.py similarity index 84% rename from src/backend/langflow/components/io/TextInput.py rename to src/backend/langflow/components/inputs/TextInput.py index dc7ff1a73..034cf527b 100644 --- a/src/backend/langflow/components/io/TextInput.py +++ b/src/backend/langflow/components/inputs/TextInput.py @@ -1,6 +1,6 @@ from typing import Optional -from langflow.components.io.base.text import TextComponent +from langflow.base.io.text import TextComponent from langflow.field_typing import Text diff --git a/src/backend/langflow/components/inputs/__init__.py b/src/backend/langflow/components/inputs/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/src/backend/langflow/components/utilities/BingSearchAPIWrapper.py b/src/backend/langflow/components/langchain_utilities/BingSearchAPIWrapper.py similarity index 100% rename from src/backend/langflow/components/utilities/BingSearchAPIWrapper.py rename to src/backend/langflow/components/langchain_utilities/BingSearchAPIWrapper.py diff --git 
a/src/backend/langflow/components/utilities/GoogleSearchAPIWrapper.py b/src/backend/langflow/components/langchain_utilities/GoogleSearchAPIWrapper.py similarity index 100% rename from src/backend/langflow/components/utilities/GoogleSearchAPIWrapper.py rename to src/backend/langflow/components/langchain_utilities/GoogleSearchAPIWrapper.py diff --git a/src/backend/langflow/components/utilities/GoogleSerperAPIWrapper.py b/src/backend/langflow/components/langchain_utilities/GoogleSerperAPIWrapper.py similarity index 100% rename from src/backend/langflow/components/utilities/GoogleSerperAPIWrapper.py rename to src/backend/langflow/components/langchain_utilities/GoogleSerperAPIWrapper.py diff --git a/src/backend/langflow/components/utilities/JSONDocumentBuilder.py b/src/backend/langflow/components/langchain_utilities/JSONDocumentBuilder.py similarity index 100% rename from src/backend/langflow/components/utilities/JSONDocumentBuilder.py rename to src/backend/langflow/components/langchain_utilities/JSONDocumentBuilder.py diff --git a/src/backend/langflow/components/utilities/SQLDatabase.py b/src/backend/langflow/components/langchain_utilities/SQLDatabase.py similarity index 100% rename from src/backend/langflow/components/utilities/SQLDatabase.py rename to src/backend/langflow/components/langchain_utilities/SQLDatabase.py diff --git a/src/backend/langflow/components/utilities/SearxSearchWrapper.py b/src/backend/langflow/components/langchain_utilities/SearxSearchWrapper.py similarity index 100% rename from src/backend/langflow/components/utilities/SearxSearchWrapper.py rename to src/backend/langflow/components/langchain_utilities/SearxSearchWrapper.py diff --git a/src/backend/langflow/components/utilities/SerpAPIWrapper.py b/src/backend/langflow/components/langchain_utilities/SerpAPIWrapper.py similarity index 100% rename from src/backend/langflow/components/utilities/SerpAPIWrapper.py rename to src/backend/langflow/components/langchain_utilities/SerpAPIWrapper.py diff 
--git a/src/backend/langflow/components/utilities/WikipediaAPIWrapper.py b/src/backend/langflow/components/langchain_utilities/WikipediaAPIWrapper.py similarity index 100% rename from src/backend/langflow/components/utilities/WikipediaAPIWrapper.py rename to src/backend/langflow/components/langchain_utilities/WikipediaAPIWrapper.py diff --git a/src/backend/langflow/components/utilities/WolframAlphaAPIWrapper.py b/src/backend/langflow/components/langchain_utilities/WolframAlphaAPIWrapper.py similarity index 100% rename from src/backend/langflow/components/utilities/WolframAlphaAPIWrapper.py rename to src/backend/langflow/components/langchain_utilities/WolframAlphaAPIWrapper.py diff --git a/src/backend/langflow/components/models/AmazonBedrockModel.py b/src/backend/langflow/components/models/AmazonBedrockModel.py index ce7347fde..00ece9a23 100644 --- a/src/backend/langflow/components/models/AmazonBedrockModel.py +++ b/src/backend/langflow/components/models/AmazonBedrockModel.py @@ -7,7 +7,7 @@ from langflow.field_typing import Text class AmazonBedrockComponent(LCModelComponent): - display_name: str = "Amazon Bedrock Model" + display_name: str = "Amazon Bedrock" description: str = "Generate text using LLM model from Amazon Bedrock." icon = "Amazon" diff --git a/src/backend/langflow/components/models/AnthropicModel.py b/src/backend/langflow/components/models/AnthropicModel.py index fb891f9a3..83a7fd461 100644 --- a/src/backend/langflow/components/models/AnthropicModel.py +++ b/src/backend/langflow/components/models/AnthropicModel.py @@ -8,7 +8,7 @@ from langflow.field_typing import Text class AnthropicLLM(LCModelComponent): - display_name: str = "AnthropicModel" + display_name: str = "Anthropic" description: str = "Generate text using Anthropic Chat&Completion large language models." 
icon = "Anthropic" diff --git a/src/backend/langflow/components/models/AzureOpenAIModel.py b/src/backend/langflow/components/models/AzureOpenAIModel.py index 81c22d399..5d16cc0c4 100644 --- a/src/backend/langflow/components/models/AzureOpenAIModel.py +++ b/src/backend/langflow/components/models/AzureOpenAIModel.py @@ -9,7 +9,7 @@ from langflow.field_typing import Text class AzureChatOpenAIComponent(LCModelComponent): - display_name: str = "AzureOpenAI Model" + display_name: str = "AzureOpenAI" description: str = "Generate text using LLM model from Azure OpenAI." documentation: str = "https://python.langchain.com/docs/integrations/llms/azure_openai" beta = False diff --git a/src/backend/langflow/components/models/BaiduQianfanChatModel.py b/src/backend/langflow/components/models/BaiduQianfanChatModel.py index 1ef65ee33..76ae69e42 100644 --- a/src/backend/langflow/components/models/BaiduQianfanChatModel.py +++ b/src/backend/langflow/components/models/BaiduQianfanChatModel.py @@ -8,7 +8,7 @@ from langflow.field_typing import Text class QianfanChatEndpointComponent(LCModelComponent): - display_name: str = "QianfanChat Model" + display_name: str = "QianfanChat" description: str = ( "Generate text using Baidu Qianfan chat models. Get more detail from " "https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint." 
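Several hunks above add `update_build_config(build_config, field_name, field_value)` alongside `real_time_refresh` fields (for example the new `IDGenerator` helper). A rough standalone sketch of that callback contract follows — plain dicts only, no Langflow imports, so the shapes here are illustrative assumptions rather than the real framework API:

```python
import uuid

# Sketch of the update_build_config contract: the callback receives the
# component's build_config dict, the name of the field that triggered the
# refresh, and (optionally) its latest value; it mutates and returns the config.
def update_build_config(build_config: dict, field_name: str, field_value=None) -> dict:
    if field_name == "unique_id":
        # Mirrors the IDGenerator behavior above: each refresh regenerates the value.
        build_config[field_name]["value"] = str(uuid.uuid4())
    return build_config

build_config = {"unique_id": {"display_name": "Value", "real_time_refresh": True}}
updated = update_build_config(build_config, "unique_id")
```

With `real_time_refresh: True` the callback runs on every change to the field; with `refresh_button: True` it runs when the user clicks the refresh button next to the field, and the frontend re-renders the field from the returned config.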
diff --git a/src/backend/langflow/components/models/CTransformersModel.py b/src/backend/langflow/components/models/CTransformersModel.py index 219d74440..160f698db 100644 --- a/src/backend/langflow/components/models/CTransformersModel.py +++ b/src/backend/langflow/components/models/CTransformersModel.py @@ -7,7 +7,7 @@ from langflow.field_typing import Text class CTransformersComponent(LCModelComponent): - display_name = "CTransformersModel" + display_name = "CTransformers" description = "Generate text using CTransformers LLM models" documentation = "https://python.langchain.com/docs/modules/model_io/models/llms/integrations/ctransformers" diff --git a/src/backend/langflow/components/models/CohereModel.py b/src/backend/langflow/components/models/CohereModel.py index c0b603695..1cab1bec7 100644 --- a/src/backend/langflow/components/models/CohereModel.py +++ b/src/backend/langflow/components/models/CohereModel.py @@ -5,7 +5,7 @@ from langflow.field_typing import Text class CohereComponent(LCModelComponent): - display_name = "CohereModel" + display_name = "Cohere" description = "Generate text using Cohere large language models." documentation = "https://python.langchain.com/docs/modules/model_io/models/llms/integrations/cohere" diff --git a/src/backend/langflow/components/models/GoogleGenerativeAIModel.py b/src/backend/langflow/components/models/GoogleGenerativeAIModel.py index c979e6db4..b8c3d3331 100644 --- a/src/backend/langflow/components/models/GoogleGenerativeAIModel.py +++ b/src/backend/langflow/components/models/GoogleGenerativeAIModel.py @@ -8,7 +8,7 @@ from langflow.field_typing import RangeSpec, Text class GoogleGenerativeAIComponent(LCModelComponent): - display_name: str = "Google Generative AIModel" + display_name: str = "Google Generative AI" description: str = "Generate text using Google Generative AI to generate text." 
icon = "GoogleGenerativeAI" icon = "Google" diff --git a/src/backend/langflow/components/models/HuggingFaceModel.py b/src/backend/langflow/components/models/HuggingFaceModel.py index 33cc57815..c56015741 100644 --- a/src/backend/langflow/components/models/HuggingFaceModel.py +++ b/src/backend/langflow/components/models/HuggingFaceModel.py @@ -8,7 +8,7 @@ from langflow.field_typing import Text class HuggingFaceEndpointsComponent(LCModelComponent): - display_name: str = "Hugging Face Inference API models" + display_name: str = "Hugging Face Inference API" description: str = "Generate text using LLM model from Hugging Face Inference API." icon = "HuggingFace" diff --git a/src/backend/langflow/components/models/LlamaCppModel.py b/src/backend/langflow/components/models/LlamaCppModel.py index 468a1ac8b..48cea7ad0 100644 --- a/src/backend/langflow/components/models/LlamaCppModel.py +++ b/src/backend/langflow/components/models/LlamaCppModel.py @@ -7,7 +7,7 @@ from langflow.field_typing import Text class LlamaCppComponent(LCModelComponent): - display_name = "LlamaCppModel" + display_name = "LlamaCpp" description = "Generate text using llama.cpp model." documentation = "https://python.langchain.com/docs/modules/model_io/models/llms/integrations/llamacpp" diff --git a/src/backend/langflow/components/models/OllamaModel.py b/src/backend/langflow/components/models/OllamaModel.py index 9de1847f0..cfd1e2a2e 100644 --- a/src/backend/langflow/components/models/OllamaModel.py +++ b/src/backend/langflow/components/models/OllamaModel.py @@ -13,7 +13,7 @@ from langflow.field_typing import Text class ChatOllamaComponent(LCModelComponent): - display_name = "ChatOllamaModel" + display_name = "ChatOllama" description = "Generate text using Local LLM for chat with Ollama." 
icon = "Ollama" diff --git a/src/backend/langflow/components/models/OpenAIModel.py b/src/backend/langflow/components/models/OpenAIModel.py index f595255ce..2a8314fd2 100644 --- a/src/backend/langflow/components/models/OpenAIModel.py +++ b/src/backend/langflow/components/models/OpenAIModel.py @@ -8,7 +8,7 @@ from langflow.field_typing import NestedDict, Text class OpenAIModelComponent(LCModelComponent): - display_name = "OpenAI Model" + display_name = "OpenAI" description = "Generates text using OpenAI's models." icon = "OpenAI" diff --git a/src/backend/langflow/components/models/VertexAiModel.py b/src/backend/langflow/components/models/VertexAiModel.py index c4354d999..11c49c7b4 100644 --- a/src/backend/langflow/components/models/VertexAiModel.py +++ b/src/backend/langflow/components/models/VertexAiModel.py @@ -7,7 +7,7 @@ from langflow.field_typing import Text class ChatVertexAIComponent(LCModelComponent): - display_name = "ChatVertexAIModel" + display_name = "ChatVertexAI" description = "Generate text using Vertex AI Chat large language models API." 
icon = "VertexAI" diff --git a/src/backend/langflow/components/io/ChatOutput.py b/src/backend/langflow/components/outputs/ChatOutput.py similarity index 92% rename from src/backend/langflow/components/io/ChatOutput.py rename to src/backend/langflow/components/outputs/ChatOutput.py index a528b65b8..aa61159c9 100644 --- a/src/backend/langflow/components/io/ChatOutput.py +++ b/src/backend/langflow/components/outputs/ChatOutput.py @@ -1,6 +1,6 @@ from typing import Optional, Union -from langflow.components.io.base.chat import ChatComponent +from langflow.base.io.chat import ChatComponent from langflow.field_typing import Text from langflow.schema import Record diff --git a/src/backend/langflow/components/io/TextOutput.py b/src/backend/langflow/components/outputs/TextOutput.py similarity index 87% rename from src/backend/langflow/components/io/TextOutput.py rename to src/backend/langflow/components/outputs/TextOutput.py index f92e09146..c971a9699 100644 --- a/src/backend/langflow/components/io/TextOutput.py +++ b/src/backend/langflow/components/outputs/TextOutput.py @@ -1,6 +1,6 @@ from typing import Optional -from langflow.components.io.base.text import TextComponent +from langflow.base.io.text import TextComponent from langflow.field_typing import Text diff --git a/src/backend/langflow/components/outputs/__init__.py b/src/backend/langflow/components/outputs/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/src/backend/langflow/components/textsplitters/CharacterTextSplitter.py b/src/backend/langflow/components/textsplitters/CharacterTextSplitter.py index d165f47fd..96576a4a3 100644 --- a/src/backend/langflow/components/textsplitters/CharacterTextSplitter.py +++ b/src/backend/langflow/components/textsplitters/CharacterTextSplitter.py @@ -1,8 +1,9 @@ from typing import List from langchain.text_splitter import CharacterTextSplitter -from langchain_core.documents.base import Document + from langflow import CustomComponent +from langflow.schema.schema 
import Record class CharacterTextSplitterComponent(CustomComponent): @@ -11,7 +12,7 @@ class CharacterTextSplitterComponent(CustomComponent): def build_config(self): return { - "documents": {"display_name": "Documents"}, + "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]}, "chunk_overlap": {"display_name": "Chunk Overlap", "default": 200}, "chunk_size": {"display_name": "Chunk Size", "default": 1000}, "separator": {"display_name": "Separator", "default": "\n"}, @@ -19,17 +20,24 @@ class CharacterTextSplitterComponent(CustomComponent): def build( self, - documents: List[Document], + inputs: List[Record], chunk_overlap: int = 200, chunk_size: int = 1000, separator: str = "\n", - ) -> List[Document]: + ) -> List[Record]: # separator may come escaped from the frontend separator = separator.encode().decode("unicode_escape") + documents = [] + for _input in inputs: + if isinstance(_input, Record): + documents.append(_input.to_lc_document()) + else: + documents.append(_input) docs = CharacterTextSplitter( chunk_overlap=chunk_overlap, chunk_size=chunk_size, separator=separator, ).split_documents(documents) - self.status = docs - return docs + records = self.to_records(docs) + self.status = records + return records diff --git a/src/backend/langflow/components/textsplitters/LanguageRecursiveTextSplitter.py b/src/backend/langflow/components/textsplitters/LanguageRecursiveTextSplitter.py index d1494f4d0..3f521e0ba 100644 --- a/src/backend/langflow/components/textsplitters/LanguageRecursiveTextSplitter.py +++ b/src/backend/langflow/components/textsplitters/LanguageRecursiveTextSplitter.py @@ -1,9 +1,9 @@ -from typing import Optional +from typing import List, Optional from langchain.text_splitter import Language -from langchain_core.documents import Document from langflow import CustomComponent +from langflow.schema.schema import Record class LanguageRecursiveTextSplitterComponent(CustomComponent): @@ -14,10 +14,7 @@ class 
LanguageRecursiveTextSplitterComponent(CustomComponent): def build_config(self): options = [x.value for x in Language] return { - "documents": { - "display_name": "Documents", - "info": "The documents to split.", - }, + "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]}, "separator_type": { "display_name": "Separator Type", "info": "The type of separator to use.", @@ -47,11 +44,11 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent): def build( self, - documents: list[Document], + inputs: List[Record], chunk_size: Optional[int] = 1000, chunk_overlap: Optional[int] = 200, separator_type: str = "Python", - ) -> list[Document]: + ) -> list[Record]: """ Split text into chunks of a specified length. @@ -77,6 +74,12 @@ class LanguageRecursiveTextSplitterComponent(CustomComponent): chunk_size=chunk_size, chunk_overlap=chunk_overlap, ) - + documents = [] + for _input in inputs: + if isinstance(_input, Record): + documents.append(_input.to_lc_document()) + else: + documents.append(_input) docs = splitter.split_documents(documents) - return docs + records = self.to_records(docs) + return records diff --git a/src/backend/langflow/components/textsplitters/RecursiveCharacterTextSplitter.py b/src/backend/langflow/components/textsplitters/RecursiveCharacterTextSplitter.py index d07ae3ebe..6b9cb865b 100644 --- a/src/backend/langflow/components/textsplitters/RecursiveCharacterTextSplitter.py +++ b/src/backend/langflow/components/textsplitters/RecursiveCharacterTextSplitter.py @@ -1,10 +1,11 @@ from typing import Optional +from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain_core.documents import Document from langflow import CustomComponent -from langflow.utils.util import build_loader_repr_from_documents -from langchain.text_splitter import RecursiveCharacterTextSplitter +from langflow.schema import Record +from langflow.utils.util import build_loader_repr_from_records class 
RecursiveCharacterTextSplitterComponent(CustomComponent): @@ -14,9 +15,10 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent): def build_config(self): return { - "documents": { - "display_name": "Documents", - "info": "The documents to split.", + "inputs": { + "display_name": "Input", + "info": "The texts to split.", + "input_types": ["Document", "Record"], }, "separators": { "display_name": "Separators", @@ -40,11 +42,11 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent): def build( self, - documents: list[Document], + inputs: list[Record], separators: Optional[list[str]] = None, chunk_size: Optional[int] = 1000, chunk_overlap: Optional[int] = 200, - ) -> list[Document]: + ) -> list[Record]: """ Split text into chunks of a specified length. @@ -75,7 +77,13 @@ class RecursiveCharacterTextSplitterComponent(CustomComponent): chunk_size=chunk_size, chunk_overlap=chunk_overlap, ) - + documents = [] + for _input in inputs: + if isinstance(_input, Record): + documents.append(_input.to_lc_document()) + else: + documents.append(_input) docs = splitter.split_documents(documents) - self.repr_value = build_loader_repr_from_documents(docs) - return docs + records = self.to_records(docs) + self.repr_value = build_loader_repr_from_records(records) + return records diff --git a/src/backend/langflow/components/utilities/GetRequest.py b/src/backend/langflow/components/utilities/GetRequest.py deleted file mode 100644 index d6ee5a44f..000000000 --- a/src/backend/langflow/components/utilities/GetRequest.py +++ /dev/null @@ -1,75 +0,0 @@ -from typing import Optional, Text - -import requests -from langchain_core.documents import Document - -from langflow import CustomComponent -from langflow.services.database.models.base import orjson_dumps - - -class GetRequest(CustomComponent): - display_name: str = "GET Request" - description: str = "Make a GET request to the given URL." 
- output_types: list[str] = ["Document"] - documentation: str = "https://docs.langflow.org/components/utilities#get-request" - beta: bool = True - field_config = { - "url": { - "display_name": "URL", - "info": "The URL to make the request to", - "is_list": True, - }, - "headers": { - "display_name": "Headers", - "info": "The headers to send with the request.", - }, - "code": {"show": False}, - "timeout": { - "display_name": "Timeout", - "field_type": "int", - "info": "The timeout to use for the request.", - "value": 5, - }, - } - - def get_document(self, session: requests.Session, url: str, headers: Optional[dict], timeout: int) -> Document: - try: - response = session.get(url, headers=headers, timeout=int(timeout)) - try: - response_json = response.json() - result = orjson_dumps(response_json, indent_2=False) - except Exception: - result = response.text - self.repr_value = result - return Document( - page_content=result, - metadata={ - "source": url, - "headers": headers, - "status_code": response.status_code, - }, - ) - except requests.Timeout: - return Document( - page_content="Request Timed Out", - metadata={"source": url, "headers": headers, "status_code": 408}, - ) - except Exception as exc: - return Document( - page_content=Text(exc), - metadata={"source": url, "headers": headers, "status_code": 500}, - ) - - def build( - self, - url: str, - headers: Optional[dict] = None, - timeout: int = 5, - ) -> list[Document]: - if headers is None: - headers = {} - urls = url if isinstance(url, list) else [url] - with requests.Session() as session: - documents = [self.get_document(session, u, headers, timeout) for u in urls] - self.repr_value = documents - return documents diff --git a/src/backend/langflow/components/utilities/IDGenerator.py b/src/backend/langflow/components/utilities/IDGenerator.py deleted file mode 100644 index ceb937a6c..000000000 --- a/src/backend/langflow/components/utilities/IDGenerator.py +++ /dev/null @@ -1,19 +0,0 @@ -import uuid -from typing 
import Text - -from langflow import CustomComponent - - -class UUIDGeneratorComponent(CustomComponent): - documentation: str = "http://docs.langflow.org/components/custom" - display_name = "Unique ID Generator" - description = "Generates a unique ID." - - def generate(self, *args, **kwargs): - return Text(uuid.uuid4().hex) - - def build_config(self): - return {"unique_id": {"display_name": "Value", "value": self.generate}} - - def build(self, unique_id: str) -> str: - return unique_id diff --git a/src/backend/langflow/components/utilities/PostRequest.py b/src/backend/langflow/components/utilities/PostRequest.py deleted file mode 100644 index befc006c8..000000000 --- a/src/backend/langflow/components/utilities/PostRequest.py +++ /dev/null @@ -1,78 +0,0 @@ -from typing import Optional, Text - -import requests -from langchain_core.documents import Document - -from langflow import CustomComponent -from langflow.services.database.models.base import orjson_dumps - - -class PostRequest(CustomComponent): - display_name: str = "POST Request" - description: str = "Make a POST request to the given URL." 
-    output_types: list[str] = ["Document"]
-    documentation: str = "https://docs.langflow.org/components/utilities#post-request"
-    beta: bool = True
-    field_config = {
-        "url": {"display_name": "URL", "info": "The URL to make the request to."},
-        "headers": {
-            "display_name": "Headers",
-            "info": "The headers to send with the request.",
-        },
-        "code": {"show": False},
-        "document": {"display_name": "Document"},
-    }
-
-    def post_document(
-        self,
-        session: requests.Session,
-        document: Document,
-        url: str,
-        headers: Optional[dict] = None,
-    ) -> Document:
-        try:
-            response = session.post(url, headers=headers, data=document.page_content)
-            try:
-                response_json = response.json()
-                result = orjson_dumps(response_json, indent_2=False)
-            except Exception:
-                result = response.text
-            self.repr_value = result
-            return Document(
-                page_content=result,
-                metadata={
-                    "source": url,
-                    "headers": headers,
-                    "status_code": response,
-                },
-            )
-        except Exception as exc:
-            return Document(
-                page_content=Text(exc),
-                metadata={
-                    "source": url,
-                    "headers": headers,
-                    "status_code": 500,
-                },
-            )
-
-    def build(
-        self,
-        document: Document,
-        url: str,
-        headers: Optional[dict] = None,
-    ) -> list[Document]:
-        if headers is None:
-            headers = {}
-
-        if not isinstance(document, list) and isinstance(document, Document):
-            documents: list[Document] = [document]
-        elif isinstance(document, list) and all(isinstance(doc, Document) for doc in document):
-            documents = document
-        else:
-            raise ValueError("document must be a Document or a list of Documents")
-
-        with requests.Session() as session:
-            documents = [self.post_document(session, doc, url, headers) for doc in documents]
-        self.repr_value = documents
-        return documents
diff --git a/src/backend/langflow/components/utilities/SharedState.py b/src/backend/langflow/components/utilities/SharedState.py
deleted file mode 100644
index 7d29da9bb..000000000
--- a/src/backend/langflow/components/utilities/SharedState.py
+++ /dev/null
@@ -1,38 +0,0 @@
-from typing import Union
-
-from langflow import CustomComponent
-from langflow.field_typing import Text
-from langflow.schema import Record
-
-
-class SharedState(CustomComponent):
-    display_name = "Shared State"
-    description = "A component to share state between components."
-
-    def build_config(self):
-        return {
-            "name": {"display_name": "Name", "info": "The name of the state."},
-            "record": {"display_name": "Record", "info": "The record to store."},
-            "append": {
-                "display_name": "Append",
-                "info": "If True, the record will be appended to the state.",
-            },
-        }
-
-    def build(
-        self, name: str, record: Union[Text, Record], append: bool = False
-    ) -> Record:
-        if append:
-            self.append_state(name, record)
-        else:
-            self.update_state(name, record)
-
-        state = self.get_state(name)
-        if not isinstance(state, Record):
-            if isinstance(state, str):
-                state = Record(text=state)
-            elif isinstance(state, dict):
-                state = Record(data=state)
-            else:
-                state = Record(text=str(state))
-        return state
diff --git a/src/backend/langflow/components/utilities/ShouldRunNext.py b/src/backend/langflow/components/utilities/ShouldRunNext.py
deleted file mode 100644
index b9ae3b048..000000000
--- a/src/backend/langflow/components/utilities/ShouldRunNext.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# Implement ShouldRunNext component
-from typing import Text
-from langchain_core.prompts import PromptTemplate
-
-from langflow import CustomComponent
-from langflow.field_typing import BaseLanguageModel, Prompt
-
-
-class ShouldRunNext(CustomComponent):
-    display_name = "Should Run Next"
-    description = "Decides whether to run the next component."
-
-    def build_config(self):
-        return {
-            "prompt": {
-                "display_name": "Prompt",
-                "info": "The prompt to use for the decision. It should generate a boolean response (True or False).",
-            },
-            "llm": {
-                "display_name": "LLM",
-                "info": "The language model to use for the decision.",
-            },
-        }
-
-    def build(self, template: Prompt, llm: BaseLanguageModel, **kwargs) -> dict:
-        # This is a simple component that always returns True
-        prompt_template = PromptTemplate.from_template(Text(template))
-
-        attributes_to_check = ["text", "page_content"]
-        for key, value in kwargs.items():
-            for attribute in attributes_to_check:
-                if hasattr(value, attribute):
-                    kwargs[key] = getattr(value, attribute)
-
-        chain = prompt_template | llm
-        result = chain.invoke(kwargs)
-        if hasattr(result, "content") and isinstance(result.content, str):
-            result = result.content
-        elif isinstance(result, str):
-            result = result
-        else:
-            result = result.get("response")
-
-        if result.lower() not in ["true", "false"]:
-            raise ValueError("The prompt should generate a boolean response (True or False).")
-        # The string should be the words true or false
-        # if not raise an error
-        bool_result = result.lower() == "true"
-        return {"condition": bool_result, "result": kwargs}
diff --git a/src/backend/langflow/components/utilities/UpdateRequest.py b/src/backend/langflow/components/utilities/UpdateRequest.py
deleted file mode 100644
index 41a57eda6..000000000
--- a/src/backend/langflow/components/utilities/UpdateRequest.py
+++ /dev/null
@@ -1,89 +0,0 @@
-from typing import List, Optional, Text
-
-import requests
-from langchain_core.documents import Document
-
-from langflow import CustomComponent
-from langflow.services.database.models.base import orjson_dumps
-
-
-class UpdateRequest(CustomComponent):
-    display_name: str = "Update Request"
-    description: str = "Make a PATCH request to the given URL."
-    output_types: list[str] = ["Document"]
-    documentation: str = "https://docs.langflow.org/components/utilities#update-request"
-    beta: bool = True
-    field_config = {
-        "url": {"display_name": "URL", "info": "The URL to make the request to."},
-        "headers": {
-            "display_name": "Headers",
-            "field_type": "NestedDict",
-            "info": "The headers to send with the request.",
-        },
-        "code": {"show": False},
-        "document": {"display_name": "Document"},
-        "method": {
-            "display_name": "Method",
-            "field_type": "str",
-            "info": "The HTTP method to use.",
-            "options": ["PATCH", "PUT"],
-            "value": "PATCH",
-        },
-    }
-
-    def update_document(
-        self,
-        session: requests.Session,
-        document: Document,
-        url: str,
-        headers: Optional[dict] = None,
-        method: str = "PATCH",
-    ) -> Document:
-        try:
-            if method == "PATCH":
-                response = session.patch(url, headers=headers, data=document.page_content)
-            elif method == "PUT":
-                response = session.put(url, headers=headers, data=document.page_content)
-            else:
-                raise ValueError(f"Unsupported method: {method}")
-            try:
-                response_json = response.json()
-                result = orjson_dumps(response_json, indent_2=False)
-            except Exception:
-                result = response.text
-            self.repr_value = result
-            return Document(
-                page_content=result,
-                metadata={
-                    "source": url,
-                    "headers": headers,
-                    "status_code": response.status_code,
-                },
-            )
-        except Exception as exc:
-            return Document(
-                page_content=Text(exc),
-                metadata={"source": url, "headers": headers, "status_code": 500},
-            )
-
-    def build(
-        self,
-        method: str,
-        document: Document,
-        url: str,
-        headers: Optional[dict] = None,
-    ) -> List[Document]:
-        if headers is None:
-            headers = {}
-
-        if not isinstance(document, list) and isinstance(document, Document):
-            documents: list[Document] = [document]
-        elif isinstance(document, list) and all(isinstance(doc, Document) for doc in document):
-            documents = document
-        else:
-            raise ValueError("document must be a Document or a list of Documents")
-
-        with requests.Session() as session:
-            documents = [self.update_document(session, doc, url, headers, method) for doc in documents]
-        self.repr_value = documents
-        return documents
diff --git a/src/backend/langflow/components/vectorstores/Chroma.py b/src/backend/langflow/components/vectorstores/Chroma.py
index b1756e777..063ff7cf3 100644
--- a/src/backend/langflow/components/vectorstores/Chroma.py
+++ b/src/backend/langflow/components/vectorstores/Chroma.py
@@ -2,11 +2,12 @@ from typing import List, Optional, Union
 
 import chromadb  # type: ignore
 from langchain.embeddings.base import Embeddings
-from langchain.schema import BaseRetriever, Document
+from langchain.schema import BaseRetriever
 from langchain_community.vectorstores import VectorStore
 from langchain_community.vectorstores.chroma import Chroma
 
 from langflow import CustomComponent
+from langflow.schema.schema import Record
 
 
 class ChromaComponent(CustomComponent):
@@ -31,7 +32,7 @@ class ChromaComponent(CustomComponent):
             "collection_name": {"display_name": "Collection Name", "value": "langflow"},
             "index_directory": {"display_name": "Persist Directory"},
             "code": {"advanced": True, "display_name": "Code"},
-            "documents": {"display_name": "Documents", "is_list": True},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
             "embedding": {"display_name": "Embedding"},
             "chroma_server_cors_allow_origins": {
                 "display_name": "Server CORS Allow Origins",
@@ -55,7 +56,7 @@ class ChromaComponent(CustomComponent):
         embedding: Embeddings,
         chroma_server_ssl_enabled: bool,
        index_directory: Optional[str] = None,
-        documents: Optional[List[Document]] = None,
+        inputs: Optional[List[Record]] = None,
         chroma_server_cors_allow_origins: Optional[str] = None,
         chroma_server_host: Optional[str] = None,
         chroma_server_port: Optional[int] = None,
@@ -97,6 +98,12 @@ class ChromaComponent(CustomComponent):
         if index_directory is not None:
             index_directory = self.resolve_path(index_directory)
 
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
         if documents is not None and embedding is not None:
             if len(documents) == 0:
                 raise ValueError("If documents are provided, there must be at least one document.")
diff --git a/src/backend/langflow/components/vectorstores/ChromaSearch.py b/src/backend/langflow/components/vectorstores/ChromaSearch.py
index 3a6d283b3..baa550472 100644
--- a/src/backend/langflow/components/vectorstores/ChromaSearch.py
+++ b/src/backend/langflow/components/vectorstores/ChromaSearch.py
@@ -35,7 +35,6 @@ class ChromaSearchComponent(LCVectorStoreComponent):
             # "persist": {"display_name": "Persist"},
             "index_directory": {"display_name": "Index Directory"},
             "code": {"show": False, "display_name": "Code"},
-            "documents": {"display_name": "Documents", "is_list": True},
             "embedding": {
                 "display_name": "Embedding",
                 "info": "Embedding model to vectorize inputs (make sure to use same as index)",
@@ -93,8 +92,7 @@ class ChromaSearchComponent(LCVectorStoreComponent):
 
         if chroma_server_host is not None:
             chroma_settings = chromadb.config.Settings(
-                chroma_server_cors_allow_origins=chroma_server_cors_allow_origins
-                or None,
+                chroma_server_cors_allow_origins=chroma_server_cors_allow_origins or None,
                 chroma_server_host=chroma_server_host,
                 chroma_server_port=chroma_server_port or None,
                 chroma_server_grpc_port=chroma_server_grpc_port or None,
diff --git a/src/backend/langflow/components/vectorstores/FAISS.py b/src/backend/langflow/components/vectorstores/FAISS.py
index a0324456e..7cdadccdb 100644
--- a/src/backend/langflow/components/vectorstores/FAISS.py
+++ b/src/backend/langflow/components/vectorstores/FAISS.py
@@ -5,7 +5,8 @@ from langchain_community.vectorstores import VectorStore
 from langchain_community.vectorstores.faiss import FAISS
 
 from langflow import CustomComponent
-from langflow.field_typing import Document, Embeddings
+from langflow.field_typing import Embeddings
+from langflow.schema.schema import Record
 
 
 class FAISSComponent(CustomComponent):
@@ -15,7 +16,7 @@ class FAISSComponent(CustomComponent):
     def build_config(self):
         return {
-            "documents": {"display_name": "Documents"},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
             "embedding": {"display_name": "Embedding"},
             "folder_path": {
                 "display_name": "Folder Path",
@@ -27,10 +28,16 @@ class FAISSComponent(CustomComponent):
     def build(
         self,
         embedding: Embeddings,
-        documents: List[Document],
+        inputs: List[Record],
         folder_path: str,
         index_name: str = "langflow_index",
     ) -> Union[VectorStore, FAISS, BaseRetriever]:
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
         vector_store = FAISS.from_documents(documents=documents, embedding=embedding)
         if not folder_path:
             raise ValueError("Folder path is required to save the FAISS index.")
diff --git a/src/backend/langflow/components/vectorstores/FAISSSearch.py b/src/backend/langflow/components/vectorstores/FAISSSearch.py
index f6ddf4f7a..27cdc606c 100644
--- a/src/backend/langflow/components/vectorstores/FAISSSearch.py
+++ b/src/backend/langflow/components/vectorstores/FAISSSearch.py
@@ -14,7 +14,6 @@ class FAISSSearchComponent(LCVectorStoreComponent):
     def build_config(self):
         return {
-            "documents": {"display_name": "Documents"},
             "embedding": {"display_name": "Embedding"},
             "folder_path": {
                 "display_name": "Folder Path",
@@ -34,9 +33,7 @@ class FAISSSearchComponent(LCVectorStoreComponent):
         if not folder_path:
             raise ValueError("Folder path is required to save the FAISS index.")
         path = self.resolve_path(folder_path)
-        vector_store = FAISS.load_local(
-            folder_path=Text(path), embeddings=embedding, index_name=index_name
-        )
+        vector_store = FAISS.load_local(folder_path=Text(path), embeddings=embedding, index_name=index_name)
         if not vector_store:
             raise ValueError("Failed to load the FAISS index.")
diff --git a/src/backend/langflow/components/vectorstores/MongoDBAtlasVector.py b/src/backend/langflow/components/vectorstores/MongoDBAtlasVector.py
index e15368f7d..f45d55584 100644
--- a/src/backend/langflow/components/vectorstores/MongoDBAtlasVector.py
+++ b/src/backend/langflow/components/vectorstores/MongoDBAtlasVector.py
@@ -3,7 +3,8 @@ from typing import List, Optional
 from langchain_community.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch
 
 from langflow import CustomComponent
-from langflow.field_typing import Document, Embeddings, NestedDict
+from langflow.field_typing import Embeddings, NestedDict
+from langflow.schema.schema import Record
 
 
 class MongoDBAtlasComponent(CustomComponent):
@@ -13,7 +14,7 @@ class MongoDBAtlasComponent(CustomComponent):
     def build_config(self):
         return {
-            "documents": {"display_name": "Documents"},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
             "embedding": {"display_name": "Embedding"},
             "collection_name": {"display_name": "Collection Name"},
             "db_name": {"display_name": "Database Name"},
@@ -25,7 +26,7 @@ class MongoDBAtlasComponent(CustomComponent):
     def build(
         self,
         embedding: Embeddings,
-        documents: List[Document],
+        inputs: List[Record],
         collection_name: str = "",
         db_name: str = "",
         index_name: str = "",
@@ -42,6 +43,12 @@ class MongoDBAtlasComponent(CustomComponent):
             collection = mongo_client[db_name][collection_name]
         except Exception as e:
             raise ValueError(f"Failed to connect to MongoDB Atlas: {e}")
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
         if documents:
             vector_store = MongoDBAtlasVectorSearch.from_documents(
                 documents=documents,
diff --git a/src/backend/langflow/components/vectorstores/Pinecone.py b/src/backend/langflow/components/vectorstores/Pinecone.py
index 54222b133..c71048266 100644
--- a/src/backend/langflow/components/vectorstores/Pinecone.py
+++ b/src/backend/langflow/components/vectorstores/Pinecone.py
@@ -7,7 +7,8 @@ from langchain_community.vectorstores import VectorStore
 from langchain_community.vectorstores.pinecone import Pinecone
 
 from langflow import CustomComponent
-from langflow.field_typing import Document, Embeddings
+from langflow.field_typing import Embeddings
+from langflow.schema.schema import Record
 
 
 class PineconeComponent(CustomComponent):
@@ -17,7 +18,7 @@ class PineconeComponent(CustomComponent):
     def build_config(self):
         return {
-            "documents": {"display_name": "Documents"},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
             "embedding": {"display_name": "Embedding"},
             "index_name": {"display_name": "Index Name"},
             "namespace": {"display_name": "Namespace"},
@@ -44,7 +45,7 @@ class PineconeComponent(CustomComponent):
         self,
         embedding: Embeddings,
         pinecone_env: str,
-        documents: List[Document],
+        inputs: List[Record],
         text_key: str = "text",
         pool_threads: int = 4,
         index_name: Optional[str] = None,
@@ -59,6 +60,12 @@ class PineconeComponent(CustomComponent):
         pinecone.init(api_key=pinecone_api_key, environment=pinecone_env)  # type: ignore
         if not index_name:
             raise ValueError("Index Name is required.")
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
         if documents:
             return Pinecone.from_documents(
                 documents=documents,
diff --git a/src/backend/langflow/components/vectorstores/Qdrant.py b/src/backend/langflow/components/vectorstores/Qdrant.py
index 23ee70b11..e1773268b 100644
--- a/src/backend/langflow/components/vectorstores/Qdrant.py
+++ b/src/backend/langflow/components/vectorstores/Qdrant.py
@@ -3,8 +3,10 @@ from typing import Optional, Union
 from langchain.schema import BaseRetriever
 from langchain_community.vectorstores import VectorStore
 from langchain_community.vectorstores.qdrant import Qdrant
+
 from langflow import CustomComponent
-from langflow.field_typing import Document, Embeddings, NestedDict
+from langflow.field_typing import Embeddings, NestedDict
+from langflow.schema.schema import Record
 
 
 class QdrantComponent(CustomComponent):
@@ -14,17 +16,23 @@ class QdrantComponent(CustomComponent):
     def build_config(self):
         return {
-            "documents": {"display_name": "Documents"},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
             "embedding": {"display_name": "Embedding"},
             "api_key": {"display_name": "API Key", "password": True, "advanced": True},
             "collection_name": {"display_name": "Collection Name"},
-            "content_payload_key": {"display_name": "Content Payload Key", "advanced": True},
+            "content_payload_key": {
+                "display_name": "Content Payload Key",
+                "advanced": True,
+            },
             "distance_func": {"display_name": "Distance Function", "advanced": True},
             "grpc_port": {"display_name": "gRPC Port", "advanced": True},
             "host": {"display_name": "Host", "advanced": True},
             "https": {"display_name": "HTTPS", "advanced": True},
             "location": {"display_name": "Location", "advanced": True},
-            "metadata_payload_key": {"display_name": "Metadata Payload Key", "advanced": True},
+            "metadata_payload_key": {
+                "display_name": "Metadata Payload Key",
+                "advanced": True,
+            },
             "path": {"display_name": "Path", "advanced": True},
             "port": {"display_name": "Port", "advanced": True},
             "prefer_grpc": {"display_name": "Prefer gRPC", "advanced": True},
@@ -38,7 +46,7 @@ class QdrantComponent(CustomComponent):
         self,
         embedding: Embeddings,
         collection_name: str,
-        documents: Optional[Document] = None,
+        inputs: Optional[Record] = None,
         api_key: Optional[str] = None,
         content_payload_key: str = "page_content",
         distance_func: str = "Cosine",
@@ -55,6 +63,12 @@ class QdrantComponent(CustomComponent):
         timeout: Optional[int] = None,
         url: Optional[str] = None,
     ) -> Union[VectorStore, Qdrant, BaseRetriever]:
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
         if documents is None:
             from qdrant_client import QdrantClient
diff --git a/src/backend/langflow/components/vectorstores/Redis.py b/src/backend/langflow/components/vectorstores/Redis.py
index b2d7e4542..599a697a0 100644
--- a/src/backend/langflow/components/vectorstores/Redis.py
+++ b/src/backend/langflow/components/vectorstores/Redis.py
@@ -3,9 +3,10 @@ from typing import Optional, Union
 from langchain.embeddings.base import Embeddings
 from langchain_community.vectorstores import VectorStore
 from langchain_community.vectorstores.redis import Redis
-from langchain_core.documents import Document
 from langchain_core.retrievers import BaseRetriever
+
 from langflow import CustomComponent
+from langflow.schema.schema import Record
 
 
 class RedisComponent(CustomComponent):
@@ -28,7 +29,7 @@ class RedisComponent(CustomComponent):
         return {
             "index_name": {"display_name": "Index Name", "value": "your_index"},
             "code": {"show": False, "display_name": "Code"},
-            "documents": {"display_name": "Documents", "is_list": True},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
             "embedding": {"display_name": "Embedding"},
             "schema": {"display_name": "Schema", "file_types": [".yaml"]},
             "redis_server_url": {
@@ -44,7 +45,7 @@ class RedisComponent(CustomComponent):
         redis_server_url: str,
         redis_index_name: str,
         schema: Optional[str] = None,
-        documents: Optional[Document] = None,
+        inputs: Optional[Record] = None,
     ) -> Union[VectorStore, BaseRetriever]:
         """
         Builds the Vector Store or BaseRetriever object.
@@ -58,7 +59,13 @@ class RedisComponent(CustomComponent):
         Returns:
         - VectorStore: The Vector Store object.
         """
-        if documents is None:
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
+        if not documents:
             if schema is None:
                 raise ValueError("If no documents are provided, a schema must be provided.")
             redis_vs = Redis.from_existing_index(
diff --git a/src/backend/langflow/components/vectorstores/RedisSearch.py b/src/backend/langflow/components/vectorstores/RedisSearch.py
index 4089d4f47..b2b420d3e 100644
--- a/src/backend/langflow/components/vectorstores/RedisSearch.py
+++ b/src/backend/langflow/components/vectorstores/RedisSearch.py
@@ -33,7 +33,6 @@ class RedisSearchComponent(RedisComponent, LCVectorStoreComponent):
             "input_value": {"display_name": "Input"},
             "index_name": {"display_name": "Index Name", "value": "your_index"},
             "code": {"show": False, "display_name": "Code"},
-            "documents": {"display_name": "Documents", "is_list": True},
             "embedding": {"display_name": "Embedding"},
             "schema": {"display_name": "Schema", "file_types": [".yaml"]},
             "redis_server_url": {
diff --git a/src/backend/langflow/components/vectorstores/SupabaseVectorStore.py b/src/backend/langflow/components/vectorstores/SupabaseVectorStore.py
index 2ec6dfabc..5d32388d9 100644
--- a/src/backend/langflow/components/vectorstores/SupabaseVectorStore.py
+++ b/src/backend/langflow/components/vectorstores/SupabaseVectorStore.py
@@ -3,10 +3,12 @@ from typing import List, Union
 from langchain.schema import BaseRetriever
 from langchain_community.vectorstores import VectorStore
 from langchain_community.vectorstores.supabase import SupabaseVectorStore
-from langflow import CustomComponent
-from langflow.field_typing import Document, Embeddings, NestedDict
 from supabase.client import Client, create_client
+
+from langflow import CustomComponent
+from langflow.field_typing import Embeddings, NestedDict
+from langflow.schema.schema import Record
+
 
 class SupabaseComponent(CustomComponent):
     display_name = "Supabase"
@@ -14,7 +16,7 @@ class SupabaseComponent(CustomComponent):
     def build_config(self):
         return {
-            "documents": {"display_name": "Documents"},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
             "embedding": {"display_name": "Embedding"},
             "query_name": {"display_name": "Query Name"},
             "search_kwargs": {"display_name": "Search Kwargs", "advanced": True},
@@ -26,7 +28,7 @@ class SupabaseComponent(CustomComponent):
     def build(
         self,
         embedding: Embeddings,
-        documents: List[Document],
+        inputs: List[Record],
         query_name: str = "",
         search_kwargs: NestedDict = {},
         supabase_service_key: str = "",
@@ -34,6 +36,12 @@ class SupabaseComponent(CustomComponent):
         table_name: str = "",
     ) -> Union[VectorStore, SupabaseVectorStore, BaseRetriever]:
         supabase: Client = create_client(supabase_url, supabase_key=supabase_service_key)
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
         return SupabaseVectorStore.from_documents(
             documents=documents,
             embedding=embedding,
diff --git a/src/backend/langflow/components/vectorstores/SupabaseVectorStoreSearch.py b/src/backend/langflow/components/vectorstores/SupabaseVectorStoreSearch.py
index 5fd4dbd18..ca8113c56 100644
--- a/src/backend/langflow/components/vectorstores/SupabaseVectorStoreSearch.py
+++ b/src/backend/langflow/components/vectorstores/SupabaseVectorStoreSearch.py
@@ -38,9 +38,7 @@ class SupabaseSearchComponent(LCVectorStoreComponent):
         supabase_url: str = "",
         table_name: str = "",
     ) -> List[Record]:
-        supabase: Client = create_client(
-            supabase_url, supabase_key=supabase_service_key
-        )
+        supabase: Client = create_client(supabase_url, supabase_key=supabase_service_key)
         vector_store = SupabaseVectorStore(
             client=supabase,
             embedding=embedding,
diff --git a/src/backend/langflow/components/vectorstores/Vectara.py b/src/backend/langflow/components/vectorstores/Vectara.py
index 0a396918c..cd25b2dd9 100644
--- a/src/backend/langflow/components/vectorstores/Vectara.py
+++ b/src/backend/langflow/components/vectorstores/Vectara.py
@@ -8,7 +8,8 @@ from langchain_community.vectorstores.vectara import Vectara
 from langchain_core.vectorstores import VectorStore
 
 from langflow import CustomComponent
-from langflow.field_typing import BaseRetriever, Document
+from langflow.field_typing import BaseRetriever
+from langflow.schema.schema import Record
 
 
 class VectaraComponent(CustomComponent):
@@ -28,8 +29,9 @@ class VectaraComponent(CustomComponent):
                 "display_name": "Vectara API Key",
                 "password": True,
             },
-            "documents": {
-                "display_name": "Documents",
+            "inputs": {
+                "display_name": "Input",
+                "input_types": ["Document", "Record"],
                 "info": "If provided, will be upserted to corpus (optional)",
             },
             "files_url": {
@@ -44,11 +46,18 @@ class VectaraComponent(CustomComponent):
         vectara_corpus_id: str,
         vectara_api_key: str,
         files_url: Optional[List[str]] = None,
-        documents: Optional[Document] = None,
+        inputs: Optional[Record] = None,
     ) -> Union[VectorStore, BaseRetriever]:
         source = "Langflow"
-        if documents is not None:
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
+
+        if documents:
             return Vectara.from_documents(
                 documents=documents,  # type: ignore
                 embedding=FakeEmbeddings(size=768),
diff --git a/src/backend/langflow/components/vectorstores/VectaraSearch.py b/src/backend/langflow/components/vectorstores/VectaraSearch.py
index ae2d442be..3220d1561 100644
--- a/src/backend/langflow/components/vectorstores/VectaraSearch.py
+++ b/src/backend/langflow/components/vectorstores/VectaraSearch.py
@@ -11,9 +11,7 @@ from langflow.schema import Record
 class VectaraSearchComponent(VectaraComponent, LCVectorStoreComponent):
     display_name: str = "Vectara Search"
     description: str = "Search a Vectara Vector Store for similar documents."
-    documentation = (
-        "https://python.langchain.com/docs/integrations/vectorstores/vectara"
-    )
+    documentation = "https://python.langchain.com/docs/integrations/vectorstores/vectara"
     beta = True
     icon = "Vectara"
@@ -33,10 +31,6 @@ class VectaraSearchComponent(VectaraComponent, LCVectorStoreComponent):
                 "display_name": "Vectara API Key",
                 "password": True,
             },
-            "documents": {
-                "display_name": "Documents",
-                "info": "If provided, will be upserted to corpus (optional)",
-            },
             "files_url": {
                 "display_name": "Files Url",
                 "info": "Make vectara object using url of files (optional)",
diff --git a/src/backend/langflow/components/vectorstores/Weaviate.py b/src/backend/langflow/components/vectorstores/Weaviate.py
index 3d804255a..8bc46d17b 100644
--- a/src/backend/langflow/components/vectorstores/Weaviate.py
+++ b/src/backend/langflow/components/vectorstores/Weaviate.py
@@ -2,10 +2,11 @@ from typing import Optional, Union
 
 import weaviate  # type: ignore
 from langchain.embeddings.base import Embeddings
-from langchain.schema import BaseRetriever, Document
+from langchain.schema import BaseRetriever
 from langchain_community.vectorstores import VectorStore, Weaviate
 
 from langflow import CustomComponent
+from langflow.schema.schema import Record
 
 
 class WeaviateVectorStoreComponent(CustomComponent):
@@ -30,7 +31,7 @@ class WeaviateVectorStoreComponent(CustomComponent):
                 "advanced": True,
                 "value": "text",
             },
-            "documents": {"display_name": "Documents", "is_list": True},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
             "embedding": {"display_name": "Embedding"},
             "attributes": {
                 "display_name": "Attributes",
@@ -55,7 +56,7 @@ class WeaviateVectorStoreComponent(CustomComponent):
         index_name: Optional[str] = None,
         text_key: str = "text",
         embedding: Optional[Embeddings] = None,
-        documents: Optional[Document] = None,
+        inputs: Optional[Record] = None,
         attributes: Optional[list] = None,
     ) -> Union[VectorStore, BaseRetriever]:
         if api_key:
@@ -78,8 +79,14 @@ class WeaviateVectorStoreComponent(CustomComponent):
             return pascal_case_word
 
         index_name = _to_pascal_case(index_name) if index_name else None
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
 
-        if documents is not None and embedding is not None:
+        if documents and embedding is not None:
             return Weaviate.from_documents(
                 client=client,
                 index_name=index_name,
diff --git a/src/backend/langflow/components/vectorstores/WeaviateSearch.py b/src/backend/langflow/components/vectorstores/WeaviateSearch.py
index 6eee202c9..5713ca26f 100644
--- a/src/backend/langflow/components/vectorstores/WeaviateSearch.py
+++ b/src/backend/langflow/components/vectorstores/WeaviateSearch.py
@@ -11,9 +11,7 @@ from langflow.schema import Record
 class WeaviateSearchVectorStore(WeaviateVectorStoreComponent, LCVectorStoreComponent):
     display_name: str = "Weaviate Search"
     description: str = "Search a Weaviate Vector Store for similar documents."
-    documentation = (
-        "https://python.langchain.com/docs/integrations/vectorstores/weaviate"
-    )
+    documentation = "https://python.langchain.com/docs/integrations/vectorstores/weaviate"
     beta = True
     icon = "Weaviate"
@@ -39,7 +37,6 @@ class WeaviateSearchVectorStore(WeaviateVectorStoreComponent, LCVectorStoreCompo
                 "advanced": True,
                 "value": "text",
             },
-            "documents": {"display_name": "Documents", "is_list": True},
             "embedding": {"display_name": "Embedding"},
             "attributes": {
                 "display_name": "Attributes",
diff --git a/src/backend/langflow/components/vectorstores/pgvector.py b/src/backend/langflow/components/vectorstores/pgvector.py
index 2baf6dae6..7ab20b8df 100644
--- a/src/backend/langflow/components/vectorstores/pgvector.py
+++ b/src/backend/langflow/components/vectorstores/pgvector.py
@@ -3,9 +3,10 @@ from typing import Optional, Union
 from langchain.embeddings.base import Embeddings
 from langchain_community.vectorstores import VectorStore
 from langchain_community.vectorstores.pgvector import PGVector
-from langchain_core.documents import Document
 from langchain_core.retrievers import BaseRetriever
+
 from langflow import CustomComponent
+from langflow.schema.schema import Record
 
 
 class PGVectorComponent(CustomComponent):
@@ -26,7 +27,7 @@ class PGVectorComponent(CustomComponent):
         """
         return {
             "code": {"show": False},
-            "documents": {"display_name": "Documents", "is_list": True},
+            "inputs": {"display_name": "Input", "input_types": ["Document", "Record"]},
             "embedding": {"display_name": "Embedding"},
             "pg_server_url": {
                 "display_name": "PostgreSQL Server Connection String",
@@ -40,7 +41,7 @@ class PGVectorComponent(CustomComponent):
         embedding: Embeddings,
         pg_server_url: str,
         collection_name: str,
-        documents: Optional[Document] = None,
+        inputs: Optional[Record] = None,
     ) -> Union[VectorStore, BaseRetriever]:
         """
         Builds the Vector Store or BaseRetriever object.
@@ -55,6 +56,12 @@ class PGVectorComponent(CustomComponent):
         - VectorStore: The Vector Store object.
         """
+        documents = []
+        for _input in inputs:
+            if isinstance(_input, Record):
+                documents.append(_input.to_lc_document())
+            else:
+                documents.append(_input)
         try:
             if documents is None:
                 vector_store = PGVector.from_existing_index(
diff --git a/src/backend/langflow/components/vectorstores/pgvectorSearch.py b/src/backend/langflow/components/vectorstores/pgvectorSearch.py
index f40e5ed26..04666fe74 100644
--- a/src/backend/langflow/components/vectorstores/pgvectorSearch.py
+++ b/src/backend/langflow/components/vectorstores/pgvectorSearch.py
@@ -15,9 +15,7 @@ class PGVectorSearchComponent(PGVectorComponent, LCVectorStoreComponent):
     display_name: str = "PGVector Search"
     description: str = "Search a PGVector Store for similar documents."
-    documentation = (
-        "https://python.langchain.com/docs/integrations/vectorstores/pgvector"
-    )
+    documentation = "https://python.langchain.com/docs/integrations/vectorstores/pgvector"
 
     def build_config(self):
         """
diff --git a/src/backend/langflow/graph/edge/base.py b/src/backend/langflow/graph/edge/base.py
index c49ec714c..53c4892f5 100644
--- a/src/backend/langflow/graph/edge/base.py
+++ b/src/backend/langflow/graph/edge/base.py
@@ -12,9 +12,7 @@ if TYPE_CHECKING:
 
 class SourceHandle(BaseModel):
-    baseClasses: List[str] = Field(
-        ..., description="List of base classes for the source handle."
-    )
+    baseClasses: List[str] = Field(..., description="List of base classes for the source handle.")
     dataType: str = Field(..., description="Data type for the source handle.")
     id: str = Field(..., description="Unique identifier for the source handle.")
@@ -22,9 +20,7 @@ class SourceHandle(BaseModel):
 class TargetHandle(BaseModel):
     fieldName: str = Field(..., description="Field name for the target handle.")
     id: str = Field(..., description="Unique identifier for the target handle.")
-    inputTypes: Optional[List[str]] = Field(
-        None, description="List of input types for the target handle."
-    )
+    inputTypes: Optional[List[str]] = Field(None, description="List of input types for the target handle.")
     type: str = Field(..., description="Type of the target handle.")
@@ -53,24 +49,16 @@ class Edge:
     def validate_handles(self, source, target) -> None:
         if self.target_handle.inputTypes is None:
-            self.valid_handles = (
-                self.target_handle.type in self.source_handle.baseClasses
-            )
+            self.valid_handles = self.target_handle.type in self.source_handle.baseClasses
         else:
             self.valid_handles = (
-                any(
-                    baseClass in self.target_handle.inputTypes
-                    for baseClass in self.source_handle.baseClasses
-                )
+                any(baseClass in self.target_handle.inputTypes for baseClass in self.source_handle.baseClasses)
                 or self.target_handle.type in self.source_handle.baseClasses
             )
         if not self.valid_handles:
             logger.debug(self.source_handle)
             logger.debug(self.target_handle)
-            raise ValueError(
-                f"Edge between {source.vertex_type} and {target.vertex_type} "
-                f"has invalid handles"
-            )
+            raise ValueError(f"Edge between {source.vertex_type} and {target.vertex_type} " f"has invalid handles")
 
     def __setstate__(self, state):
         self.source_id = state["source_id"]
@@ -87,11 +75,7 @@ class Edge:
         # Both lists contain strings and sometimes a string contains the value we are
         # looking for e.g. comgin_out=["Chain"] and target_reqs=["LLMChain"]
         # so we need to check if any of the strings in source_types is in target_reqs
-        self.valid = any(
-            output in target_req
-            for output in self.source_types
-            for target_req in self.target_reqs
-        )
+        self.valid = any(output in target_req for output in self.source_types for target_req in self.target_reqs)
 
         # Get what type of input the target node is expecting
         self.matched_type = next(
@@ -102,10 +86,7 @@ class Edge:
         if no_matched_type:
             logger.debug(self.source_types)
             logger.debug(self.target_reqs)
-            raise ValueError(
-                f"Edge between {source.vertex_type} and {target.vertex_type} "
-                f"has no matched type"
-            )
+            raise ValueError(f"Edge between {source.vertex_type} and {target.vertex_type} " f"has no matched type")
 
     def __repr__(self) -> str:
         return (
@@ -118,10 +99,7 @@
     def __eq__(self, __o: object) -> bool:
         # Create a better way to compare edges
-        return (
-            self._source_handle == __o._source_handle
-            and self._target_handle == __o._target_handle
-        )
+        return self._source_handle == __o._source_handle and self._target_handle == __o._target_handle
 
 
 class ContractEdge(Edge):
@@ -178,9 +156,7 @@ class ContractEdge(Edge):
         return f"{self.source_id} -[{self.target_param}]-> {self.target_id}"
 
 
-def log_transaction(
-    edge: ContractEdge, source: "Vertex", target: "Vertex", status, error=None
-):
+def log_transaction(edge: ContractEdge, source: "Vertex", target: "Vertex", status, error=None):
     try:
         monitor_service = get_monitor_service()
         clean_params = build_clean_params(target)
diff --git a/src/backend/langflow/graph/graph/base.py b/src/backend/langflow/graph/graph/base.py
index b52776cd0..f22795927 100644
--- a/src/backend/langflow/graph/graph/base.py
+++ b/src/backend/langflow/graph/graph/base.py
@@ -17,9 +17,11 @@ from langflow.graph.vertex.types import (
     FileToolVertex,
     LLMVertex,
     RoutingVertex,
+    StateVertex,
     ToolkitVertex,
 )
 from langflow.interface.tools.constants import FILE_TOOLS
+from langflow.schema import Record
 from langflow.utils import payload
 
 if TYPE_CHECKING:
@@ -43,9 +45,10 @@ class Graph:
         self.flow_id = flow_id
         self._is_input_vertices: List[str] = []
         self._is_output_vertices: List[str] = []
+        self._is_state_vertices: List[str] = []
         self._has_session_id_vertices: List[str] = []
         self._sorted_vertices_layers: List[List[str]] = []
-        self.run_id = None
+        self._run_id = None
         self.top_level_vertices = []
 
         for vertex in self._vertices:
@@ -55,6 +58,8 @@ class Graph:
         self._vertices = self._graph_data["nodes"]
         self._edges = self._graph_data["edges"]
+        self.inactivated_vertices: set = set()
+        self.activated_vertices: List[str] = []
         self.vertices_layers = []
         self.vertices_to_run = set()
         self.stop_vertex = None
@@ -67,13 +72,66 @@ class Graph:
         self.define_vertices_lists()
         self.state_manager = GraphStateManager()
 
+    def get_state(self, name: str) -> Optional[Record]:
+        """Returns the state of the graph."""
+        return self.state_manager.get_state(name, run_id=self._run_id)
+
+    def update_state(
+        self, name: str, record: Union[str, Record], caller: Optional[str] = None
+    ) -> None:
+        """Updates the state of the graph."""
+        if caller:
+            # If there is a caller which is a vertex_id, I want to activate
+            # all StateVertex in self.vertices that are not the caller
+            # essentially notifying all the other vertices that the state has changed
+            # This also has to activate their successors
+            self.activate_state_vertices(name, caller)
+
+        self.state_manager.update_state(name, record, run_id=self._run_id)
+
+    def activate_state_vertices(self, name: str, caller: str):
+        vertices_ids = []
+        for vertex_id in self._is_state_vertices:
+            if vertex_id == caller:
+                continue
+            vertex = self.get_vertex(vertex_id)
+            if (
+                isinstance(vertex._raw_params["name"], str)
+                and name in vertex._raw_params["name"]
+                and vertex_id != caller
+                and isinstance(vertex, StateVertex)
+            ):
+                vertices_ids.append(vertex_id)
+                successors = self.get_all_successors(vertex, flat=True)
+                self.vertices_to_run.update(list(map(lambda x: x.id, successors)))
+        self.activated_vertices = vertices_ids
+        self.vertices_to_run.update(vertices_ids)
+
+    def reset_activated_vertices(self):
+        self.activated_vertices = []
+
+    def append_state(
+        self, name: str, record: Union[str, Record], caller: Optional[str] = None
+    ) -> None:
+        """Appends the state of the graph."""
+        if caller:
+            self.activate_state_vertices(name, caller)
+
+        self.state_manager.append_state(name, record, run_id=self._run_id)
+
+    @property
+    def run_id(self):
+        if not self._run_id:
+            raise ValueError("Run ID not set")
+        return self._run_id
+
     def set_run_id(self, run_id: str):
         for vertex in self.vertices:
             self.state_manager.subscribe(run_id, vertex.update_graph_state)
-        self.run_id = run_id
+        self._run_id = run_id
 
     def add_state(self, state: str):
-        self.state_manager.append_state(self.run_id, state)
+        self.state_manager.append_state(self._run_id, state)
 
     @property
     def sorted_vertices_layers(self) -> List[List[str]]:
@@ -85,26 +143,44 @@ class Graph:
         """
         Defines the lists of vertices that are inputs, outputs, and have session_id.
""" - attributes = ["is_input", "is_output", "has_session_id"] + attributes = ["is_input", "is_output", "has_session_id", "is_state"] for vertex in self.vertices: for attribute in attributes: if getattr(vertex, attribute): getattr(self, f"_{attribute}_vertices").append(vertex.id) - async def _run(self, inputs: Dict[str, str], stream: bool) -> List[Optional["ResultData"]]: + async def _run( + self, + inputs: Dict[str, str], + input_components: list[str], + outputs: list[str], + stream: bool, + session_id: str, + ) -> List[Optional["ResultData"]]: """Runs the graph with the given inputs.""" for vertex_id in self._is_input_vertices: vertex = self.get_vertex(vertex_id) + if input_components and ( + vertex_id not in input_components + or vertex.display_name not in input_components + ): + continue if vertex is None: raise ValueError(f"Vertex {vertex_id} not found") - vertex.update_raw_params(inputs) + vertex.update_raw_params(inputs, overwrite=True) + # Update all the vertices with the session_id + for vertex_id in self._has_session_id_vertices: + vertex = self.get_vertex(vertex_id) + if vertex is None: + raise ValueError(f"Vertex {vertex_id} not found") + vertex.update_raw_params({"session_id": session_id}) try: await self.process() self.increment_run_count() except Exception as exc: logger.exception(exc) raise ValueError(f"Error running graph: {exc}") from exc - outputs = [] + vertex_outputs = [] for vertex_id in self._is_output_vertices: vertex = self.get_vertex(vertex_id) if vertex is None: @@ -116,25 +192,38 @@ class Graph: and hasattr(vertex, "consume_async_generator") ): await vertex.consume_async_generator() - outputs.append(vertex.result) - return outputs + if not outputs or (vertex.display_name in outputs or vertex.id in outputs): + vertex_outputs.append(vertex.result) + return vertex_outputs - async def run(self, inputs: Dict[str, Union[str, list[str]]], stream: bool) -> List[Optional["ResultData"]]: + async def run( + self, + inputs: Dict[str, Union[str, 
list[str]]], + outputs: list[str], + session_id: str, + stream: Optional[bool] = False, + ) -> List[Optional["ResultData"]]: """Runs the graph with the given inputs.""" # inputs is {"message": "Hello, world!"} # we need to go through self.inputs and update the self._raw_params # of the vertices that are inputs # if the value is a list, we need to run multiple times - outputs = [] + vertex_outputs = [] inputs_values = inputs.get(INPUT_FIELD_NAME, "") if not isinstance(inputs_values, list): inputs_values = [inputs_values] for input_value in inputs_values: - run_outputs = await self._run({INPUT_FIELD_NAME: input_value}, stream=stream) + run_outputs = await self._run( + inputs={INPUT_FIELD_NAME: input_value}, + input_components=inputs.get("components", []), + outputs=outputs, + stream=stream, + session_id=session_id, + ) logger.debug(f"Run outputs: {run_outputs}") - outputs.extend(run_outputs) - return outputs + vertex_outputs.append(run_outputs) + return vertex_outputs # vertices_layers is a list of lists ordered by the order the vertices # should be built. @@ -149,7 +238,7 @@ class Graph: return { "runs": self._runs, "updates": self._updates, - "inactive_vertices": self.inactive_vertices, + "inactivated_vertices": self.inactivated_vertices, } def build_graph_maps(self): @@ -157,8 +246,8 @@ class Graph: self.in_degree_map = self.build_in_degree() self.parent_child_map = self.build_parent_child_map() - def reset_inactive_vertices(self): - self.inactive_vertices = set() + def reset_inactivated_vertices(self): + self.inactivated_vertices = set() def mark_all_vertices(self, state: str): """Marks all vertices in the graph.""" @@ -228,9 +317,12 @@ class Graph: return cls(vertices, edges, flow_id) except KeyError as exc: logger.exception(exc) - raise ValueError( - f"Invalid payload. Expected keys 'nodes' and 'edges'. 
Found {list(payload.keys())}" - ) from exc + if "nodes" not in payload and "edges" not in payload: + logger.exception(exc) + raise ValueError( + f"Invalid payload. Expected keys 'nodes' and 'edges'. Found {list(payload.keys())}" + ) from exc + raise ValueError(f"Error while creating graph from payload: {exc}") from exc def __eq__(self, other: object) -> bool: if not isinstance(other, Graph): @@ -329,9 +421,9 @@ class Graph: vertex.params = {} vertex._build_params() vertex.graph = self - # If the vertex is pinned, we don't want + # If the vertex is frozen, we don't want # to reset the results nor the _built attribute - if not vertex.pinned: + if not vertex.frozen: vertex._built = False vertex.result = None vertex.artifacts = {} @@ -344,7 +436,7 @@ class Graph: for vid in [edge.source_id, edge.target_id]: if vid in self.vertex_map: _vertex = self.vertex_map[vid] - if not _vertex.pinned: + if not _vertex.frozen: _vertex._build_params() def _add_vertex(self, vertex: Vertex) -> None: @@ -543,6 +635,43 @@ class Graph: """Returns the predecessors of a vertex.""" return [self.get_vertex(source_id) for source_id in self.predecessor_map.get(vertex.id, [])] + def get_all_successors(self, vertex, recursive=True, flat=True): + # Recursively get the successors of the current vertex + # successors = vertex.successors + # if not successors: + # return [] + # successors_result = [] + # for successor in successors: + # # Just return a list of successors + # if recursive: + # next_successors = self.get_all_successors(successor) + # successors_result.extend(next_successors) + # successors_result.append(successor) + # return successors_result + # The above is the version without the flat parameter + # The below is the version with the flat parameter + # the flat parameter will define if each layer of successors + # becomes one list or if the result is a list of lists + # if flat is True, the result will be a list of vertices + # if flat is False, the result will be a list of lists of 
vertices + # each list will represent a layer of successors + successors = vertex.successors + if not successors: + return [] + successors_result = [] + for successor in successors: + if recursive: + next_successors = self.get_all_successors(successor) + if flat: + successors_result.extend(next_successors) + else: + successors_result.append(next_successors) + if flat: + successors_result.append(successor) + else: + successors_result.append([successor]) + return successors_result + def get_successors(self, vertex): """Returns the successors of a vertex.""" return [self.get_vertex(target_id) for target_id in self.successor_map.get(vertex.id, [])] @@ -574,14 +703,21 @@ class Graph: # if we can't find a vertex, we raise an error edges: List[ContractEdge] = [] + edges_added = set() for edge in self._edges: source = self.get_vertex(edge["source"]) target = self.get_vertex(edge["target"]) + if source is None: raise ValueError(f"Source vertex {edge['source']} not found") if target is None: raise ValueError(f"Target vertex {edge['target']} not found") + + if (source.id, target.id) in edges_added: + continue + edges.append(ContractEdge(source, target, edge)) + edges_added.add((source.id, target.id)) return edges def _get_vertex_class(self, node_type: str, node_base_type: str, node_id: str) -> Type[Vertex]: @@ -592,6 +728,8 @@ class Graph: return ChatVertex elif node_name in ["ShouldRunNext"]: return RoutingVertex + elif node_name in ["SharedState", "Notify", "GetNotified"]: + return StateVertex elif node_base_type in lazy_load_vertex_dict.VERTEX_TYPE_MAP: return lazy_load_vertex_dict.VERTEX_TYPE_MAP[node_base_type] elif node_name in lazy_load_vertex_dict.VERTEX_TYPE_MAP: @@ -693,19 +831,28 @@ class Graph: def layered_topological_sort( self, vertices: List[Vertex], + filter_graphs: bool = False, ) -> List[List[str]]: """Performs a layered topological sort of the vertices in the graph.""" vertices_ids = {vertex.id for vertex in vertices} # Queue for vertices with no incoming 
edges - queue = deque(vertex.id for vertex in vertices if self.in_degree_map[vertex.id] == 0) + queue = deque( + vertex.id + for vertex in vertices + # if filter_graphs then only vertex.is_input will be considered + if self.in_degree_map[vertex.id] == 0 + and (not filter_graphs or vertex.is_input) + ) layers: List[List[str]] = [] - + visited = set(queue) current_layer = 0 while queue: layers.append([]) # Start a new layer layer_size = len(queue) for _ in range(layer_size): vertex_id = queue.popleft() + visited.add(vertex_id) + layers[current_layer].append(vertex_id) for neighbor in self.successor_map[vertex_id]: # only vertices in `vertices_ids` should be considered @@ -716,8 +863,16 @@ class Graph: continue self.in_degree_map[neighbor] -= 1 # 'remove' edge - if self.in_degree_map[neighbor] == 0: + if self.in_degree_map[neighbor] == 0 and neighbor not in visited: queue.append(neighbor) + + # if > 0 it might mean not all predecessors have added to the queue + # so we should process the neighbors predecessors + elif self.in_degree_map[neighbor] > 0: + for predecessor in self.predecessor_map[neighbor]: + if predecessor not in queue and predecessor not in visited: + queue.append(predecessor) + current_layer += 1 # Next layer new_layers = self.refine_layers(layers) return new_layers @@ -787,9 +942,12 @@ class Graph: vertices = self.sort_up_to_vertex(stop_component_id) elif start_component_id: vertices = self.sort_up_to_vertex(start_component_id, is_start=True) - else: vertices = self.vertices + # without component_id we are probably running in the chat + # so we want to pick only graphs that start with ChatInput or + # TextInput + vertices_layers = self.layered_topological_sort(vertices) vertices_layers = self.sort_by_avg_build_time(vertices_layers) # vertices_layers = self.sort_chat_inputs_first(vertices_layers) @@ -799,7 +957,7 @@ class Graph: # save the only the rest self.vertices_layers = vertices_layers[1:] self.vertices_to_run = { - vertex for vertex in 
chain.from_iterable(vertices_layers) + vertex_id for vertex_id in chain.from_iterable(vertices_layers) } # Return just the first layer return first_layer diff --git a/src/backend/langflow/graph/graph/constants.py b/src/backend/langflow/graph/graph/constants.py index 2badbf0eb..1a203849c 100644 --- a/src/backend/langflow/graph/graph/constants.py +++ b/src/backend/langflow/graph/graph/constants.py @@ -5,7 +5,6 @@ from langflow.interface.document_loaders.base import documentloader_creator from langflow.interface.embeddings.base import embedding_creator from langflow.interface.memories.base import memory_creator from langflow.interface.output_parsers.base import output_parser_creator -from langflow.interface.prompts.base import prompt_creator from langflow.interface.retrievers.base import retriever_creator from langflow.interface.text_splitters.base import textsplitter_creator from langflow.interface.toolkits.base import toolkits_creator @@ -34,7 +33,7 @@ class VertexTypesDict(LazyLoadDictBase): def get_type_dict(self): return { - **{t: types.PromptVertex for t in prompt_creator.to_list()}, + # **{t: types.PromptVertex for t in prompt_creator.to_list()}, **{t: types.AgentVertex for t in agent_creator.to_list()}, # **{t: types.ChainVertex for t in chain_creator.to_list()}, **{t: types.ToolVertex for t in tool_creator.to_list()}, diff --git a/src/backend/langflow/graph/graph/state_manager.py b/src/backend/langflow/graph/graph/state_manager.py index 64476011b..ed5844d87 100644 --- a/src/backend/langflow/graph/graph/state_manager.py +++ b/src/backend/langflow/graph/graph/state_manager.py @@ -2,6 +2,8 @@ from collections import defaultdict from threading import Lock from typing import Callable +from loguru import logger + class GraphStateManager: def __init__(self): @@ -9,21 +11,29 @@ class GraphStateManager: self.observers = defaultdict(list) self.lock = Lock() - def append_state(self, key, new_state): + def append_state(self, key, new_state, run_id: str): with self.lock: 
- if key not in self.states: - self.states[key] = [] - self.states[key].append(new_state) + if run_id not in self.states: + self.states[run_id] = {} + if key not in self.states[run_id]: + self.states[run_id][key] = [] + elif not isinstance(self.states[run_id][key], list): + self.states[run_id][key] = [self.states[run_id][key]] + self.states[run_id][key].append(new_state) self.notify_append_observers(key, new_state) - def update_state(self, key, new_state): + def update_state(self, key, new_state, run_id: str): with self.lock: - self.states[key] = new_state + if run_id not in self.states: + self.states[run_id] = {} + if key not in self.states[run_id]: + self.states[run_id][key] = {} + self.states[run_id][key] = new_state self.notify_observers(key, new_state) - def get_state(self, key): + def get_state(self, key, run_id: str): with self.lock: - return self.states.get(key, None) + return self.states.get(run_id, {}).get(key, "") def subscribe(self, key, observer: Callable): with self.lock: @@ -36,4 +46,8 @@ def notify_append_observers(self, key, new_state): for callback in self.observers[key]: - callback(key, new_state, append=True) + try: + callback(key, new_state, append=True) + except Exception as e: + logger.error(f"Error in observer {callback} for key {key}: {e}") + logger.warning("Callbacks not implemented yet") diff --git a/src/backend/langflow/graph/vertex/base.py b/src/backend/langflow/graph/vertex/base.py index cf35c7eb7..24d010f75 100644 --- a/src/backend/langflow/graph/vertex/base.py +++ b/src/backend/langflow/graph/vertex/base.py @@ -18,6 +18,7 @@ from loguru import logger from langflow.graph.schema import ( INPUT_COMPONENTS, + INPUT_FIELD_NAME, OUTPUT_COMPONENTS, InterfaceComponentTypes, ResultData, @@ -58,8 +59,14 @@ class Vertex: self.will_stream = False self.updated_raw_params = False self.id: str = data["id"] - self.is_input = any(input_component_name in self.id for input_component_name in INPUT_COMPONENTS) - self.is_output = 
any(output_component_name in self.id for output_component_name in OUTPUT_COMPONENTS) + self.is_state = False + self.is_input = any( + input_component_name in self.id for input_component_name in INPUT_COMPONENTS + ) + self.is_output = any( + output_component_name in self.id + for output_component_name in OUTPUT_COMPONENTS + ) self.has_session_id = None self._custom_component = None self.has_external_input = False @@ -94,21 +101,21 @@ class Vertex: def update_graph_state(self, key, new_state, append: bool): if append: - if key in self.graph_state: - self.graph_state[key].append(new_state) - else: - self.graph_state[key] = [new_state] + self.graph.append_state(key, new_state, caller=self.id) else: - self.graph_state[key] = new_state + self.graph.update_state(key, new_state, caller=self.id) def set_state(self, state: str): self.state = VertexStates[state] if self.state == VertexStates.INACTIVE and self.graph.in_degree_map[self.id] < 2: # If the vertex is inactive and has only one in degree # it means that it is not a merge point in the graph - self.graph.inactive_vertices.add(self.id) - elif self.state == VertexStates.ACTIVE and self.id in self.graph.inactive_vertices: - self.graph.inactive_vertices.remove(self.id) + self.graph.inactivated_vertices.add(self.id) + elif ( + self.state == VertexStates.ACTIVE + and self.id in self.graph.inactivated_vertices + ): + self.graph.inactivated_vertices.remove(self.id) @property def avg_build_time(self): @@ -176,7 +183,7 @@ class Vertex: self.base_type = state["base_type"] self.is_task = state["is_task"] self.id = state["id"] - self.pinned = state.get("pinned", False) + self.frozen = state.get("frozen", False) self._parse_data() if "_built_object" in state: self._built_object = state["_built_object"] @@ -202,7 +209,7 @@ class Vertex: self.data = self._data["data"] self.output = self.data["node"]["base_classes"] self.display_name = self.data["node"].get("display_name", self.id.split("-")[0]) - self.pinned = 
self.data["node"].get("pinned", False) + self.frozen = self.data["node"].get("frozen", False) self.selected_output_type = self.data["node"].get("selected_output_type") self.is_input = self.data["node"].get("is_input") or self.is_input self.is_output = self.data["node"].get("is_output") or self.is_output @@ -282,7 +289,16 @@ class Vertex: params[param_key] = [] params[param_key].append(self.graph.get_vertex(edge.source_id)) elif edge.target_id == self.id: - params[param_key] = self.graph.get_vertex(edge.source_id) + if isinstance(template_dict[param_key].get("value"), dict): + # we don't know the key of the dict but we need to set the value + # to the vertex that is the source of the edge + param_dict = template_dict[param_key]["value"] + params[param_key] = { + key: self.graph.get_vertex(edge.source_id) + for key in param_dict.keys() + } + else: + params[param_key] = self.graph.get_vertex(edge.source_id) for key, value in template_dict.items(): if key in params: @@ -348,7 +364,7 @@ class Vertex: self.params = params self._raw_params = params.copy() - def update_raw_params(self, new_params: Dict[str, str]): + def update_raw_params(self, new_params: Dict[str, str], overwrite: bool = False): """ Update the raw parameters of the vertex with the given new parameters. 
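
The `overwrite` flag added to `update_raw_params` in this diff changes which incoming keys survive: without it, keys that are not already present in the raw params are silently dropped; with `overwrite=True`, new keys may be introduced as well. A minimal standalone sketch of that filtering, as a toy function over plain dicts rather than the real `Vertex` method:

```python
def update_raw_params(raw_params: dict, new_params: dict, overwrite: bool = False) -> dict:
    """Toy sketch of the overwrite semantics added in this diff.

    Without overwrite, only keys already present in raw_params are updated;
    with overwrite=True, unknown keys are accepted too.
    (Standalone illustration, not the actual Vertex implementation.)
    """
    new_params = dict(new_params)  # copy so we can drop keys safely
    if not overwrite:
        for key in list(new_params):
            if key not in raw_params:
                new_params.pop(key)
    raw_params.update(new_params)
    return raw_params
```

This mirrors why `_run` now calls `vertex.update_raw_params(inputs, overwrite=True)` for input vertices: chat inputs must be able to introduce fresh keys, while ordinary refreshes should not pollute a vertex's params.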
@@ -363,6 +379,10 @@ class Vertex: return if any(isinstance(self._raw_params.get(key), Vertex) for key in new_params): return + if not overwrite: + for key in new_params.copy(): + if key not in self._raw_params: + new_params.pop(key) self._raw_params.update(new_params) self.updated_raw_params = True @@ -452,9 +472,24 @@ class Vertex: await self._build_node_and_update_params(key, value, user_id) elif isinstance(value, list) and self._is_list_of_nodes(value): await self._build_list_of_nodes_and_update_params(key, value, user_id) + elif isinstance(value, dict): + await self._build_dict_and_update_params(key, value, user_id) elif key not in self.params or self.updated_raw_params: self.params[key] = value + async def _build_dict_and_update_params( + self, key, nodes_dict: Dict[str, "Vertex"], user_id=None + ): + """ + Iterates over a dictionary of nodes, builds each and updates the params dictionary. + """ + for sub_key, value in nodes_dict.items(): + if not self._is_node(value): + self.params[key][sub_key] = value + else: + built = await value.get_result(requester=self, user_id=user_id) + self.params[key][sub_key] = built + def _is_node(self, value): """ Checks if the provided value is an instance of Vertex. 
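
The `_build_dict_and_update_params` hunk above resolves dict-valued params whose values may themselves be unbuilt vertices: node-like values are awaited into their results, plain values pass through. A hedged sketch of that pattern, using stand-in `ToyVertex` and `is_node` helpers rather than Langflow's own classes:

```python
import asyncio

class ToyVertex:
    """Stand-in for an unbuilt vertex whose result is produced asynchronously."""

    def __init__(self, result):
        self._result = result

    async def get_result(self):
        return self._result

def is_node(value) -> bool:
    # The real code checks isinstance(value, Vertex); this is the toy analogue.
    return isinstance(value, ToyVertex)

async def build_dict_param(nodes_dict: dict) -> dict:
    """Resolve each dict entry: build node-like values, keep plain values."""
    built = {}
    for sub_key, value in nodes_dict.items():
        built[sub_key] = await value.get_result() if is_node(value) else value
    return built
```

The real method mutates `self.params[key][sub_key]` in place instead of returning a new dict, but the per-entry dispatch is the same.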
@@ -517,10 +552,14 @@ class Vertex: self.params[key].extend(built) else: try: + if self.params[key] == built: + continue + + self.params[key].append(built) except AttributeError as e: logger.exception(e) raise ValueError( + f"Params {key} ({self.params[key]}) is not a list and cannot be extended with {built}. " f"Error building node {self.display_name}: {str(e)}" ) from e @@ -625,12 +664,17 @@ class Vertex: self.build_inactive() return - if self.pinned and self._built: + if self.frozen and self._built: return self.get_requester_result(requester) + elif self._built and requester is not None: + # This means that the vertex has already been built + # and we are just getting the result for the requester + return await self.get_requester_result(requester) self._reset() if self._is_chat_input() and inputs is not None: - self.update_raw_params(inputs) + inputs = {"input_value": inputs.get(INPUT_FIELD_NAME, "")} + self.update_raw_params(inputs, overwrite=True) # Run steps for step in self.steps: @@ -682,12 +726,8 @@ class Vertex: def _built_object_repr(self): # Add a message with an emoji, stars for success, - return "Built sucessfully ✨" if self._built_object is not None else "Failed to build 😵‍💫" - - -class StatefulVertex(Vertex): - pass - - -class StatelessVertex(Vertex): - pass + return ( + "Built successfully ✨" + if self._built_object is not None + else "Failed to build 😵‍💫" + ) diff --git a/src/backend/langflow/graph/vertex/types.py b/src/backend/langflow/graph/vertex/types.py index 1c93729f1..99791729f 100644 --- a/src/backend/langflow/graph/vertex/types.py +++ b/src/backend/langflow/graph/vertex/types.py @@ -1,7 +1,6 @@ import ast import json -from typing import (AsyncIterator, Callable, Dict, Iterator, List, Optional, - Union) +from typing import AsyncIterator, Callable, Dict, Iterator, List, Optional, Union import yaml from langchain_core.messages import AIMessage @@ -9,14 +8,14 @@ from loguru import logger from langflow.graph.schema import INPUT_FIELD_NAME, 
InterfaceComponentTypes from langflow.graph.utils import UnbuiltObject, flatten_list, serialize_field -from langflow.graph.vertex.base import StatefulVertex, StatelessVertex +from langflow.graph.vertex.base import Vertex from langflow.interface.utils import extract_input_variables_from_prompt from langflow.schema import Record from langflow.services.monitor.utils import log_vertex_build from langflow.utils.schemas import ChatOutputResponse -class AgentVertex(StatelessVertex): +class AgentVertex(Vertex): def __init__(self, data: Dict, graph, params: Optional[Dict] = None): super().__init__(data, graph=graph, base_type="agents", params=params) @@ -59,12 +58,12 @@ class AgentVertex(StatelessVertex): await self._build(user_id=user_id) -class ToolVertex(StatelessVertex): +class ToolVertex(Vertex): def __init__(self, data: Dict, graph, params: Optional[Dict] = None): super().__init__(data, graph=graph, base_type="tools", params=params) -class LLMVertex(StatelessVertex): +class LLMVertex(Vertex): built_node_type = None class_built_object = None @@ -87,7 +86,7 @@ class LLMVertex(StatelessVertex): self.class_built_object = self._built_object -class ToolkitVertex(StatelessVertex): +class ToolkitVertex(Vertex): def __init__(self, data: Dict, graph, params=None): super().__init__(data, graph=graph, base_type="toolkits", params=params) @@ -101,7 +100,7 @@ class FileToolVertex(ToolVertex): ) -class WrapperVertex(StatelessVertex): +class WrapperVertex(Vertex): def __init__(self, data: Dict, graph, params=None): super().__init__(data, graph=graph, base_type="wrappers") self.steps: List[Callable] = [self._custom_build] @@ -115,7 +114,7 @@ class WrapperVertex(StatelessVertex): await self._build(user_id=user_id) -class DocumentLoaderVertex(StatefulVertex): +class DocumentLoaderVertex(Vertex): def __init__(self, data: Dict, graph, params: Optional[Dict] = None): super().__init__(data, graph=graph, base_type="documentloaders", params=params) @@ -135,12 +134,12 @@ class 
DocumentLoaderVertex(StatefulVertex): return f"{self.vertex_type}()" -class EmbeddingVertex(StatefulVertex): +class EmbeddingVertex(Vertex): def __init__(self, data: Dict, graph, params: Optional[Dict] = None): super().__init__(data, graph=graph, base_type="embeddings", params=params) -class VectorStoreVertex(StatefulVertex): +class VectorStoreVertex(Vertex): def __init__(self, data: Dict, graph, params=None): super().__init__(data, graph=graph, base_type="vectorstores") @@ -182,17 +181,17 @@ class VectorStoreVertex(StatefulVertex): self.remove_docs_and_texts_from_params() -class MemoryVertex(StatefulVertex): +class MemoryVertex(Vertex): def __init__(self, data: Dict, graph): super().__init__(data, graph=graph, base_type="memory") -class RetrieverVertex(StatefulVertex): +class RetrieverVertex(Vertex): def __init__(self, data: Dict, graph): super().__init__(data, graph=graph, base_type="retrievers") -class TextSplitterVertex(StatefulVertex): +class TextSplitterVertex(Vertex): def __init__(self, data: Dict, graph, params: Optional[Dict] = None): super().__init__(data, graph=graph, base_type="textsplitters", params=params) @@ -210,7 +209,7 @@ class TextSplitterVertex(StatefulVertex): return f"{self.vertex_type}()" -class ChainVertex(StatelessVertex): +class ChainVertex(Vertex): def __init__(self, data: Dict, graph): super().__init__(data, graph=graph, base_type="chains") self.steps = [self._custom_build] @@ -224,7 +223,7 @@ class ChainVertex(StatelessVertex): if isinstance(value, PromptVertex): # Build the PromptVertex, passing the tools if available tools = kwargs.get("tools", None) - self.params[key] = value.build(tools=tools, pinned=force) + self.params[key] = value.build(tools=tools, frozen=force) await self._build(user_id=user_id) @@ -240,7 +239,7 @@ class ChainVertex(StatelessVertex): return super()._built_object_repr() -class PromptVertex(StatelessVertex): +class PromptVertex(Vertex): def __init__(self, data: Dict, graph): super().__init__(data, graph=graph, 
base_type="prompts") self.steps: List[Callable] = [self._custom_build] @@ -324,12 +323,12 @@ class PromptVertex(StatelessVertex): return str(self._built_object) -class OutputParserVertex(StatelessVertex): +class OutputParserVertex(Vertex): def __init__(self, data: Dict, graph): super().__init__(data, graph=graph, base_type="output_parsers") -class CustomComponentVertex(StatelessVertex): +class CustomComponentVertex(Vertex): def __init__(self, data: Dict, graph): super().__init__(data, graph=graph, base_type="custom_components") @@ -338,7 +337,7 @@ class CustomComponentVertex(StatelessVertex): return self.artifacts["repr"] or super()._built_object_repr() -class ChatVertex(StatelessVertex): +class ChatVertex(Vertex): def __init__(self, data: Dict, graph): super().__init__(data, graph=graph, base_type="custom_components", is_task=True) self.steps = [self._build, self._run] @@ -398,7 +397,7 @@ class ChatVertex(StatelessVertex): self.will_stream = stream_url is not None if artifacts: - self.artifacts = artifacts.model_dump() + self.artifacts = artifacts.model_dump(exclude_none=True) if isinstance(self._built_object, (AsyncIterator, Iterator)): if self.params["return_record"]: self._built_object = Record(text=message, data=self.artifacts) @@ -460,7 +459,7 @@ class ChatVertex(StatelessVertex): return self.vertex_type == InterfaceComponentTypes.ChatInput and self.is_input -class RoutingVertex(StatelessVertex): +class RoutingVertex(Vertex): def __init__(self, data: Dict, graph): super().__init__(data, graph=graph, base_type="custom_components") self.use_result = True @@ -495,6 +494,18 @@ class RoutingVertex(StatelessVertex): self._built_result = None +class StateVertex(Vertex): + def __init__(self, data: Dict, graph): + super().__init__(data, graph=graph, base_type="custom_components") + self.steps = [self._build] + self.is_state = True + + @property + def successors_ids(self) -> List[str]: + successors = self.graph.successor_map.get(self.id, []) + return successors + 
self.graph.activated_vertices + + def dict_to_codeblock(d: dict) -> str: serialized = {key: serialize_field(val) for key, val in d.items()} json_str = json.dumps(serialized, indent=4) diff --git a/src/backend/langflow/helpers/record.py b/src/backend/langflow/helpers/record.py index 9e1f2eb34..dd480e4e7 100644 --- a/src/backend/langflow/helpers/record.py +++ b/src/backend/langflow/helpers/record.py @@ -30,8 +30,5 @@ def records_to_text(template: str, records: list[Record]) -> list[str]: records = [records] # Check if there are any format strings in the template - formated_records = [ - template.format(text=record.text, data=record.data, **record.data) - for record in records - ] + formated_records = [template.format(**record.data) for record in records] return "\n".join(formated_records) diff --git a/src/backend/langflow/initial_setup/__init__.py b/src/backend/langflow/initial_setup/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/src/backend/langflow/initial_setup/setup.py b/src/backend/langflow/initial_setup/setup.py new file mode 100644 index 000000000..040ae2594 --- /dev/null +++ b/src/backend/langflow/initial_setup/setup.py @@ -0,0 +1,169 @@ +from datetime import datetime +from pathlib import Path + +import orjson +from emoji import demojize, purely_emoji +from loguru import logger +from sqlmodel import select + +from langflow.interface.types import get_all_components +from langflow.services.database.models.flow.model import Flow, FlowCreate +from langflow.services.deps import get_settings_service, session_scope + +STARTER_FOLDER_NAME = "Starter Projects" + + +# In the folder ./starter_projects we have a few JSON files that represent +# starter projects. We want to load these into the database so that users +# can use them as a starting point for their own projects. 
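
The comment above describes the starter-project bootstrap: JSON flows in a folder are loaded and each node's `code` template is refreshed from the latest component definitions. A simplified standalone sketch of those two steps; the folder layout and the `nodes -> data -> node -> template -> code` key path follow the diff, but the helpers here are illustrations, not the real `initial_setup` module:

```python
import json
from pathlib import Path

def load_starter_projects(folder: Path) -> list[dict]:
    """Read every *.json file in the starter-projects folder."""
    return [json.loads(f.read_text()) for f in sorted(folder.glob("*.json"))]

def update_with_latest_components(project_data: dict, all_types: dict) -> dict:
    """Refresh each node's code template from the latest component dict."""
    for node in project_data.get("nodes", []):
        node_data = node["data"]["node"]
        latest = all_types.get(node_data.get("display_name"))
        if latest:
            node_data["template"]["code"] = latest["template"]["code"]
    return project_data
```

In the actual setup routine these run inside a database session that first deletes the old starter flows and then re-creates or updates them, so stale component code never survives an upgrade.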
+ + +def update_projects_components_with_latest_component_versions( + project_data, all_types_dict +): + + # project data has a nodes key, which is a list of nodes + # we want to run through each node and see if it exists in the all_types_dict + # if so, we go into the template key and also get the template from all_types_dict + # and update it all + for node in project_data.get("nodes", []): + node_data = node.get("data").get("node") + if node_data.get("display_name") in all_types_dict: + latest_node = all_types_dict.get(node_data.get("display_name")) + latest_template = latest_node.get("template") + node_data["template"]["code"] = latest_template["code"] + return project_data + + +def load_starter_projects(): + starter_projects = [] + folder = Path(__file__).parent / "starter_projects" + for file in folder.glob("*.json"): + project = orjson.loads(file.read_text()) + starter_projects.append(project) + logger.info(f"Loaded starter project {file}") + return starter_projects + + +def get_project_data(project): + project_name = project.get("name") + project_description = project.get("description") + project_is_component = project.get("is_component") + project_updated_at = project.get("updated_at") + if not project_updated_at: + project_updated_at = datetime.utcnow().isoformat() + updated_at_datetime = datetime.strptime(project_updated_at, "%Y-%m-%dT%H:%M:%S.%f") + project_data = project.get("data") + project_icon = project.get("icon") + project_icon_bg_color = project.get("icon_bg_color") + return ( + project_name, + project_description, + project_is_component, + updated_at_datetime, + project_data, + project_icon, + project_icon_bg_color, + ) + + +def update_existing_project( + existing_project, + project_name, + project_description, + project_is_component, + updated_at_datetime, + project_data, + project_icon, + project_icon_bg_color, +): + logger.info(f"Updating starter project {project_name}") + existing_project.data = project_data + existing_project.folder = 
STARTER_FOLDER_NAME + existing_project.description = project_description + existing_project.is_component = project_is_component + existing_project.updated_at = updated_at_datetime + existing_project.icon = project_icon + existing_project.icon_bg_color = project_icon_bg_color + + +def create_new_project( + session, + project_name, + project_description, + project_is_component, + updated_at_datetime, + project_data, + project_icon, + project_icon_bg_color, +): + logger.info(f"Creating starter project {project_name}") + new_project = FlowCreate( + name=project_name, + description=project_description, + icon=project_icon if not purely_emoji(project_icon) else demojize(project_icon), + icon_bg_color=project_icon_bg_color, + data=project_data, + is_component=project_is_component, + updated_at=updated_at_datetime, + folder=STARTER_FOLDER_NAME, + ) + db_flow = Flow.model_validate(new_project, from_attributes=True) + session.add(db_flow) + + +def get_all_flows_similar_to_project(session, project_name): + flows = session.exec( + select(Flow).where( + Flow.name == project_name, + Flow.folder == STARTER_FOLDER_NAME, + ) + ).all() + return flows + + +def delete_starter_projects(session): + flows = session.exec( + select(Flow).where( + Flow.folder == STARTER_FOLDER_NAME, + ) + ).all() + for flow in flows: + session.delete(flow) + + +def create_or_update_starter_projects(): + components_paths = get_settings_service().settings.COMPONENTS_PATH + all_types_dict = get_all_components(components_paths, as_dict=True) + with session_scope() as session: + starter_projects = load_starter_projects() + delete_starter_projects(session) + for project in starter_projects: + ( + project_name, + project_description, + project_is_component, + updated_at_datetime, + project_data, + project_icon, + project_icon_bg_color, + ) = get_project_data(project) + project_data = update_projects_components_with_latest_component_versions( project_data, all_types_dict ) + if project_name and project_data: + for
existing_project in get_all_flows_similar_to_project( + session, project_name + ): + session.delete(existing_project) + + create_new_project( + session, + project_name, + project_description, + project_is_component, + updated_at_datetime, + project_data, + project_icon, + project_icon_bg_color, + ) diff --git a/src/backend/langflow/initial_setup/starter_projects/Langflow Basic Prompting.json b/src/backend/langflow/initial_setup/starter_projects/Langflow Basic Prompting.json new file mode 100644 index 000000000..879cb1468 --- /dev/null +++ b/src/backend/langflow/initial_setup/starter_projects/Langflow Basic Prompting.json @@ -0,0 +1,1199 @@ +{ + "id": "4ac1ae80-b818-4fdf-b72c-f22dace784a5", + "icon": "📝", + "icon_bg_color": "#FFD700", + "data": { + "nodes": [ + { + "id": "ChatInput-WcFzs", + "type": "genericNode", + "position": { + "x": 86.66131544226482, + "y": 69.51987428063671 + }, + "data": { + "type": "ChatInput", + "node": { + "template": { + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Optional, Union\n\nfrom langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.schema import Record\n\n\nclass ChatInput(ChatComponent):\n display_name = \"Chat Input\"\n description = \"Used to get user input from the chat.\"\n\n def build(\n self,\n sender: Optional[str] = \"User\",\n sender_name: Optional[str] = \"User\",\n input_value: Optional[str] = None,\n session_id: Optional[str] = None,\n return_record: Optional[bool] = False,\n ) -> Union[Text, Record]:\n return super().build(\n sender=sender,\n sender_name=sender_name,\n input_value=input_value,\n session_id=session_id,\n return_record=return_record,\n )\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "input_value": { + "type": "str", + "required": 
false, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "input_value", + "display_name": "Message", + "advanced": false, + "input_types": [ + "Text" + ], + "dynamic": false, + "info": "", + "title_case": false, + "value": "Write a press release " + }, + "return_record": { + "type": "bool", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "return_record", + "display_name": "Return Record", + "advanced": false, + "dynamic": false, + "info": "Return the message as a record containing the sender, sender_name, and session_id.", + "title_case": false + }, + "sender": { + "type": "str", + "required": false, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "value": "User", + "fileTypes": [], + "file_path": "", + "password": false, + "options": [ + "Machine", + "User" + ], + "name": "sender", + "display_name": "Sender Type", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "sender_name": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": "User", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "sender_name", + "display_name": "Sender Name", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "session_id": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "session_id", + "display_name": "Session ID", + "advanced": false, + "dynamic": false, + "info": "If provided, the message will be stored in the memory.", + "title_case": false, + "input_types": [ + "Text" + ] + 
}, + "_type": "CustomComponent" + }, + "description": "Used to get user input from the chat.", + "base_classes": [ + "object", + "Text", + "Record", + "str" + ], + "display_name": "Chat Input", + "documentation": "", + "custom_fields": { + "sender": null, + "sender_name": null, + "input_value": null, + "session_id": null, + "return_record": null + }, + "output_types": [ + "Text", + "Record" + ], + "field_formatters": {}, + "frozen": false, + "field_order": [], + "beta": true + }, + "id": "ChatInput-WcFzs" + }, + "selected": false, + "width": 384, + "height": 667, + "positionAbsolute": { + "x": 86.66131544226482, + "y": 69.51987428063671 + }, + "dragging": false + }, + { + "id": "Prompt-QtWOn", + "type": "genericNode", + "position": { + "x": 731.5380376186406, + "y": 273.5294585628963 + }, + "data": { + "type": "Prompt", + "node": { + "template": { + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from langchain_core.prompts import PromptTemplate\n\nfrom langflow import CustomComponent\nfrom langflow.field_typing import Prompt, TemplateField, Text\n\n\nclass PromptComponent(CustomComponent):\n display_name: str = \"Prompt\"\n description: str = \"A component for creating prompts using templates\"\n beta = True\n\n def build_config(self):\n return {\n \"template\": TemplateField(display_name=\"Template\"),\n \"code\": TemplateField(advanced=True),\n }\n\n def build(\n self,\n template: Prompt,\n **kwargs,\n ) -> Text:\n prompt_template = PromptTemplate.from_template(Text(template))\n\n attributes_to_check = [\"text\", \"page_content\"]\n for key, value in kwargs.copy().items():\n for attribute in attributes_to_check:\n if hasattr(value, attribute):\n kwargs[key] = getattr(value, attribute)\n\n try:\n formated_prompt = prompt_template.format(**kwargs)\n except Exception as exc:\n raise ValueError(f\"Error formatting prompt: {exc}\") from exc\n self.status = f'Prompt: 
\"{formated_prompt}\"'\n return formated_prompt\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": true, + "dynamic": true, + "info": "", + "title_case": false + }, + "template": { + "type": "prompt", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": "{request}\n\n- {topic_1}\n- {topic_2}\n\n\nAnswer:\n\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "template", + "display_name": "Template", + "advanced": false, + "input_types": [ + "Text" + ], + "dynamic": false, + "info": "", + "title_case": false + }, + "_type": "CustomComponent", + "request": { + "field_type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "request", + "display_name": "request", + "advanced": false, + "input_types": [ + "Document", + "BaseOutputParser", + "Text", + "Record" + ], + "dynamic": false, + "info": "", + "title_case": false, + "type": "str" + }, + "topic_1": { + "field_type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "topic_1", + "display_name": "topic_1", + "advanced": false, + "input_types": [ + "Document", + "BaseOutputParser", + "Text", + "Record" + ], + "dynamic": false, + "info": "", + "title_case": false, + "type": "str" + }, + "topic_2": { + "field_type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "topic_2", + "display_name": "topic_2", + "advanced": false, + "input_types": [ + "Document", + "BaseOutputParser", + "Text", + "Record" + ], + "dynamic": false, + "info": "", + "title_case": false, + "type": "str" + } + }, + 
"description": "A component for creating prompts using templates", + "icon": null, + "is_input": null, + "is_output": null, + "is_composition": null, + "base_classes": [ + "object", + "Text", + "str" + ], + "name": "", + "display_name": "Prompt", + "documentation": "", + "custom_fields": { + "template": [ + "request", + "topic_1", + "topic_2" + ] + }, + "output_types": [ + "Text" + ], + "full_path": null, + "field_formatters": {}, + "frozen": false, + "field_order": [], + "beta": true, + "error": null + }, + "id": "Prompt-QtWOn", + "description": "A component for creating prompts using templates", + "display_name": "Prompt" + }, + "selected": false, + "width": 384, + "height": 571, + "dragging": false + }, + { + "id": "TextInput-xUQ9w", + "type": "genericNode", + "position": { + "x": 91.73477837172948, + "y": 787.6263883143245 + }, + "data": { + "type": "TextInput", + "node": { + "template": { + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Optional\n\nfrom langflow.base.io.text import TextComponent\nfrom langflow.field_typing import Text\n\n\nclass TextInput(TextComponent):\n display_name = \"Text Input\"\n description = \"Used to pass text input to the next component.\"\n\n def build(self, input_value: Optional[str] = \"\") -> Text:\n return super().build(input_value=input_value)\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "input_value": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "Cars", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "input_value", + "display_name": "Value", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "_type": "CustomComponent" + }, + 
"description": "Used to pass text input to the next component.", + "base_classes": [ + "object", + "Text", + "str" + ], + "display_name": "Topic 1", + "documentation": "", + "custom_fields": { + "input_value": null + }, + "output_types": [ + "Text" + ], + "field_formatters": {}, + "frozen": false, + "field_order": [ + "input_value" + ], + "beta": true + }, + "id": "TextInput-xUQ9w" + }, + "selected": false, + "width": 384, + "height": 289, + "positionAbsolute": { + "x": 91.73477837172948, + "y": 787.6263883143245 + }, + "dragging": false + }, + { + "id": "TextInput-l4zQt", + "type": "genericNode", + "position": { + "x": 93.56470545178581, + "y": 1125.2986229040628 + }, + "data": { + "type": "TextInput", + "node": { + "template": { + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Optional\n\nfrom langflow.base.io.text import TextComponent\nfrom langflow.field_typing import Text\n\n\nclass TextInput(TextComponent):\n display_name = \"Text Input\"\n description = \"Used to pass text input to the next component.\"\n\n def build(self, input_value: Optional[str] = \"\") -> Text:\n return super().build(input_value=input_value)\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "input_value": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "Bottle", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "input_value", + "display_name": "Value", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "_type": "CustomComponent" + }, + "description": "Used to pass text input to the next component.", + "base_classes": [ + "object", + "Text", + "str" + ], + "display_name": "Topic 2", + "documentation": "", + 
"custom_fields": { + "input_value": null + }, + "output_types": [ + "Text" + ], + "field_formatters": {}, + "frozen": false, + "field_order": [ + "input_value" + ], + "beta": true + }, + "id": "TextInput-l4zQt" + }, + "selected": false, + "width": 384, + "height": 289, + "positionAbsolute": { + "x": 93.56470545178581, + "y": 1125.2986229040628 + }, + "dragging": false + }, + { + "id": "TextOutput-fTp5e", + "type": "genericNode", + "position": { + "x": 1242.6494961686594, + "y": 100.3023112016921 + }, + "data": { + "type": "TextOutput", + "node": { + "template": { + "input_value": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": "", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "input_value", + "display_name": "Value", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Optional\n\nfrom langflow.base.io.text import TextComponent\nfrom langflow.field_typing import Text\n\n\nclass TextOutput(TextComponent):\n display_name = \"Text Output\"\n description = \"Used to pass text output to the next component.\"\n\n field_config = {\n \"input_value\": {\"display_name\": \"Value\"},\n }\n\n def build(self, input_value: Optional[Text] = \"\") -> Text:\n return super().build(input_value=input_value)\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "_type": "CustomComponent" + }, + "description": "Used to pass text output to the next component.", + "base_classes": [ + "object", + "Text", + "str" + ], + "display_name": "Prompt Output", + "documentation": "", + "custom_fields": { + "input_value": null + }, + "output_types": [ + "Text" + ], + 
"field_formatters": {}, + "frozen": false, + "field_order": [ + "input_value" + ], + "beta": true + }, + "id": "TextOutput-fTp5e" + }, + "selected": false, + "width": 384, + "height": 297, + "positionAbsolute": { + "x": 1242.6494961686594, + "y": 100.3023112016921 + }, + "dragging": false + }, + { + "id": "ChatOutput-AVN8s", + "type": "genericNode", + "position": { + "x": 2299.2806014585203, + "y": 449.2461295937437 + }, + "data": { + "type": "ChatOutput", + "node": { + "template": { + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Optional, Union\n\nfrom langflow.base.io.chat import ChatComponent\nfrom langflow.field_typing import Text\nfrom langflow.schema import Record\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Used to send a message to the chat.\"\n\n def build(\n self,\n sender: Optional[str] = \"Machine\",\n sender_name: Optional[str] = \"AI\",\n input_value: Optional[str] = None,\n session_id: Optional[str] = None,\n return_record: Optional[bool] = False,\n ) -> Union[Text, Record]:\n return super().build(\n sender=sender,\n sender_name=sender_name,\n input_value=input_value,\n session_id=session_id,\n return_record=return_record,\n )\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "input_value": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "input_value", + "display_name": "Message", + "advanced": false, + "input_types": [ + "Text" + ], + "dynamic": false, + "info": "", + "title_case": false + }, + "return_record": { + "type": "bool", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": false, + 
"fileTypes": [], + "file_path": "", + "password": false, + "name": "return_record", + "display_name": "Return Record", + "advanced": false, + "dynamic": false, + "info": "Return the message as a record containing the sender, sender_name, and session_id.", + "title_case": false + }, + "sender": { + "type": "str", + "required": false, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "value": "Machine", + "fileTypes": [], + "file_path": "", + "password": false, + "options": [ + "Machine", + "User" + ], + "name": "sender", + "display_name": "Sender Type", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "sender_name": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": "AI", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "sender_name", + "display_name": "Sender Name", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "session_id": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "session_id", + "display_name": "Session ID", + "advanced": false, + "dynamic": false, + "info": "If provided, the message will be stored in the memory.", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "_type": "CustomComponent" + }, + "description": "Used to send a message to the chat.", + "base_classes": [ + "object", + "Text", + "Record", + "str" + ], + "display_name": "Chat Output", + "documentation": "", + "custom_fields": { + "sender": null, + "sender_name": null, + "input_value": null, + "session_id": null, + "return_record": null + }, + "output_types": [ + "Text", + "Record" + ], + "field_formatters": {}, + "frozen": false, + "field_order": [], + "beta": true + }, + "id": 
"ChatOutput-AVN8s" + }, + "selected": false, + "width": 384, + "height": 667, + "positionAbsolute": { + "x": 2299.2806014585203, + "y": 449.2461295937437 + }, + "dragging": false + }, + { + "id": "OpenAIModel-IRzsd", + "type": "genericNode", + "position": { + "x": 1735.1051821296949, + "y": 246.4955882724468 + }, + "data": { + "type": "OpenAIModel", + "node": { + "template": { + "input_value": { + "type": "str", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "input_value", + "display_name": "Input", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Optional\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.components.models.base.model import LCModelComponent\nfrom langflow.field_typing import NestedDict, Text\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI Model\"\n description = \"Generates text using OpenAI's models.\"\n icon = \"OpenAI\"\n\n def build_config(self):\n return {\n \"input_value\": {\"display_name\": \"Input\"},\n \"max_tokens\": {\n \"display_name\": \"Max Tokens\",\n \"advanced\": False,\n \"required\": False,\n },\n \"model_kwargs\": {\n \"display_name\": \"Model Kwargs\",\n \"advanced\": True,\n \"required\": False,\n },\n \"model_name\": {\n \"display_name\": \"Model Name\",\n \"advanced\": False,\n \"required\": False,\n \"options\": [\n \"gpt-4-turbo-preview\",\n \"gpt-4-0125-preview\",\n \"gpt-4-1106-preview\",\n \"gpt-4-vision-preview\",\n \"gpt-3.5-turbo-0125\",\n \"gpt-3.5-turbo-1106\",\n ],\n },\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"advanced\": False,\n \"required\": False,\n \"info\": (\n \"The base 
URL of the OpenAI API. Defaults to https://api.openai.com/v1.\\n\\n\"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\"\n ),\n },\n \"openai_api_key\": {\n \"display_name\": \"OpenAI API Key\",\n \"advanced\": False,\n \"required\": False,\n \"password\": True,\n },\n \"temperature\": {\n \"display_name\": \"Temperature\",\n \"advanced\": False,\n \"required\": False,\n \"value\": 0.7,\n },\n \"stream\": {\n \"display_name\": \"Stream\",\n \"info\": \"Stream the response from the model.\",\n },\n }\n\n def build(\n self,\n input_value: Text,\n max_tokens: Optional[int] = 256,\n model_kwargs: NestedDict = {},\n model_name: str = \"gpt-4-1106-preview\",\n openai_api_base: Optional[str] = None,\n openai_api_key: Optional[str] = None,\n temperature: float = 0.7,\n stream: bool = False,\n ) -> Text:\n if not openai_api_base:\n openai_api_base = \"https://api.openai.com/v1\"\n if openai_api_key:\n secret_key = SecretStr(openai_api_key)\n else:\n secret_key = None\n output = ChatOpenAI(\n max_tokens=max_tokens,\n model_kwargs=model_kwargs,\n model=model_name,\n base_url=openai_api_base,\n api_key=secret_key,\n temperature=temperature,\n )\n\n return self.get_result(output=output, stream=stream, input_value=input_value)\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "max_tokens": { + "type": "int", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": 256, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "max_tokens", + "display_name": "Max Tokens", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "model_kwargs": { + "type": "NestedDict", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": {}, + "fileTypes": [], + "file_path": "", + "password": false, + "name": 
"model_kwargs", + "display_name": "Model Kwargs", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "model_name": { + "type": "str", + "required": false, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "value": "gpt-4-1106-preview", + "fileTypes": [], + "file_path": "", + "password": false, + "options": [ + "gpt-4-turbo-preview", + "gpt-4-0125-preview", + "gpt-4-1106-preview", + "gpt-4-vision-preview", + "gpt-3.5-turbo-0125", + "gpt-3.5-turbo-1106" + ], + "name": "model_name", + "display_name": "Model Name", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "openai_api_base": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "openai_api_base", + "display_name": "OpenAI API Base", + "advanced": false, + "dynamic": false, + "info": "The base URL of the OpenAI API. 
Defaults to https://api.openai.com/v1.\n\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "openai_api_key": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": true, + "name": "openai_api_key", + "display_name": "OpenAI API Key", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "stream": { + "type": "bool", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": true, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "stream", + "display_name": "Stream", + "advanced": false, + "dynamic": false, + "info": "Stream the response from the model.", + "title_case": false + }, + "temperature": { + "type": "float", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": "0.2", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "temperature", + "display_name": "Temperature", + "advanced": false, + "dynamic": false, + "info": "", + "rangeSpec": { + "min": -1, + "max": 1, + "step": 0.1 + }, + "title_case": false + }, + "_type": "CustomComponent" + }, + "description": "Generates text using OpenAI's models.", + "icon": "OpenAI", + "base_classes": [ + "object", + "Text", + "str" + ], + "display_name": "OpenAI Model", + "documentation": "", + "custom_fields": { + "input_value": null, + "max_tokens": null, + "model_kwargs": null, + "model_name": null, + "openai_api_base": null, + "openai_api_key": null, + "temperature": null, + "stream": null + }, + "output_types": [ + "Text" + ], + "field_formatters": {}, + "frozen": false, + "field_order": [], + "beta": true + }, + "id": "OpenAIModel-IRzsd" + }, + "selected": false, + "width": 384, + "height": 847, + "positionAbsolute": { + 
"x": 1735.1051821296949, + "y": 246.4955882724468 + }, + "dragging": false + } + ], + "edges": [ + { + "source": "ChatInput-WcFzs", + "sourceHandle": "{œbaseClassesœ:[œobjectœ,œTextœ,œRecordœ,œstrœ],œdataTypeœ:œChatInputœ,œidœ:œChatInput-WcFzsœ}", + "target": "Prompt-QtWOn", + "targetHandle": "{œfieldNameœ:œrequestœ,œidœ:œPrompt-QtWOnœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œTextœ,œRecordœ],œtypeœ:œstrœ}", + "data": { + "targetHandle": { + "fieldName": "request", + "id": "Prompt-QtWOn", + "inputTypes": [ + "Document", + "BaseOutputParser", + "Text", + "Record" + ], + "type": "str" + }, + "sourceHandle": { + "baseClasses": [ + "object", + "Text", + "Record", + "str" + ], + "dataType": "ChatInput", + "id": "ChatInput-WcFzs" + } + }, + "style": { + "stroke": "#555" + }, + "className": "stroke-foreground stroke-connection", + "id": "reactflow__edge-ChatInput-WcFzs{œbaseClassesœ:[œobjectœ,œTextœ,œRecordœ,œstrœ],œdataTypeœ:œChatInputœ,œidœ:œChatInput-WcFzsœ}-Prompt-QtWOn{œfieldNameœ:œrequestœ,œidœ:œPrompt-QtWOnœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œTextœ,œRecordœ],œtypeœ:œstrœ}" + }, + { + "source": "Prompt-QtWOn", + "sourceHandle": "{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œPromptœ,œidœ:œPrompt-QtWOnœ}", + "target": "TextOutput-fTp5e", + "targetHandle": "{œfieldNameœ:œinput_valueœ,œidœ:œTextOutput-fTp5eœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}", + "data": { + "targetHandle": { + "fieldName": "input_value", + "id": "TextOutput-fTp5e", + "inputTypes": [ + "Text" + ], + "type": "str" + }, + "sourceHandle": { + "baseClasses": [ + "object", + "Text", + "str" + ], + "dataType": "Prompt", + "id": "Prompt-QtWOn" + } + }, + "style": { + "stroke": "#555" + }, + "className": "stroke-foreground stroke-connection", + "id": "reactflow__edge-Prompt-QtWOn{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œPromptœ,œidœ:œPrompt-QtWOnœ}-TextOutput-fTp5e{œfieldNameœ:œinput_valueœ,œidœ:œTextOutput-fTp5eœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}" + }, + { + "source": 
"TextOutput-fTp5e", + "sourceHandle": "{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œTextOutputœ,œidœ:œTextOutput-fTp5eœ}", + "target": "OpenAIModel-IRzsd", + "targetHandle": "{œfieldNameœ:œinput_valueœ,œidœ:œOpenAIModel-IRzsdœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}", + "data": { + "targetHandle": { + "fieldName": "input_value", + "id": "OpenAIModel-IRzsd", + "inputTypes": [ + "Text" + ], + "type": "str" + }, + "sourceHandle": { + "baseClasses": [ + "object", + "Text", + "str" + ], + "dataType": "TextOutput", + "id": "TextOutput-fTp5e" + } + }, + "style": { + "stroke": "#555" + }, + "className": "stroke-foreground stroke-connection", + "id": "reactflow__edge-TextOutput-fTp5e{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œTextOutputœ,œidœ:œTextOutput-fTp5eœ}-OpenAIModel-IRzsd{œfieldNameœ:œinput_valueœ,œidœ:œOpenAIModel-IRzsdœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}" + }, + { + "source": "OpenAIModel-IRzsd", + "sourceHandle": "{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œOpenAIModelœ,œidœ:œOpenAIModel-IRzsdœ}", + "target": "ChatOutput-AVN8s", + "targetHandle": "{œfieldNameœ:œinput_valueœ,œidœ:œChatOutput-AVN8sœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}", + "data": { + "targetHandle": { + "fieldName": "input_value", + "id": "ChatOutput-AVN8s", + "inputTypes": [ + "Text" + ], + "type": "str" + }, + "sourceHandle": { + "baseClasses": [ + "object", + "Text", + "str" + ], + "dataType": "OpenAIModel", + "id": "OpenAIModel-IRzsd" + } + }, + "style": { + "stroke": "#555" + }, + "className": "stroke-foreground stroke-connection", + "id": "reactflow__edge-OpenAIModel-IRzsd{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œOpenAIModelœ,œidœ:œOpenAIModel-IRzsdœ}-ChatOutput-AVN8s{œfieldNameœ:œinput_valueœ,œidœ:œChatOutput-AVN8sœ,œinputTypesœ:[œTextœ],œtypeœ:œstrœ}" + }, + { + "source": "TextInput-l4zQt", + "sourceHandle": "{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œTextInputœ,œidœ:œTextInput-l4zQtœ}", + "target": "Prompt-QtWOn", + "targetHandle": 
"{œfieldNameœ:œtopic_2œ,œidœ:œPrompt-QtWOnœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œTextœ,œRecordœ],œtypeœ:œstrœ}", + "data": { + "targetHandle": { + "fieldName": "topic_2", + "id": "Prompt-QtWOn", + "inputTypes": [ + "Document", + "BaseOutputParser", + "Text", + "Record" + ], + "type": "str" + }, + "sourceHandle": { + "baseClasses": [ + "object", + "Text", + "str" + ], + "dataType": "TextInput", + "id": "TextInput-l4zQt" + } + }, + "style": { + "stroke": "#555" + }, + "className": "stroke-foreground stroke-connection", + "id": "reactflow__edge-TextInput-l4zQt{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œTextInputœ,œidœ:œTextInput-l4zQtœ}-Prompt-QtWOn{œfieldNameœ:œtopic_2œ,œidœ:œPrompt-QtWOnœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œTextœ,œRecordœ],œtypeœ:œstrœ}" + }, + { + "source": "TextInput-xUQ9w", + "sourceHandle": "{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œTextInputœ,œidœ:œTextInput-xUQ9wœ}", + "target": "Prompt-QtWOn", + "targetHandle": "{œfieldNameœ:œtopic_1œ,œidœ:œPrompt-QtWOnœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œTextœ,œRecordœ],œtypeœ:œstrœ}", + "data": { + "targetHandle": { + "fieldName": "topic_1", + "id": "Prompt-QtWOn", + "inputTypes": [ + "Document", + "BaseOutputParser", + "Text", + "Record" + ], + "type": "str" + }, + "sourceHandle": { + "baseClasses": [ + "object", + "Text", + "str" + ], + "dataType": "TextInput", + "id": "TextInput-xUQ9w" + } + }, + "style": { + "stroke": "#555" + }, + "className": "stroke-foreground stroke-connection", + "id": "reactflow__edge-TextInput-xUQ9w{œbaseClassesœ:[œobjectœ,œTextœ,œstrœ],œdataTypeœ:œTextInputœ,œidœ:œTextInput-xUQ9wœ}-Prompt-QtWOn{œfieldNameœ:œtopic_1œ,œidœ:œPrompt-QtWOnœ,œinputTypesœ:[œDocumentœ,œBaseOutputParserœ,œTextœ,œRecordœ],œtypeœ:œstrœ}" + } + ], + "viewport": { + "x": 81.87154098468557, + "y": 266.8627952720353, + "zoom": 0.315125847895746 + } + }, + "description": "Use a language model to generate text based on a prompt. 
\n\nIn this project, you'll be able to generate text based on a request and some topics.\n\nThe Topic 1 and Topic 2 components are actually Text Input components, while the Prompt Output component is a Text Output. Renaming the components makes them easier to identify when interacting with them.", + "name": "Basic Prompting", + "last_tested_version": "0.6.8", + "is_component": false +} \ No newline at end of file diff --git a/src/backend/langflow/initial_setup/starter_projects/Langflow Data Ingestion.json b/src/backend/langflow/initial_setup/starter_projects/Langflow Data Ingestion.json new file mode 100644 index 000000000..e40ffda57 --- /dev/null +++ b/src/backend/langflow/initial_setup/starter_projects/Langflow Data Ingestion.json @@ -0,0 +1,1087 @@ +{ + "name": "Data Ingestion", + "icon": ":inbox_tray:", + "icon_bg_color": "#FFD700", + "description": "This project is the starting point to insert data into a Vector Store. \n\nWe use the Chroma Vector Store, but you can replace it with any other Vector Store. 
\n\nYou start by deciding what type of data you want to load, then you pick a place where you want to store the vectors and run it.\n\nThis will create a vector store in your local environment, which you can query using the Chroma Search component.", + "data": { + "nodes": [ + { + "id": "RecursiveCharacterTextSplitter-jwfyG", + "type": "genericNode", + "position": { + "x": 1042.4388767006992, + "y": 633.2204634490822 + }, + "data": { + "type": "RecursiveCharacterTextSplitter", + "node": { + "template": { + "inputs": { + "type": "Document", + "required": true, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "inputs", + "display_name": "Input", + "advanced": false, + "input_types": [ + "Document", + "Record" + ], + "dynamic": false, + "info": "The texts to split.", + "title_case": false + }, + "chunk_overlap": { + "type": "int", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": 200, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "chunk_overlap", + "display_name": "Chunk Overlap", + "advanced": false, + "dynamic": false, + "info": "The amount of overlap between chunks.", + "title_case": false + }, + "chunk_size": { + "type": "int", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": 1000, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "chunk_size", + "display_name": "Chunk Size", + "advanced": false, + "dynamic": false, + "info": "The maximum length of each chunk.", + "title_case": false + }, + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Optional\n\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain_core.documents import Document\n\nfrom langflow import CustomComponent\nfrom 
langflow.schema import Record\nfrom langflow.utils.util import build_loader_repr_from_documents\n\n\nclass RecursiveCharacterTextSplitterComponent(CustomComponent):\n display_name: str = \"Recursive Character Text Splitter\"\n description: str = \"Split text into chunks of a specified length.\"\n documentation: str = (\n \"https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter\"\n )\n\n def build_config(self):\n return {\n \"inputs\": {\n \"display_name\": \"Input\",\n \"info\": \"The texts to split.\",\n \"input_types\": [\"Document\", \"Record\"],\n },\n \"separators\": {\n \"display_name\": \"Separators\",\n \"info\": 'The characters to split on.\\nIf left empty defaults to [\"\\\\n\\\\n\", \"\\\\n\", \" \", \"\"].',\n \"is_list\": True,\n },\n \"chunk_size\": {\n \"display_name\": \"Chunk Size\",\n \"info\": \"The maximum length of each chunk.\",\n \"field_type\": \"int\",\n \"value\": 1000,\n },\n \"chunk_overlap\": {\n \"display_name\": \"Chunk Overlap\",\n \"info\": \"The amount of overlap between chunks.\",\n \"field_type\": \"int\",\n \"value\": 200,\n },\n \"code\": {\"show\": False},\n }\n\n def build(\n self,\n inputs: list[Document],\n separators: Optional[list[str]] = None,\n chunk_size: Optional[int] = 1000,\n chunk_overlap: Optional[int] = 200,\n ) -> list[Record]:\n \"\"\"\n Split text into chunks of a specified length.\n\n Args:\n separators (list[str]): The characters to split on.\n chunk_size (int): The maximum length of each chunk.\n chunk_overlap (int): The amount of overlap between chunks.\n length_function (function): The function to use to calculate the length of the text.\n\n Returns:\n list[str]: The chunks of text.\n \"\"\"\n\n if separators == \"\":\n separators = None\n elif separators:\n # check if the separators list has escaped characters\n # if there are escaped characters, unescape them\n separators = [x.encode().decode(\"unicode-escape\") for x in separators]\n\n # Make sure chunk_size and 
chunk_overlap are ints\n if isinstance(chunk_size, str):\n chunk_size = int(chunk_size)\n if isinstance(chunk_overlap, str):\n chunk_overlap = int(chunk_overlap)\n splitter = RecursiveCharacterTextSplitter(\n separators=separators,\n chunk_size=chunk_size,\n chunk_overlap=chunk_overlap,\n )\n documents = []\n for _input in inputs:\n if isinstance(_input, Record):\n documents.append(_input.to_lc_document())\n else:\n documents.append(_input)\n docs = splitter.split_documents(documents)\n self.repr_value = build_loader_repr_from_documents(docs)\n return self.to_records(docs)\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "separators": { + "type": "str", + "required": false, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "separators", + "display_name": "Separators", + "advanced": false, + "dynamic": false, + "info": "The characters to split on.\nIf left empty defaults to [\"\\n\\n\", \"\\n\", \" \", \"\"].", + "title_case": false, + "input_types": [ + "Text" + ], + "value": [ + "\\n" + ] + }, + "_type": "CustomComponent" + }, + "description": "Split text into chunks of a specified length.", + "base_classes": [ + "Record" + ], + "display_name": "Recursive Character Text Splitter", + "documentation": "https://docs.langflow.org/components/text-splitters#recursivecharactertextsplitter", + "custom_fields": { + "inputs": null, + "separators": null, + "chunk_size": null, + "chunk_overlap": null + }, + "output_types": [ + "Record" + ], + "field_formatters": {}, + "frozen": false, + "beta": true + }, + "id": "RecursiveCharacterTextSplitter-jwfyG" + }, + "selected": false, + "width": 384, + "height": 509, + "positionAbsolute": { + "x": 1042.4388767006992, + "y": 633.2204634490822 + }, + "dragging": false + }, + { + "id": "Chroma-aFGHF", + "type": "genericNode", + 
"position": { + "x": 1641.280676720732, + "y": 356.94961598422196 + }, + "data": { + "type": "Chroma", + "node": { + "template": { + "embedding": { + "type": "Embeddings", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "embedding", + "display_name": "Embedding", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "inputs": { + "type": "Record", + "required": false, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "inputs", + "display_name": "Input", + "advanced": false, + "input_types": [ + "Document", + "Record" + ], + "dynamic": false, + "info": "", + "title_case": false + }, + "chroma_server_cors_allow_origins": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "chroma_server_cors_allow_origins", + "display_name": "Server CORS Allow Origins", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "chroma_server_grpc_port": { + "type": "int", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "chroma_server_grpc_port", + "display_name": "Server gRPC Port", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "chroma_server_host": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "chroma_server_host", + "display_name": "Server Host", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + 
"chroma_server_port": { + "type": "int", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "chroma_server_port", + "display_name": "Server Port", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "chroma_server_ssl_enabled": { + "type": "bool", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "chroma_server_ssl_enabled", + "display_name": "Server SSL Enabled", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import List, Optional, Union\n\nimport chromadb # type: ignore\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever\nfrom langchain_community.vectorstores import VectorStore\nfrom langchain_community.vectorstores.chroma import Chroma\n\nfrom langflow import CustomComponent\nfrom langflow.schema.schema import Record\n\n\nclass ChromaComponent(CustomComponent):\n \"\"\"\n A custom component for implementing a Vector Store using Chroma.\n \"\"\"\n\n display_name: str = \"Chroma\"\n description: str = \"Implementation of Vector Store using Chroma\"\n documentation = \"https://python.langchain.com/docs/integrations/vectorstores/chroma\"\n beta: bool = True\n icon = \"Chroma\"\n\n def build_config(self):\n \"\"\"\n Builds the configuration for the component.\n\n Returns:\n - dict: A dictionary containing the configuration options for the component.\n \"\"\"\n return {\n \"collection_name\": {\"display_name\": \"Collection Name\", \"value\": \"langflow\"},\n \"index_directory\": {\"display_name\": \"Persist Directory\"},\n \"code\": {\"advanced\": True, \"display_name\": 
\"Code\"},\n \"inputs\": {\"display_name\": \"Input\", \"input_types\": [\"Document\", \"Record\"]},\n \"embedding\": {\"display_name\": \"Embedding\"},\n \"chroma_server_cors_allow_origins\": {\n \"display_name\": \"Server CORS Allow Origins\",\n \"advanced\": True,\n },\n \"chroma_server_host\": {\"display_name\": \"Server Host\", \"advanced\": True},\n \"chroma_server_port\": {\"display_name\": \"Server Port\", \"advanced\": True},\n \"chroma_server_grpc_port\": {\n \"display_name\": \"Server gRPC Port\",\n \"advanced\": True,\n },\n \"chroma_server_ssl_enabled\": {\n \"display_name\": \"Server SSL Enabled\",\n \"advanced\": True,\n },\n }\n\n def build(\n self,\n collection_name: str,\n embedding: Embeddings,\n chroma_server_ssl_enabled: bool,\n index_directory: Optional[str] = None,\n inputs: Optional[List[Record]] = None,\n chroma_server_cors_allow_origins: Optional[str] = None,\n chroma_server_host: Optional[str] = None,\n chroma_server_port: Optional[int] = None,\n chroma_server_grpc_port: Optional[int] = None,\n ) -> Union[VectorStore, BaseRetriever]:\n \"\"\"\n Builds the Vector Store or BaseRetriever object.\n\n Args:\n - collection_name (str): The name of the collection.\n - index_directory (Optional[str]): The directory to persist the Vector Store to.\n - chroma_server_ssl_enabled (bool): Whether to enable SSL for the Chroma server.\n - embedding (Optional[Embeddings]): The embeddings to use for the Vector Store.\n - documents (Optional[Document]): The documents to use for the Vector Store.\n - chroma_server_cors_allow_origins (Optional[str]): The CORS allow origins for the Chroma server.\n - chroma_server_host (Optional[str]): The host for the Chroma server.\n - chroma_server_port (Optional[int]): The port for the Chroma server.\n - chroma_server_grpc_port (Optional[int]): The gRPC port for the Chroma server.\n\n Returns:\n - Union[VectorStore, BaseRetriever]: The Vector Store or BaseRetriever object.\n \"\"\"\n\n # Chroma settings\n chroma_settings = 
None\n\n if chroma_server_host is not None:\n chroma_settings = chromadb.config.Settings(\n chroma_server_cors_allow_origins=chroma_server_cors_allow_origins\n or None,\n chroma_server_host=chroma_server_host,\n chroma_server_port=chroma_server_port or None,\n chroma_server_grpc_port=chroma_server_grpc_port or None,\n chroma_server_ssl_enabled=chroma_server_ssl_enabled,\n )\n\n # If documents, then we need to create a Chroma instance using .from_documents\n\n # Check index_directory and expand it if it is a relative path\n if index_directory is not None:\n index_directory = self.resolve_path(index_directory)\n\n documents = []\n for _input in inputs:\n if isinstance(_input, Record):\n documents.append(_input.to_lc_document())\n else:\n documents.append(_input)\n if documents is not None and embedding is not None:\n if len(documents) == 0:\n raise ValueError(\n \"If documents are provided, there must be at least one document.\"\n )\n chroma = Chroma.from_documents(\n documents=documents, # type: ignore\n persist_directory=index_directory,\n collection_name=collection_name,\n embedding=embedding,\n client_settings=chroma_settings,\n )\n else:\n chroma = Chroma(\n persist_directory=index_directory,\n client_settings=chroma_settings,\n embedding_function=embedding,\n )\n return chroma\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": true, + "dynamic": true, + "info": "", + "title_case": false + }, + "collection_name": { + "type": "str", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": "langflow_contrib", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "collection_name", + "display_name": "Collection Name", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "index_directory": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": 
false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "index_directory", + "display_name": "Persist Directory", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ], + "value": "./chroma_langflow" + }, + "_type": "CustomComponent" + }, + "description": "Implementation of Vector Store using Chroma", + "icon": "Chroma", + "base_classes": [ + "Serializable", + "VectorStore", + "object", + "Runnable", + "BaseRetriever", + "RunnableSerializable", + "Generic" + ], + "display_name": "Chroma", + "documentation": "https://python.langchain.com/docs/integrations/vectorstores/chroma", + "custom_fields": { + "collection_name": null, + "embedding": null, + "chroma_server_ssl_enabled": null, + "index_directory": null, + "inputs": null, + "chroma_server_cors_allow_origins": null, + "chroma_server_host": null, + "chroma_server_port": null, + "chroma_server_grpc_port": null + }, + "output_types": [ + "VectorStore", + "BaseRetriever" + ], + "field_formatters": {}, + "frozen": false, + "beta": true + }, + "id": "Chroma-aFGHF" + }, + "selected": true, + "width": 384, + "height": 495, + "positionAbsolute": { + "x": 1641.280676720732, + "y": 356.94961598422196 + }, + "dragging": false + }, + { + "id": "OpenAIEmbeddings-rbMk3", + "type": "genericNode", + "position": { + "x": 1053.9472627140208, + "y": -2.5921878249999963 + }, + "data": { + "type": "OpenAIEmbeddings", + "node": { + "template": { + "allowed_special": { + "type": "str", + "required": true, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "value": [], + "fileTypes": [], + "file_path": "", + "password": false, + "name": "allowed_special", + "display_name": "Allowed Special", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "chunk_size": { + "type": "int", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + 
"value": 1000, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "chunk_size", + "display_name": "Chunk Size", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "client": { + "type": "Any", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "client", + "display_name": "Client", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Any, Callable, Dict, List, Optional, Union\n\nfrom langchain_openai.embeddings.base import OpenAIEmbeddings\nfrom langflow import CustomComponent\nfrom langflow.field_typing import NestedDict\nfrom pydantic.v1.types import SecretStr\n\n\nclass OpenAIEmbeddingsComponent(CustomComponent):\n display_name = \"OpenAIEmbeddings\"\n description = \"OpenAI embedding models\"\n\n def build_config(self):\n return {\n \"allowed_special\": {\n \"display_name\": \"Allowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"default_headers\": {\n \"display_name\": \"Default Headers\",\n \"advanced\": True,\n \"field_type\": \"dict\",\n },\n \"default_query\": {\n \"display_name\": \"Default Query\",\n \"advanced\": True,\n \"field_type\": \"NestedDict\",\n },\n \"disallowed_special\": {\n \"display_name\": \"Disallowed Special\",\n \"advanced\": True,\n \"field_type\": \"str\",\n \"is_list\": True,\n },\n \"chunk_size\": {\"display_name\": \"Chunk Size\", \"advanced\": True},\n \"client\": {\"display_name\": \"Client\", \"advanced\": True},\n \"deployment\": {\"display_name\": \"Deployment\", \"advanced\": True},\n \"embedding_ctx_length\": {\n \"display_name\": \"Embedding Context Length\",\n \"advanced\": True,\n },\n \"max_retries\": {\"display_name\": \"Max Retries\", 
\"advanced\": True},\n \"model\": {\n \"display_name\": \"Model\",\n \"advanced\": False,\n \"options\": [\"text-embedding-3-small\", \"text-embedding-3-large\", \"text-embedding-ada-002\"],\n },\n \"model_kwargs\": {\"display_name\": \"Model Kwargs\", \"advanced\": True},\n \"openai_api_base\": {\"display_name\": \"OpenAI API Base\", \"password\": True, \"advanced\": True},\n \"openai_api_key\": {\"display_name\": \"OpenAI API Key\", \"password\": True},\n \"openai_api_type\": {\"display_name\": \"OpenAI API Type\", \"advanced\": True, \"password\": True},\n \"openai_api_version\": {\n \"display_name\": \"OpenAI API Version\",\n \"advanced\": True,\n },\n \"openai_organization\": {\n \"display_name\": \"OpenAI Organization\",\n \"advanced\": True,\n },\n \"openai_proxy\": {\"display_name\": \"OpenAI Proxy\", \"advanced\": True},\n \"request_timeout\": {\"display_name\": \"Request Timeout\", \"advanced\": True},\n \"show_progress_bar\": {\n \"display_name\": \"Show Progress Bar\",\n \"advanced\": True,\n },\n \"skip_empty\": {\"display_name\": \"Skip Empty\", \"advanced\": True},\n \"tiktoken_model_name\": {\"display_name\": \"TikToken Model Name\"},\n \"tikToken_enable\": {\"display_name\": \"TikToken Enable\", \"advanced\": True},\n }\n\n def build(\n self,\n default_headers: Optional[Dict[str, str]] = None,\n default_query: Optional[NestedDict] = {},\n allowed_special: List[str] = [],\n disallowed_special: List[str] = [\"all\"],\n chunk_size: int = 1000,\n client: Optional[Any] = None,\n deployment: str = \"text-embedding-3-small\",\n embedding_ctx_length: int = 8191,\n max_retries: int = 6,\n model: str = \"text-embedding-3-small\",\n model_kwargs: NestedDict = {},\n openai_api_base: Optional[str] = None,\n openai_api_key: Optional[str] = \"\",\n openai_api_type: Optional[str] = None,\n openai_api_version: Optional[str] = None,\n openai_organization: Optional[str] = None,\n openai_proxy: Optional[str] = None,\n request_timeout: Optional[float] = None,\n 
show_progress_bar: bool = False,\n skip_empty: bool = False,\n tiktoken_enable: bool = True,\n tiktoken_model_name: Optional[str] = None,\n ) -> Union[OpenAIEmbeddings, Callable]:\n # This is to avoid errors with Vector Stores (e.g Chroma)\n if disallowed_special == [\"all\"]:\n disallowed_special = \"all\" # type: ignore\n\n api_key = SecretStr(openai_api_key) if openai_api_key else None\n\n return OpenAIEmbeddings(\n tiktoken_enabled=tiktoken_enable,\n default_headers=default_headers,\n default_query=default_query,\n allowed_special=set(allowed_special),\n disallowed_special=\"all\",\n chunk_size=chunk_size,\n client=client,\n deployment=deployment,\n embedding_ctx_length=embedding_ctx_length,\n max_retries=max_retries,\n model=model,\n model_kwargs=model_kwargs,\n base_url=openai_api_base,\n api_key=api_key,\n openai_api_type=openai_api_type,\n api_version=openai_api_version,\n organization=openai_organization,\n openai_proxy=openai_proxy,\n timeout=request_timeout,\n show_progress_bar=show_progress_bar,\n skip_empty=skip_empty,\n tiktoken_model_name=tiktoken_model_name,\n )\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "default_headers": { + "type": "dict", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "default_headers", + "display_name": "Default Headers", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "default_query": { + "type": "NestedDict", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": {}, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "default_query", + "display_name": "Default Query", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "deployment": { + "type": "str", + 
"required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": "text-embedding-3-small", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "deployment", + "display_name": "Deployment", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "disallowed_special": { + "type": "str", + "required": true, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "value": [ + "all" + ], + "fileTypes": [], + "file_path": "", + "password": false, + "name": "disallowed_special", + "display_name": "Disallowed Special", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "embedding_ctx_length": { + "type": "int", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": 8191, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "embedding_ctx_length", + "display_name": "Embedding Context Length", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "max_retries": { + "type": "int", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": 6, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "max_retries", + "display_name": "Max Retries", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "model": { + "type": "str", + "required": true, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "value": "text-embedding-3-small", + "fileTypes": [], + "file_path": "", + "password": false, + "options": [ + "text-embedding-3-small", + "text-embedding-3-large", + "text-embedding-ada-002" + ], + "name": "model", + "display_name": "Model", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "model_kwargs": { + 
"type": "NestedDict", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": {}, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "model_kwargs", + "display_name": "Model Kwargs", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "openai_api_base": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": true, + "name": "openai_api_base", + "display_name": "OpenAI API Base", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ], + "value": "" + }, + "openai_api_key": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": "", + "fileTypes": [], + "file_path": "", + "password": true, + "name": "openai_api_key", + "display_name": "OpenAI API Key", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "openai_api_type": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": true, + "name": "openai_api_type", + "display_name": "OpenAI API Type", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ], + "value": "" + }, + "openai_api_version": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "openai_api_version", + "display_name": "OpenAI API Version", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "openai_organization": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": 
true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "openai_organization", + "display_name": "OpenAI Organization", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "openai_proxy": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "openai_proxy", + "display_name": "OpenAI Proxy", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "request_timeout": { + "type": "float", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "request_timeout", + "display_name": "Request Timeout", + "advanced": true, + "dynamic": false, + "info": "", + "rangeSpec": { + "min": -1, + "max": 1, + "step": 0.1 + }, + "title_case": false + }, + "show_progress_bar": { + "type": "bool", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "show_progress_bar", + "display_name": "Show Progress Bar", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "skip_empty": { + "type": "bool", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "skip_empty", + "display_name": "Skip Empty", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "tiktoken_enable": { + "type": "bool", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": true, + "fileTypes": [], + "file_path": "", + "password": false, + "name": 
"tiktoken_enable", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "tiktoken_model_name": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "tiktoken_model_name", + "display_name": "TikToken Model Name", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ] + }, + "_type": "CustomComponent" + }, + "description": "OpenAI embedding models", + "base_classes": [ + "Embeddings", + "OpenAIEmbeddings", + "Callable" + ], + "display_name": "OpenAIEmbeddings", + "documentation": "", + "custom_fields": { + "default_headers": null, + "default_query": null, + "allowed_special": null, + "disallowed_special": null, + "chunk_size": null, + "client": null, + "deployment": null, + "embedding_ctx_length": null, + "max_retries": null, + "model": null, + "model_kwargs": null, + "openai_api_base": null, + "openai_api_key": null, + "openai_api_type": null, + "openai_api_version": null, + "openai_organization": null, + "openai_proxy": null, + "request_timeout": null, + "show_progress_bar": null, + "skip_empty": null, + "tiktoken_enable": null, + "tiktoken_model_name": null + }, + "output_types": [ + "OpenAIEmbeddings", + "Callable" + ], + "field_formatters": {}, + "frozen": false, + "beta": true + }, + "id": "OpenAIEmbeddings-rbMk3" + }, + "selected": false, + "width": 384, + "height": 573, + "positionAbsolute": { + "x": 1053.9472627140208, + "y": -2.5921878249999963 + }, + "dragging": false + }, + { + "id": "URL-5zjQH", + "type": "genericNode", + "position": { + "x": 567.0838444398559, + "y": 596.6568151511171 + }, + "data": { + "type": "URL", + "node": { + "template": { + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Any, Dict\n\nfrom 
langchain_community.document_loaders.web_base import WebBaseLoader\n\nfrom langflow import CustomComponent\nfrom langflow.schema import Record\n\n\nclass URLComponent(CustomComponent):\n display_name = \"URL\"\n description = \"Load URLs and convert them to records.\"\n\n def build_config(self) -> Dict[str, Any]:\n return {\n \"urls\": {\"display_name\": \"URL\"},\n }\n\n async def build(\n self,\n urls: list[str],\n ) -> Record:\n\n loader = WebBaseLoader(web_paths=urls)\n docs = loader.load()\n records = self.to_records(docs)\n return records\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "urls": { + "type": "str", + "required": true, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "urls", + "display_name": "URL", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": [ + "Text" + ], + "value": [ + "https://raw.githubusercontent.com/logspace-ai/langflow/dev/CONTRIBUTING.md" + ] + }, + "_type": "CustomComponent" + }, + "description": "Load URLs and convert them to records.", + "base_classes": [ + "Record" + ], + "display_name": "URL", + "documentation": "", + "custom_fields": { + "urls": null + }, + "output_types": [ + "Record" + ], + "field_formatters": {}, + "frozen": false, + "beta": true + }, + "id": "URL-5zjQH" + }, + "selected": false, + "width": 384, + "height": 289, + "dragging": false, + "positionAbsolute": { + "x": 567.0838444398559, + "y": 596.6568151511171 + } + } + ], + "edges": [ + { + "source": "RecursiveCharacterTextSplitter-jwfyG", + "sourceHandle": "{œbaseClassesœ:[œRecordœ],œdataTypeœ:œRecursiveCharacterTextSplitterœ,œidœ:œRecursiveCharacterTextSplitter-jwfyGœ}", + "target": "Chroma-aFGHF", + "targetHandle": 
"{œfieldNameœ:œinputsœ,œidœ:œChroma-aFGHFœ,œinputTypesœ:[œDocumentœ,œRecordœ],œtypeœ:œRecordœ}", + "data": { + "targetHandle": { + "fieldName": "inputs", + "id": "Chroma-aFGHF", + "inputTypes": [ + "Document", + "Record" + ], + "type": "Record" + }, + "sourceHandle": { + "baseClasses": [ + "Record" + ], + "dataType": "RecursiveCharacterTextSplitter", + "id": "RecursiveCharacterTextSplitter-jwfyG" + } + }, + "style": { + "stroke": "#555" + }, + "className": "stroke-gray-900 stroke-connection", + "id": "reactflow__edge-RecursiveCharacterTextSplitter-jwfyG{œbaseClassesœ:[œRecordœ],œdataTypeœ:œRecursiveCharacterTextSplitterœ,œidœ:œRecursiveCharacterTextSplitter-jwfyGœ}-Chroma-aFGHF{œfieldNameœ:œinputsœ,œidœ:œChroma-aFGHFœ,œinputTypesœ:[œDocumentœ,œRecordœ],œtypeœ:œRecordœ}" + }, + { + "source": "OpenAIEmbeddings-rbMk3", + "sourceHandle": "{œbaseClassesœ:[œEmbeddingsœ,œOpenAIEmbeddingsœ,œCallableœ],œdataTypeœ:œOpenAIEmbeddingsœ,œidœ:œOpenAIEmbeddings-rbMk3œ}", + "target": "Chroma-aFGHF", + "targetHandle": "{œfieldNameœ:œembeddingœ,œidœ:œChroma-aFGHFœ,œinputTypesœ:null,œtypeœ:œEmbeddingsœ}", + "data": { + "targetHandle": { + "fieldName": "embedding", + "id": "Chroma-aFGHF", + "inputTypes": null, + "type": "Embeddings" + }, + "sourceHandle": { + "baseClasses": [ + "Embeddings", + "OpenAIEmbeddings", + "Callable" + ], + "dataType": "OpenAIEmbeddings", + "id": "OpenAIEmbeddings-rbMk3" + } + }, + "style": { + "stroke": "#555" + }, + "className": "stroke-gray-900 stroke-connection", + "id": "reactflow__edge-OpenAIEmbeddings-rbMk3{œbaseClassesœ:[œEmbeddingsœ,œOpenAIEmbeddingsœ,œCallableœ],œdataTypeœ:œOpenAIEmbeddingsœ,œidœ:œOpenAIEmbeddings-rbMk3œ}-Chroma-aFGHF{œfieldNameœ:œembeddingœ,œidœ:œChroma-aFGHFœ,œinputTypesœ:null,œtypeœ:œEmbeddingsœ}" + }, + { + "source": "URL-5zjQH", + "sourceHandle": "{œbaseClassesœ:[œRecordœ],œdataTypeœ:œURLœ,œidœ:œURL-5zjQHœ}", + "target": "RecursiveCharacterTextSplitter-jwfyG", + "targetHandle": 
"{œfieldNameœ:œinputsœ,œidœ:œRecursiveCharacterTextSplitter-jwfyGœ,œinputTypesœ:[œDocumentœ,œRecordœ],œtypeœ:œDocumentœ}", + "data": { + "targetHandle": { + "fieldName": "inputs", + "id": "RecursiveCharacterTextSplitter-jwfyG", + "inputTypes": [ + "Document", + "Record" + ], + "type": "Document" + }, + "sourceHandle": { + "baseClasses": [ + "Record" + ], + "dataType": "URL", + "id": "URL-5zjQH" + } + }, + "style": { + "stroke": "#555" + }, + "className": "stroke-foreground stroke-connection", + "id": "reactflow__edge-URL-5zjQH{œbaseClassesœ:[œRecordœ],œdataTypeœ:œURLœ,œidœ:œURL-5zjQHœ}-RecursiveCharacterTextSplitter-jwfyG{œfieldNameœ:œinputsœ,œidœ:œRecursiveCharacterTextSplitter-jwfyGœ,œinputTypesœ:[œDocumentœ,œRecordœ],œtypeœ:œDocumentœ}" + } + ], + "viewport": { + "x": -160.3219973143573, + "y": 117.63775645863632, + "zoom": 0.48903173672366845 + } + }, + "is_component": false, + "updated_at": "2024-03-05T21:59:59.738081", + "folder": null, + "id": "7f90dc54-717d-49fe-a43f-c4dc055daa4e", + "user_id": "9365dbda-e8cf-4e95-8c84-49f8b6edb44f" +} \ No newline at end of file diff --git a/src/backend/langflow/interface/custom/attributes.py b/src/backend/langflow/interface/custom/attributes.py index 9b91af43c..7bcfb5f4b 100644 --- a/src/backend/langflow/interface/custom/attributes.py +++ b/src/backend/langflow/interface/custom/attributes.py @@ -37,7 +37,7 @@ ATTR_FUNC_MAPPING: dict[str, Callable] = { "beta": getattr_return_bool, "documentation": getattr_return_str, "icon": validate_icon, - "pinned": getattr_return_bool, + "frozen": getattr_return_bool, "is_input": getattr_return_bool, "is_output": getattr_return_bool, } diff --git a/src/backend/langflow/interface/custom/code_parser/code_parser.py b/src/backend/langflow/interface/custom/code_parser/code_parser.py index 258d2fa6b..1b4bfaf39 100644 --- a/src/backend/langflow/interface/custom/code_parser/code_parser.py +++ b/src/backend/langflow/interface/custom/code_parser/code_parser.py @@ -9,7 +9,11 @@ from fastapi import 
HTTPException from loguru import logger from langflow.interface.custom.eval import eval_custom_component_code -from langflow.interface.custom.schema import CallableCodeDetails, ClassCodeDetails +from langflow.interface.custom.schema import ( + CallableCodeDetails, + ClassCodeDetails, + MissingDefault, +) class CodeSyntaxError(HTTPException): @@ -95,7 +99,9 @@ class CodeParser: elif isinstance(node, ast.ImportFrom): for alias in node.names: if alias.asname: - self.data["imports"].append((node.module, f"{alias.name} as {alias.asname}")) + self.data["imports"].append( + (node.module, f"{alias.name} as {alias.asname}") + ) else: self.data["imports"].append((node.module, alias.name)) @@ -144,7 +150,9 @@ class CodeParser: return_type = None if node.returns: return_type_str = ast.unparse(node.returns) - eval_env = self.construct_eval_env(return_type_str, tuple(self.data["imports"])) + eval_env = self.construct_eval_env( + return_type_str, tuple(self.data["imports"]) + ) try: return_type = eval(return_type_str, eval_env) @@ -174,7 +182,7 @@ class CodeParser: args += self.parse_keyword_args(node) # Commented out because we don't want kwargs # showing up as fields in the frontend - # args += self.parse_kwargs(node) + args += self.parse_kwargs(node) return args @@ -185,15 +193,23 @@ class CodeParser: num_args = len(node.args.args) num_defaults = len(node.args.defaults) num_missing_defaults = num_args - num_defaults - missing_defaults = [None] * num_missing_defaults - default_values = [ast.unparse(default).strip("'") if default else None for default in node.args.defaults] + missing_defaults = [MissingDefault()] * num_missing_defaults + default_values = [ + ast.unparse(default).strip("'") if default else None + for default in node.args.defaults + ] # Now check all default values to see if there # are any "None" values in the middle - default_values = [None if value == "None" else value for value in default_values] + default_values = [ + None if value == "None" else value for 
value in default_values + ] defaults = missing_defaults + default_values - args = [self.parse_arg(arg, default) for arg, default in zip(node.args.args, defaults)] + args = [ + self.parse_arg(arg, default) + for arg, default in zip(node.args.args, defaults) + ] return args def parse_varargs(self, node: ast.FunctionDef) -> List[Dict[str, Any]]: @@ -211,11 +227,17 @@ class CodeParser: """ Parses the keyword-only arguments of a function or method node. """ - kw_defaults = [None] * (len(node.args.kwonlyargs) - len(node.args.kw_defaults)) + [ - ast.unparse(default) if default else None for default in node.args.kw_defaults + kw_defaults = [None] * ( + len(node.args.kwonlyargs) - len(node.args.kw_defaults) + ) + [ + ast.unparse(default) if default else None + for default in node.args.kw_defaults ] - args = [self.parse_arg(arg, default) for arg, default in zip(node.args.kwonlyargs, kw_defaults)] + args = [ + self.parse_arg(arg, default) + for arg, default in zip(node.args.kwonlyargs, kw_defaults) + ] return args def parse_kwargs(self, node: ast.FunctionDef) -> List[Dict[str, Any]]: @@ -319,7 +341,9 @@ class CodeParser: Extracts global variables from the code. """ global_var = { - "targets": [t.id if hasattr(t, "id") else ast.dump(t) for t in node.targets], + "targets": [ + t.id if hasattr(t, "id") else ast.dump(t) for t in node.targets + ], "value": ast.unparse(node.value), } self.data["global_vars"].append(global_var) diff --git a/src/backend/langflow/interface/custom/custom_component/component.py b/src/backend/langflow/interface/custom/custom_component/component.py index a889fa7b9..ce40b0f74 100644 --- a/src/backend/langflow/interface/custom/custom_component/component.py +++ b/src/backend/langflow/interface/custom/custom_component/component.py @@ -21,9 +21,7 @@ class ComponentFunctionEntrypointNameNullError(HTTPException): class Component: ERROR_CODE_NULL: ClassVar[str] = "Python code must be provided." 
-    ERROR_FUNCTION_ENTRYPOINT_NAME_NULL: ClassVar[str] = (
-        "The name of the entrypoint function must be provided."
-    )
+    ERROR_FUNCTION_ENTRYPOINT_NAME_NULL: ClassVar[str] = "The name of the entrypoint function must be provided."
 
     code: Optional[str] = None
     _function_entrypoint_name: str = "build"
diff --git a/src/backend/langflow/interface/custom/custom_component/custom_component.py b/src/backend/langflow/interface/custom/custom_component/custom_component.py
index d2087dd2e..56febbf48 100644
--- a/src/backend/langflow/interface/custom/custom_component/custom_component.py
+++ b/src/backend/langflow/interface/custom/custom_component/custom_component.py
@@ -14,7 +14,6 @@
 from uuid import UUID
 
 import yaml
 from cachetools import TTLCache, cachedmethod
-from fastapi import HTTPException
 from langchain_core.documents import Document
 from pydantic import BaseModel
 from sqlmodel import select
@@ -25,6 +24,7 @@ from langflow.interface.custom.code_parser.utils import (
 )
 from langflow.interface.custom.custom_component.component import Component
 from langflow.schema import Record
+from langflow.schema.dotdict import dotdict
 from langflow.services.database.models.flow import Flow
 from langflow.services.database.utils import session_getter
 from langflow.services.deps import (
@@ -59,8 +59,8 @@ class CustomComponent(Component):
     """The field configuration of the component. Defaults to an empty dictionary."""
     field_order: Optional[List[str]] = None
     """The field order of the component. Defaults to an empty list."""
-    pinned: Optional[bool] = False
-    """The default pinned state of the component. Defaults to False."""
+    frozen: Optional[bool] = False
+    """The default frozen state of the component. Defaults to False."""
     build_parameters: Optional[dict] = None
     """The build parameters of the component. Defaults to None."""
     selected_output_type: Optional[str] = None
@@ -76,6 +76,28 @@ class CustomComponent(Component):
     """The status of the component. This is displayed on the frontend.
     Defaults to None."""
     _flows_records: Optional[List[Record]] = None
 
+    def update_state(self, name: str, value: Any):
+        try:
+            self.vertex.graph.update_state(
+                name=name, record=value, caller=self.vertex.id
+            )
+        except Exception as e:
+            raise ValueError(f"Error updating state: {e}")
+
+    def append_state(self, name: str, value: Any):
+        try:
+            self.vertex.graph.append_state(
+                name=name, record=value, caller=self.vertex.id
+            )
+        except Exception as e:
+            raise ValueError(f"Error appending state: {e}")
+
+    def get_state(self, name: str):
+        try:
+            return self.vertex.graph.get_state(name=name)
+        except Exception as e:
+            raise ValueError(f"Error getting state: {e}")
+
     _tree: Optional[dict] = None
 
     def __init__(self, **data):
@@ -117,12 +139,21 @@
     def build_config(self):
         return self.field_config
 
+    def update_build_config(
+        self,
+        build_config: dotdict,
+        field_name: str,
+        field_value: Any,
+    ):
+        build_config[field_name] = field_value
+        return build_config
+
     @property
     def tree(self):
         return self.get_code_tree(self.code or "")
 
     def to_records(
-        self, data: Any, text_key: str = "text", data_key: str = "data"
+        self, data: Any, keys: Optional[List[str]] = None, silent_errors: bool = False
     ) -> List[Record]:
         """
         Converts input data into a list of Record objects.
@@ -131,8 +162,9 @@
             data (Any): The input data to be converted. It can be a single item or a sequence of items.
             If the input data is a Langchain Document, text_key and data_key are ignored.
-            text_key (str, optional): The key to access the text value in each item. Defaults to "text".
-            data_key (str, optional): The key to access the data value in each item. Defaults to "data".
+            keys (List[str], optional): The keys to access the text and data values in each item.
+                It should be a list of strings where the first element is the text key and the second element is the data key.
+ Defaults to None, in which case the default keys "text" and "data" are used. Returns: List[Record]: A list of Record objects. @@ -145,27 +177,29 @@ class CustomComponent(Component): if not isinstance(data, Sequence): data = [data] for item in data: + data_dict = {} if isinstance(item, Document): - item = {"text": item.page_content, "data": item.metadata} + data_dict = item.metadata + data_dict["text"] = item.page_content elif isinstance(item, BaseModel): model_dump = item.model_dump() - if text_key not in model_dump: - raise ValueError(f"Key '{text_key}' not found in BaseModel item.") - if data_key not in model_dump: - raise ValueError(f"Key '{data_key}' not found in BaseModel item.") - item = {"text": model_dump[text_key], "data": model_dump[data_key]} + for key in keys: + if silent_errors: + data_dict[key] = model_dump.get(key, "") + else: + try: + data_dict[key] = model_dump[key] + except KeyError: + raise ValueError(f"Key {key} not found in {item}") + elif isinstance(item, str): - item = {"text": item, "data": {}} + data_dict = {"text": item} elif isinstance(item, dict): - if text_key not in item: - raise ValueError(f"Key '{text_key}' not found in dictionary item.") - if data_key not in item: - raise ValueError(f"Key '{data_key}' not found in dictionary item.") - item = {"text": item[text_key], "data": item[data_key]} + data_dict = item.copy() else: raise ValueError(f"Invalid data type: {type(item)}") - records.append(Record(**item)) + records.append(Record(data=data_dict)) return records @@ -200,18 +234,7 @@ class CustomComponent(Component): args = build_method["args"] for arg in args: - if arg.get("type") == "prompt": - raise HTTPException( - status_code=400, - detail={ - "error": "Type hint Error", - "traceback": ( - "Prompt type is not supported in the build method." - " Try using PromptTemplate instead." 
- ), - }, - ) - elif not arg.get("type") and arg.get("name") != "self": + if not arg.get("type") and arg.get("name") != "self": # Set the type to Data arg["type"] = "Data" return args @@ -372,7 +395,7 @@ class CustomComponent(Component): flows = session.exec( select(Flow) .where(Flow.user_id == self._user_id) - .where(Flow.is_component == False) + .where(Flow.is_component == False) # noqa ).all() flows_records = [flow.to_record() for flow in flows] diff --git a/src/backend/langflow/interface/custom/directory_reader/directory_reader.py b/src/backend/langflow/interface/custom/directory_reader/directory_reader.py index 448e3c485..5acc15131 100644 --- a/src/backend/langflow/interface/custom/directory_reader/directory_reader.py +++ b/src/backend/langflow/interface/custom/directory_reader/directory_reader.py @@ -80,13 +80,9 @@ class DirectoryReader: except Exception as e: logger.error(f"Error while loading component: {e}") continue - items.append( - {"name": menu["name"], "path": menu["path"], "components": components} - ) + items.append({"name": menu["name"], "path": menu["path"], "components": components}) filtered = [menu for menu in items if menu["components"]] - logger.debug( - f'Filtered components {"with errors" if with_errors else ""}: {len(filtered)}' - ) + logger.debug(f'Filtered components {"with errors" if with_errors else ""}: {len(filtered)}') return {"menu": filtered} def validate_code(self, file_content): @@ -119,9 +115,7 @@ class DirectoryReader: Walk through the directory path and return a list of all .py files. """ if not (safe_path := self.get_safe_path()): - raise CustomComponentPathValueError( - f"The path needs to start with '{self.base_path}'." 
- ) + raise CustomComponentPathValueError(f"The path needs to start with '{self.base_path}'.") file_list = [] safe_path_obj = Path(safe_path) @@ -131,11 +125,7 @@ class DirectoryReader: # any folders below [folder] will be ignored # basically the parent folder of the file should be a # folder in the safe_path - if ( - file_path.is_file() - and file_path.parent.parent == safe_path_obj - and not file_path.name.startswith("__") - ): + if file_path.is_file() and file_path.parent.parent == safe_path_obj and not file_path.name.startswith("__"): file_list.append(str(file_path)) return file_list @@ -173,9 +163,7 @@ class DirectoryReader: for node in ast.walk(module): if isinstance(node, ast.FunctionDef): for arg in node.args.args: - if self._is_type_hint_in_arg_annotation( - arg.annotation, type_hint_name - ): + if self._is_type_hint_in_arg_annotation(arg.annotation, type_hint_name): return True except SyntaxError: # Returns False if the code is not valid Python @@ -193,16 +181,14 @@ class DirectoryReader: and annotation.value.id == type_hint_name ) - def is_type_hint_used_but_not_imported( - self, type_hint_name: str, code: str - ) -> bool: + def is_type_hint_used_but_not_imported(self, type_hint_name: str, code: str) -> bool: """ Check if a type hint is used but not imported in the given code. 
""" try: - return self._is_type_hint_used_in_args( + return self._is_type_hint_used_in_args(type_hint_name, code) and not self._is_type_hint_imported( type_hint_name, code - ) and not self._is_type_hint_imported(type_hint_name, code) + ) except SyntaxError: # Returns True if there's something wrong with the code # TODO : Find a better way to handle this @@ -223,9 +209,9 @@ class DirectoryReader: return False, "Syntax error" elif not self.validate_build(file_content): return False, "Missing build function" - elif self._is_type_hint_used_in_args( + elif self._is_type_hint_used_in_args("Optional", file_content) and not self._is_type_hint_imported( "Optional", file_content - ) and not self._is_type_hint_imported("Optional", file_content): + ): return ( False, "Type hint 'Optional' is used but not imported in the code.", @@ -241,18 +227,14 @@ class DirectoryReader: from the .py files in the directory. """ response = {"menu": []} - logger.debug( - "-------------------- Building component menu list --------------------" - ) + logger.debug("-------------------- Building component menu list --------------------") for file_path in file_paths: menu_name = os.path.basename(os.path.dirname(file_path)) filename = os.path.basename(file_path) validation_result, result_content = self.process_file(file_path) if not validation_result: - logger.error( - f"Error while processing file {file_path}: {result_content}" - ) + logger.error(f"Error while processing file {file_path}: {result_content}") menu_result = self.find_menu(response, menu_name) or { "name": menu_name, @@ -265,9 +247,7 @@ class DirectoryReader: # first check if it's already CamelCase if "_" in component_name: - component_name_camelcase = " ".join( - word.title() for word in component_name.split("_") - ) + component_name_camelcase = " ".join(word.title() for word in component_name.split("_")) else: component_name_camelcase = component_name @@ -275,9 +255,7 @@ class DirectoryReader: try: output_types = 
self.get_output_types_from_code(result_content) except Exception as exc: - logger.exception( - f"Error while getting output types from code: {str(exc)}" - ) + logger.exception(f"Error while getting output types from code: {str(exc)}") output_types = [component_name_camelcase] else: output_types = [component_name_camelcase] @@ -293,9 +271,7 @@ class DirectoryReader: if menu_result not in response["menu"]: response["menu"].append(menu_result) - logger.debug( - "-------------------- Component menu list built --------------------" - ) + logger.debug("-------------------- Component menu list built --------------------") return response @staticmethod diff --git a/src/backend/langflow/interface/custom/schema.py b/src/backend/langflow/interface/custom/schema.py index 7c5975150..1636882ef 100644 --- a/src/backend/langflow/interface/custom/schema.py +++ b/src/backend/langflow/interface/custom/schema.py @@ -27,3 +27,12 @@ class CallableCodeDetails(BaseModel): body: list return_type: Optional[Any] = None has_return: bool = False + + +class MissingDefault: + """ + A class to represent a missing default value. 
+    """
+
+    def __repr__(self):
+        return "MISSING"
diff --git a/src/backend/langflow/interface/custom/utils.py b/src/backend/langflow/interface/custom/utils.py
index 03fdeb508..dd57a8b25 100644
--- a/src/backend/langflow/interface/custom/utils.py
+++ b/src/backend/langflow/interface/custom/utils.py
@@ -8,6 +8,7 @@ from uuid import UUID
 
 from fastapi import HTTPException
 from loguru import logger
+from pydantic import BaseModel
 
 from langflow.field_typing.range_spec import RangeSpec
 from langflow.interface.custom.attributes import ATTR_FUNC_MAPPING
@@ -19,6 +20,8 @@ from langflow.interface.custom.directory_reader.utils import (
     merge_nested_dicts_with_renaming,
 )
 from langflow.interface.custom.eval import eval_custom_component_code
+from langflow.interface.custom.schema import MissingDefault
+from langflow.schema import dotdict
 from langflow.template.field.base import TemplateField
 from langflow.template.frontend_node.custom_components import (
     CustomComponentFrontendNode,
@@ -27,7 +30,13 @@ from langflow.utils import validate
 from langflow.utils.util import get_base_classes
 
 
-def add_output_types(frontend_node: CustomComponentFrontendNode, return_types: List[str]):
+class UpdateBuildConfigError(Exception):
+    pass
+
+
+def add_output_types(
+    frontend_node: CustomComponentFrontendNode, return_types: List[str]
+):
     """Add output types to the frontend node"""
     for return_type in return_types:
         if return_type is None:
@@ -63,6 +72,7 @@
         if field.name not in field_order:
             reordered_fields.append(field)
     frontend_node.template.fields = reordered_fields
+    frontend_node.field_order = field_order
 
 
 def add_base_classes(frontend_node: CustomComponentFrontendNode, return_types: List[str]):
@@ -96,7 +106,7 @@
         str: The extracted type, or an empty string if no type was found.
     """
     match = re.search(r"\[(.*?)\]$", field_type)
-    return match[1] if match else None
+    return match[1] if match else field_type
 
 
 def get_field_properties(extra_field):
@@ -104,7 +114,13 @@
     field_name = extra_field["name"]
     field_type = extra_field.get("type", "str")
     field_value = extra_field.get("default", "")
-    field_required = "optional" not in field_type.lower()
+    # a required field is a field that does not contain
+    # optional in field_type
+    # and a field that does not have a default value
+    field_required = "optional" not in field_type.lower() and isinstance(
+        field_value, MissingDefault
+    )
+    field_value = field_value if not isinstance(field_value, MissingDefault) else None
 
     if not field_required:
         field_type = extract_type_from_optional(field_type)
@@ -149,7 +165,9 @@
     # If options is a list, then it's a dropdown
     # If options is None, then it's a list of strings
     is_list = isinstance(field_config.get("options"), list)
-    field_config["is_list"] = is_list or field_config.get("is_list", False) or field_contains_list
+    field_config["is_list"] = (
+        is_list or field_config.get("list", False) or field_contains_list
+    )
 
     if "name" in field_config:
         warnings.warn("The 'name' key in field_config is used to build the object and can't be changed.")
@@ -178,13 +196,23 @@
     """Add extra fields to the frontend node"""
     if not function_args:
         return
+    _field_config = field_config.copy()
+    function_args_names = [arg["name"] for arg in function_args]
+    # If kwargs is in the function_args and not all field_config keys are in function_args
+    # then we need to add the extra fields
     for extra_field in function_args:
-        if "name" not in extra_field or extra_field["name"] == "self":
+        if "name" not in extra_field or extra_field["name"] in [
+            "self",
+            "kwargs",
+            "args",
+        ]:
             continue
-        field_name, field_type, field_value, field_required =
get_field_properties(extra_field) - config = field_config.get(field_name, {}) + field_name, field_type, field_value, field_required = get_field_properties( + extra_field + ) + config = _field_config.pop(field_name, {}) frontend_node = add_new_custom_field( frontend_node, field_name, @@ -193,12 +221,31 @@ def add_extra_fields(frontend_node, field_config, function_args): field_required, config, ) + if "kwargs" in function_args_names and not all( + key in function_args_names for key in field_config.keys() + ): + for field_name, field_config in _field_config.copy().items(): + if "name" not in field_config or field_name == "code": + continue + config = _field_config.get(field_name, {}) + config = config.model_dump() if isinstance(config, BaseModel) else config + field_name, field_type, field_value, field_required = get_field_properties( + extra_field=config + ) + frontend_node = add_new_custom_field( + frontend_node, + field_name, + field_type, + field_value, + field_required, + config, + ) def get_field_dict(field: Union[TemplateField, dict]): """Get the field dictionary from a TemplateField or a dict""" if isinstance(field, TemplateField): - return field.model_dump(by_alias=True, exclude_none=True) + return dotdict(field.model_dump(by_alias=True, exclude_none=True)) return field @@ -206,6 +253,7 @@ def run_build_config( custom_component: CustomComponent, user_id: Optional[Union[str, UUID]] = None, update_field=None, + update_field_value=None, ): """Build the field configuration for a custom component""" @@ -230,31 +278,53 @@ def run_build_config( custom_instance = custom_class(user_id=user_id) build_config: Dict = custom_instance.build_config() - for field_name, field in build_config.items(): + for field_name, field in build_config.copy().items(): # Allow user to build TemplateField as well # as a dict with the same keys as TemplateField field_dict = get_field_dict(field) + build_config[field_name] = field_dict # This has to be done to set refresh if options or value 
are callable - update_field_dict(field_dict) if update_field is not None and field_name != update_field: + build_config = update_field_dict( + custom_component_instance=custom_instance, + field_dict=field_dict, + build_config=build_config, + call=False, + ) continue try: - update_field_dict(field_dict, call=True) + build_config = update_field_dict( + custom_component_instance=custom_instance, + field_dict=field_dict, + build_config=build_config, + update_field=update_field, + update_field_value=update_field_value, + call=True, + ) build_config[field_name] = field_dict except Exception as exc: logger.error(f"Error while getting build_config: {str(exc)}") + if isinstance(exc, UpdateBuildConfigError): + message = str(exc) + else: + message = f"Error while getting build_config: {str(exc)}" + raise HTTPException( + status_code=400, + detail={ + "error": message, + "traceback": traceback.format_exc(), + }, + ) from exc return build_config, custom_instance except Exception as exc: + logger.error(f"Error while building field config: {str(exc)}") - raise HTTPException( - status_code=400, - detail={ - "error": ("Invalid type convertion. 
Please check your code and try again."), - "traceback": traceback.format_exc(), - }, - ) from exc + if hasattr(exc, "detail") and "traceback" in exc.detail: + logger.error(exc.detail["traceback"]) + + raise exc def sanitize_template_config(template_config): @@ -278,6 +348,7 @@ def build_frontend_node(template_config): def add_code_field(frontend_node: CustomComponentFrontendNode, raw_code, field_config): + code_field = TemplateField( dynamic=True, required=True, @@ -286,7 +357,7 @@ def add_code_field(frontend_node: CustomComponentFrontendNode, raw_code, field_c value=raw_code, password=False, name="code", - advanced=field_config.pop("advanced", False), + advanced=True, field_type="code", is_list=False, ) @@ -299,12 +370,18 @@ def build_custom_component_template( custom_component: CustomComponent, user_id: Optional[Union[str, UUID]] = None, update_field: Optional[str] = None, + update_field_value: Optional[str] = None, ) -> Optional[Dict[str, Any]]: """Build a custom component template for the langchain""" try: frontend_node = build_frontend_node(custom_component.template_config) - field_config, custom_instance = run_build_config(custom_component, user_id=user_id, update_field=update_field) + field_config, custom_instance = run_build_config( + custom_component, + user_id=user_id, + update_field=update_field, + update_field_value=update_field_value, + ) entrypoint_args = custom_component.get_function_entrypoint_args @@ -324,7 +401,9 @@ def build_custom_component_template( raise HTTPException( status_code=400, detail={ - "error": ("Invalid type convertion. Please check your code and try again."), + "error": ( + f"Something went wrong while building the custom component. 
Hints: {str(exc)}" + ), "traceback": traceback.format_exc(), }, ) from exc @@ -334,7 +413,6 @@ def create_component_template(component): """Create a template for a component.""" component_code = component["code"] component_output_types = component["output_types"] - # remove component_extractor = CustomComponent(code=component_code) @@ -345,15 +423,15 @@ def create_component_template(component): return component_template -def build_custom_components(settings_service): +def build_custom_components(components_paths: List[str]): """Build custom components from the specified paths.""" - if not settings_service.settings.COMPONENTS_PATH: + if not components_paths: return {} - logger.info(f"Building custom components from {settings_service.settings.COMPONENTS_PATH}") + logger.info(f"Building custom components from {components_paths}") custom_components_from_file = {} processed_paths = set() - for path in settings_service.settings.COMPONENTS_PATH: + for path in components_paths: path_str = str(path) if path_str in processed_paths: continue @@ -370,26 +448,44 @@ def build_custom_components(settings_service): return custom_components_from_file -def update_field_dict(field_dict, call=False): +def update_field_dict( + custom_component_instance: "CustomComponent", + field_dict: Dict, + build_config: Dict, + update_field: Optional[str] = None, + update_field_value: Optional[Any] = None, + call: bool = False, +): """Update the field dictionary by calling options() or value() if they are callable""" - if "options" in field_dict and callable(field_dict["options"]): + if ("real_time_refresh" in field_dict or "refresh_button" in field_dict) and any( + ( + field_dict.get("real_time_refresh", False), + field_dict.get("refresh_button", False), + ) + ): if call: - field_dict["options"] = field_dict["options"]() - # Also update the "refresh" key - field_dict["refresh"] = True - - if "value" in field_dict and callable(field_dict["value"]): - if call: - field_dict["value"] = 
field_dict["value"]() - field_dict["refresh"] = True + try: + dd_build_config = dotdict(build_config) + custom_component_instance.update_build_config( + dd_build_config, update_field, update_field_value + ) + build_config = dd_build_config + except Exception as exc: + logger.error(f"Error while running update_build_config: {str(exc)}") + raise UpdateBuildConfigError( + f"Error while running update_build_config: {str(exc)}" + ) from exc # Let's check if "range_spec" is a RangeSpec object if "rangeSpec" in field_dict and isinstance(field_dict["rangeSpec"], RangeSpec): field_dict["rangeSpec"] = field_dict["rangeSpec"].model_dump() + return build_config -def sanitize_field_config(field_config: Dict): +def sanitize_field_config(field_config: Union[Dict, TemplateField]): # If any of the already existing keys are in field_config, remove them + if isinstance(field_config, TemplateField): + field_config = field_config.to_dict() for key in [ "name", "field_type", diff --git a/src/backend/langflow/interface/initialize/loading.py b/src/backend/langflow/interface/initialize/loading.py index d9b678639..fd6f27db6 100644 --- a/src/backend/langflow/interface/initialize/loading.py +++ b/src/backend/langflow/interface/initialize/loading.py @@ -30,6 +30,7 @@ from langflow.interface.retrievers.base import retriever_creator from langflow.interface.toolkits.base import toolkits_creator from langflow.interface.utils import load_file_into_dict from langflow.interface.wrappers.base import wrapper_creator +from langflow.schema.schema import Record from langflow.utils import validate if TYPE_CHECKING: @@ -165,8 +166,10 @@ async def instantiate_custom_component(node_type, class_object, params, user_id, else: # Call the build method directly if it's sync build_result = custom_component.build(**params_copy) - - return custom_component, build_result, {"repr": custom_component.custom_repr()} + custom_repr = custom_component.custom_repr() + if not custom_repr and isinstance(build_result, (dict, 
Record, str)): + custom_repr = build_result + return custom_component, build_result, {"repr": custom_repr} def instantiate_wrapper(node_type, class_object, params): diff --git a/src/backend/langflow/interface/listing.py b/src/backend/langflow/interface/listing.py index 79a7335d4..a831f1098 100644 --- a/src/backend/langflow/interface/listing.py +++ b/src/backend/langflow/interface/listing.py @@ -21,7 +21,7 @@ class AllTypesDict(LazyLoadDictBase): from langflow.interface.types import get_all_types_dict settings_service = get_settings_service() - return get_all_types_dict(settings_service=settings_service) + return get_all_types_dict(settings_service.settings.COMPONENTS_PATH) lazy_load_dict = AllTypesDict() diff --git a/src/backend/langflow/interface/types.py b/src/backend/langflow/interface/types.py index 3dcddf99f..4908a60e3 100644 --- a/src/backend/langflow/interface/types.py +++ b/src/backend/langflow/interface/types.py @@ -2,7 +2,9 @@ from cachetools import LRUCache, cached from langflow.interface.agents.base import agent_creator from langflow.interface.chains.base import chain_creator -from langflow.interface.custom.directory_reader.utils import merge_nested_dicts_with_renaming +from langflow.interface.custom.directory_reader.utils import ( + merge_nested_dicts_with_renaming, +) from langflow.interface.custom.utils import build_custom_components from langflow.interface.document_loaders.base import documentloader_creator from langflow.interface.embeddings.base import embedding_creator @@ -13,7 +15,6 @@ from langflow.interface.retrievers.base import retriever_creator from langflow.interface.text_splitters.base import textsplitter_creator from langflow.interface.toolkits.base import toolkits_creator from langflow.interface.tools.base import tool_creator -from langflow.interface.utilities.base import utility_creator from langflow.interface.wrappers.base import wrapper_creator @@ -48,7 +49,7 @@ def build_langchain_types_dict(): # sourcery skip: 
dict-assign-update-to-union # vectorstore_creator, documentloader_creator, textsplitter_creator, - utility_creator, + # utility_creator, output_parser_creator, retriever_creator, ] @@ -62,8 +63,26 @@ def build_langchain_types_dict(): # sourcery skip: dict-assign-update-to-union return all_types -def get_all_types_dict(settings_service): +def get_all_types_dict(components_paths): """Get all types dictionary combining native and custom components.""" native_components = build_langchain_types_dict() - custom_components_from_file = build_custom_components(settings_service) - return merge_nested_dicts_with_renaming(native_components, custom_components_from_file) + custom_components_from_file = build_custom_components( + components_paths=components_paths + ) + return merge_nested_dicts_with_renaming( + native_components, custom_components_from_file + ) + + +def get_all_components(components_paths, as_dict=False): + """Get all components names combining native and custom components.""" + all_types_dict = get_all_types_dict(components_paths) + components = [] if not as_dict else {} + for category in all_types_dict.values(): + for component in category.values(): + component["name"] = component["display_name"] + if as_dict: + components[component["name"]] = component + else: + components.append(component) + return components diff --git a/src/backend/langflow/interface/utils.py b/src/backend/langflow/interface/utils.py index 30c55f1ef..8e7f476f5 100644 --- a/src/backend/langflow/interface/utils.py +++ b/src/backend/langflow/interface/utils.py @@ -43,9 +43,7 @@ def try_setting_streaming_options(langchain_object): llm = None if hasattr(langchain_object, "llm"): llm = langchain_object.llm - elif hasattr(langchain_object, "llm_chain") and hasattr( - langchain_object.llm_chain, "llm" - ): + elif hasattr(langchain_object, "llm_chain") and hasattr(langchain_object.llm_chain, "llm"): llm = langchain_object.llm_chain.llm if isinstance(llm, BaseLanguageModel): @@ -71,9 +69,7 @@ def 
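The new `get_all_components` flattens the nested `{category: {key: component}}` structure into a flat list, or into a dict keyed by display name when `as_dict=True`. A sketch of the same loop, assuming each component dict carries a `display_name` key:

```python
def get_all_components(all_types_dict: dict, as_dict: bool = False):
    # Flatten categories into one collection; "name" is copied from
    # "display_name" so callers can look components up by display name.
    components = {} if as_dict else []
    for category in all_types_dict.values():
        for component in category.values():
            component["name"] = component["display_name"]
            if as_dict:
                components[component["name"]] = component
            else:
                components.append(component)
    return components
```

Note that, as in the diff, the loop mutates the component dicts in place by adding the `name` key.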
extract_input_variables_from_prompt(prompt: str) -> list[str]: # Extract the variable name from either the single or double brace match if match.group(1): # Match found in double braces - variable_name = ( - "{{" + match.group(1) + "}}" - ) # Re-add single braces for JSON strings + variable_name = "{{" + match.group(1) + "}}" # Re-add single braces for JSON strings else: # Match found in single braces variable_name = match.group(2) if variable_name is not None: @@ -109,9 +105,7 @@ def set_langchain_cache(settings): if cache_type := os.getenv("LANGFLOW_LANGCHAIN_CACHE"): try: - cache_class = import_class( - f"langchain.cache.{cache_type or settings.LANGCHAIN_CACHE}" - ) + cache_class = import_class(f"langchain.cache.{cache_type or settings.LANGCHAIN_CACHE}") logger.debug(f"Setting up LLM caching with {cache_class.__name__}") set_llm_cache(cache_class()) diff --git a/src/backend/langflow/main.py b/src/backend/langflow/main.py index 4b4c19a25..76724521f 100644 --- a/src/backend/langflow/main.py +++ b/src/backend/langflow/main.py @@ -8,7 +8,9 @@ from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware from fastapi.responses import FileResponse from fastapi.staticfiles import StaticFiles + from langflow.api import router +from langflow.initial_setup.setup import create_or_update_starter_projects from langflow.interface.utils import setup_llm_caching from langflow.services.plugins.langfuse_plugin import LangfuseInstance from langflow.services.utils import initialize_services, teardown_services @@ -21,6 +23,7 @@ def get_lifespan(fix_migration=False, socketio_server=None): initialize_services(fix_migration=fix_migration, socketio_server=socketio_server) setup_llm_caching() LangfuseInstance.update() + create_or_update_starter_projects() yield teardown_services() @@ -114,6 +117,7 @@ def setup_app(static_files_dir: Optional[Path] = None, backend_only: bool = Fals if __name__ == "__main__": import uvicorn + from langflow.__main__ import 
get_number_of_workers configure() diff --git a/src/backend/langflow/memory.py b/src/backend/langflow/memory.py index c8f73f25e..b0ec04f1c 100644 --- a/src/backend/langflow/memory.py +++ b/src/backend/langflow/memory.py @@ -40,8 +40,8 @@ def get_messages( for row in messages_df.itertuples(): record = Record( - text=row.message, data={ + "text": row.message, "sender": row.sender, "sender_name": row.sender_name, "session_id": row.session_id, @@ -81,3 +81,14 @@ def add_messages(records: Union[list[Record], Record]): except Exception as e: logger.exception(e) raise e + + +def delete_messages(session_id: str): + """ + Delete messages from the monitor service based on the provided session ID. + + Args: + session_id (str): The session ID associated with the messages to delete. + """ + monitor_service = get_monitor_service() + monitor_service.delete_messages(session_id) diff --git a/src/backend/langflow/processing/process.py b/src/backend/langflow/processing/process.py index 408db0a53..f034656be 100644 --- a/src/backend/langflow/processing/process.py +++ b/src/backend/langflow/processing/process.py @@ -198,6 +198,7 @@ async def run_graph( stream: bool, session_id: Optional[str] = None, inputs: Optional[dict[str, Union[List[str], str]]] = None, + outputs: Optional[List[str]] = None, artifacts: Optional[Dict[str, Any]] = None, session_service: Optional[SessionService] = None, ): @@ -212,7 +213,12 @@ async def run_graph( if inputs is None: inputs = {} - outputs = await graph.run(inputs, stream=stream) + outputs = await graph.run( + inputs, + outputs, + stream=stream, + session_id=session_id, + ) if session_id and session_service: session_service.update_session(session_id, (graph, artifacts)) return outputs, session_id @@ -255,22 +261,25 @@ def process_tweaks(graph_data: Dict[str, Any], tweaks: Dict[str, Dict[str, Any]] :param graph_data: The dictionary containing the graph data. It must contain a 'data' key with 'nodes' as its child or directly contain 'nodes' key. 
Each node should have an 'id' and 'data'. - :param tweaks: A dictionary where the key is the node id and the value is a dictionary of the tweaks. - The inner dictionary contains the name of a certain parameter as the key and the value to be tweaked. - + :param tweaks: The dictionary containing the tweaks. The keys can be the node id or the name of the tweak. + The values can be a dictionary containing the tweaks for the node or the value of the tweak. :return: The modified graph_data dictionary. :raises ValueError: If the input is not in the expected format. """ nodes = validate_input(graph_data, tweaks) + nodes_map = {node.get("id"): node for node in nodes} + + all_nodes_tweaks = {} + for key, value in tweaks.items(): + if isinstance(value, dict): + if node := nodes_map.get(key): + apply_tweaks(node, value) + else: + all_nodes_tweaks[key] = value for node in nodes: - if isinstance(node, dict) and isinstance(node.get("id"), str): - node_id = node["id"] - if node_tweaks := tweaks.get(node_id): - apply_tweaks(node, node_tweaks) - else: - logger.warning("Each node should be a dictionary with an 'id' key of type str") + apply_tweaks(node, all_nodes_tweaks) return graph_data diff --git a/src/backend/langflow/schema/__init__.py b/src/backend/langflow/schema/__init__.py index 8cd0af848..14230578c 100644 --- a/src/backend/langflow/schema/__init__.py +++ b/src/backend/langflow/schema/__init__.py @@ -1,3 +1,4 @@ +from .dotdict import dotdict from .schema import Record -__all__ = ["Record"] +__all__ = ["Record", "dotdict"] diff --git a/src/backend/langflow/schema/dotdict.py b/src/backend/langflow/schema/dotdict.py new file mode 100644 index 000000000..f85c928bb --- /dev/null +++ b/src/backend/langflow/schema/dotdict.py @@ -0,0 +1,71 @@ +class dotdict(dict): + """ + dotdict allows accessing dictionary elements using dot notation (e.g., dict.key instead of dict['key']). 
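The rewritten `process_tweaks` above accepts two key styles: a node id mapped to a dict of per-node tweaks, or a bare field name whose value applies to every node. A simplified sketch of that dispatch (`apply_tweaks` here is a stand-in that merges values into a node's params, not the real helper):

```python
def apply_tweaks(node: dict, tweaks: dict) -> None:
    # Stand-in for Langflow's apply_tweaks helper.
    node.setdefault("params", {}).update(tweaks)


def process_tweaks(nodes: list, tweaks: dict) -> list:
    nodes_map = {node.get("id"): node for node in nodes}
    all_nodes_tweaks = {}
    for key, value in tweaks.items():
        if isinstance(value, dict):
            # Key is a node id: tweak only that node.
            if node := nodes_map.get(key):
                apply_tweaks(node, value)
        else:
            # Key is a field name: collect it to apply to every node.
            all_nodes_tweaks[key] = value
    for node in nodes:
        apply_tweaks(node, all_nodes_tweaks)
    return nodes
```

Per-node tweaks are applied first, then the global (field-name) tweaks are merged into every node.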
+ It automatically converts nested dictionaries into dotdict instances, enabling dot notation on them as well. + + Note: + - Only keys that are valid attribute names (e.g., strings that could be variable names) are accessible via dot notation. + - Keys which are not valid Python attribute names or collide with the dict method names (like 'items', 'keys') + should be accessed using the traditional dict['key'] notation. + """ + + def __getattr__(self, attr): + """ + Override dot access to behave like dictionary lookup. Automatically convert nested dicts to dotdicts. + + Args: + attr (str): Attribute to access. + + Returns: + The value associated with 'attr' in the dictionary, converted to dotdict if it is a dict. + + Raises: + AttributeError: If the attribute is not found in the dictionary. + """ + try: + value = self[attr] + if isinstance(value, dict) and not isinstance(value, dotdict): + value = dotdict(value) + self[attr] = value # Update self to nest dotdict for future accesses + return value + except KeyError: + raise AttributeError(f"'dotdict' object has no attribute '{attr}'") + + def __setattr__(self, key, value): + """ + Override attribute setting to work as dictionary item assignment. + + Args: + key (str): The key under which to store the value. + value: The value to store in the dictionary. + """ + if isinstance(value, dict) and not isinstance(value, dotdict): + value = dotdict(value) + self[key] = value + + def __delattr__(self, key): + """ + Override attribute deletion to work as dictionary item deletion. + + Args: + key (str): The key of the item to delete from the dictionary. + + Raises: + AttributeError: If the key is not found in the dictionary. + """ + try: + del self[key] + except KeyError: + raise AttributeError(f"'dotdict' object has no attribute '{key}'") + + def __missing__(self, key): + """ + Handle missing keys by returning an empty dotdict. This allows chaining access without raising KeyError. + + Args: + key: The missing key. 
+ + Returns: + An empty dotdict instance for the given missing key. + """ + return dotdict() diff --git a/src/backend/langflow/schema/schema.py b/src/backend/langflow/schema/schema.py index 639c9da96..077c20cf5 100644 --- a/src/backend/langflow/schema/schema.py +++ b/src/backend/langflow/schema/schema.py @@ -1,4 +1,4 @@ -from typing import Any, Optional +import copy from langchain_core.documents import Document from pydantic import BaseModel @@ -9,12 +9,11 @@ class Record(BaseModel): Represents a record with text and optional data. Attributes: - text (str): The text of the record. data (dict, optional): Additional data associated with the record. """ - text: Optional[str] = "" data: dict = {} + _default_value: str = "" @classmethod def from_document(cls, document: Document) -> "Record": @@ -27,7 +26,22 @@ class Record(BaseModel): Returns: Record: The converted Record. """ - return cls(text=document.page_content, data=document.metadata) + data = document.metadata + data["text"] = document.page_content + return cls(data=data) + + def __add__(self, other: "Record") -> "Record": + """ + Concatenates the text of two records and combines their data. + + Args: + other (Record): The other record to concatenate with. + + Returns: + Record: The concatenated record. + """ + combined_data = {**self.data, **other.data} + return Record(data=combined_data) def to_lc_document(self) -> Document: """ @@ -38,20 +52,63 @@ class Record(BaseModel): """ return Document(page_content=self.text, metadata=self.data) - def __call__(self, *args: Any, **kwds: Any) -> Any: + def __getattr__(self, key): """ - Returns the text of the record. + Allows attribute-like access to the data dictionary. + """ + try: + if key == "data" or key.startswith("_"): + return super().__getattr__(key) - Returns: - Any: The text of the record. 
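The new `dotdict` above enables attribute-style access with automatic nesting, and its `__missing__` makes chained access on absent keys return an empty `dotdict` instead of raising. A condensed, self-contained mirror of that behavior for illustration:

```python
class dotdict(dict):
    """Condensed mirror of langflow.schema.dotdict for illustration."""

    def __getattr__(self, attr):
        try:
            value = self[attr]  # absent keys go through __missing__
            if isinstance(value, dict) and not isinstance(value, dotdict):
                value = self[attr] = dotdict(value)  # nest for future accesses
            return value
        except KeyError:
            raise AttributeError(f"'dotdict' object has no attribute '{attr}'")

    def __setattr__(self, key, value):
        if isinstance(value, dict) and not isinstance(value, dotdict):
            value = dotdict(value)
        self[key] = value

    def __missing__(self, key):
        # Chained access on missing keys yields an empty dotdict.
        return dotdict()
```

As in the full class, the `KeyError` branch is mostly dead code once `__missing__` is defined, since `self[attr]` then never raises for absent keys.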
+ return self.data.get(key, self._default_value) + except KeyError: + # Fallback to default behavior to raise AttributeError for undefined attributes + raise AttributeError( + f"'{type(self).__name__}' object has no attribute '{key}'" + ) + + def __setattr__(self, key, value): """ - return self.text + Allows attribute-like setting of values in the data dictionary, + while still allowing direct assignment to class attributes. + """ + if key == "data" or key.startswith("_"): + super().__setattr__(key, value) + else: + self.data[key] = value + + def __delattr__(self, key): + """ + Allows attribute-like deletion from the data dictionary. + """ + if key == "data" or key.startswith("_"): + super().__delattr__(key) + else: + del self.data[key] + + def __deepcopy__(self, memo): + """ + Custom deepcopy implementation to handle copying of the Record object. + """ + cls = self.__class__ + result = cls.__new__(cls) + memo[id(self)] = result + for k, v in self.__dict__.items(): + setattr(result, k, copy.deepcopy(v, memo)) + return result def __str__(self) -> str: """ - Returns the text of the record. - - Returns: - str: The text and data of the record. + Returns a string representation of the Record, including text and data. """ - return self.model_dump_json(indent=2) + # Assuming a method to dump model data as JSON string exists. + # If it doesn't, you might need to implement it or use json.dumps() directly. 
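With `text` removed as a first-class field, `Record` now proxies attribute access to its `data` dict, falling back to `_default_value` for unknown keys, and `__add__` merges two records' data. A simplified, pydantic-free sketch of that proxying idea (the real class remains a pydantic `BaseModel`):

```python
class Record:
    """Pydantic-free sketch: attributes read from and write to `data`."""

    _default_value = ""

    def __init__(self, data=None):
        # Bypass our own __setattr__ so "data" lands on the instance itself.
        object.__setattr__(self, "data", dict(data or {}))

    def __getattr__(self, key):
        # Only called when normal lookup fails; proxy to the data dict.
        if key == "data" or key.startswith("_"):
            raise AttributeError(key)
        return self.data.get(key, self._default_value)

    def __setattr__(self, key, value):
        if key == "data" or key.startswith("_"):
            object.__setattr__(self, key, value)
        else:
            self.data[key] = value

    def __add__(self, other):
        # Merging favors `other` on conflicting keys, as in the diff.
        return Record({**self.data, **other.data})
```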
+ + # build the string considering all keys in the data dictionary + prefix = "Record(" + suffix = ")" + text = ", ".join([f"{k}={v}" for k, v in self.data.items()]) + return prefix + text + suffix + + # check which attributes the Record has by checking the keys in the data dictionary + def __dir__(self): + return super().__dir__() + list(self.data.keys()) diff --git a/src/backend/langflow/services/database/models/flow/model.py b/src/backend/langflow/services/database/models/flow/model.py index 1211c40ed..b91cc409d 100644 --- a/src/backend/langflow/services/database/models/flow/model.py +++ b/src/backend/langflow/services/database/models/flow/model.py @@ -4,9 +4,11 @@ from datetime import datetime from typing import TYPE_CHECKING, Dict, Optional from uuid import UUID, uuid4 +from emoji import purely_emoji from pydantic import field_serializer, field_validator from sqlmodel import JSON, Column, Field, Relationship, SQLModel +from langflow.interface.custom.attributes import validate_icon from langflow.schema.schema import Record if TYPE_CHECKING: @@ -16,13 +18,49 @@ if TYPE_CHECKING: class FlowBase(SQLModel): name: str = Field(index=True) description: Optional[str] = Field(index=True, nullable=True, default=None) + icon: Optional[str] = Field(default=None, nullable=True) + icon_bg_color: Optional[str] = Field(default=None, nullable=True) data: Optional[Dict] = Field(default=None, nullable=True) is_component: Optional[bool] = Field(default=False, nullable=True) - updated_at: Optional[datetime] = Field( - default_factory=datetime.utcnow, nullable=True - ) + updated_at: Optional[datetime] = Field(default_factory=datetime.utcnow, nullable=True) folder: Optional[str] = Field(default=None, nullable=True) + @field_validator("icon_bg_color") + def validate_icon_bg_color(cls, v): + if v is not None and not isinstance(v, str): + raise ValueError("Icon background color must be a string") + # validate that it is a hex color + if v and not v.startswith("#"): + raise
ValueError("Icon background color must start with #") + + # validate that it is a valid hex color + if v and len(v) != 7: + raise ValueError("Icon background color must be 7 characters long") + return v + + @field_validator("icon") + def validate_icon_atr(cls, v): + # const emojiRegex = /\p{Emoji}/u; + # const isEmoji = emojiRegex.test(data?.node?.icon!); + # emoji pattern in Python + if v is None: + return v + + emoji = validate_icon(v) + + if purely_emoji(emoji): + # this is indeed an emoji + return emoji + # otherwise it should be a valid lucide icon + if v is not None and not isinstance(v, str): + raise ValueError("Icon must be a string") + # it should be lowercase and contain only letters and hyphens + if v and not v.islower(): + raise ValueError("Icon must be lowercase") + if v and not v.replace("-", "").isalpha(): + raise ValueError("Icon must contain only letters and hyphens") + return v + @field_validator("data") def validate_json(v): if not v: @@ -58,7 +96,7 @@ class FlowBase(SQLModel): class Flow(FlowBase, table=True): id: UUID = Field(default_factory=uuid4, primary_key=True, unique=True) data: Optional[Dict] = Field(default=None, sa_column=Column(JSON)) - user_id: UUID = Field(index=True, foreign_key="user.id", nullable=True) + user_id: Optional[UUID] = Field(index=True, foreign_key="user.id", nullable=True) user: "User" = Relationship(back_populates="flows") def to_record(self): @@ -80,7 +118,7 @@ class FlowBase(SQLModel): class FlowCreate(FlowBase): class FlowRead(FlowBase): id: UUID - user_id: UUID = Field() + user_id: Optional[UUID] = Field() class FlowUpdate(SQLModel): diff --git a/src/backend/langflow/services/database/service.py b/src/backend/langflow/services/database/service.py index 3cecdd340..ebaccf600 100644 --- a/src/backend/langflow/services/database/service.py +++ b/src/backend/langflow/services/database/service.py @@ -36,10 +36,7 @@ class DatabaseService(Service): def _create_engine(self) -> "Engine": """Create the engine for the database.""" settings_service =
get_settings_service() - if ( - settings_service.settings.DATABASE_URL - and settings_service.settings.DATABASE_URL.startswith("sqlite") - ): + if settings_service.settings.DATABASE_URL and settings_service.settings.DATABASE_URL.startswith("sqlite"): connect_args = {"check_same_thread": False} else: connect_args = {} @@ -51,9 +48,7 @@ class DatabaseService(Service): def __exit__(self, exc_type, exc_value, traceback): if exc_type is not None: # If an exception has been raised - logger.error( - f"Session rollback because of exception: {exc_type.__name__} {exc_value}" - ) + logger.error(f"Session rollback because of exception: {exc_type.__name__} {exc_value}") self._session.rollback() else: self._session.commit() @@ -70,9 +65,7 @@ class DatabaseService(Service): settings_service = get_settings_service() if settings_service.auth_settings.AUTO_LOGIN: with Session(self.engine) as session: - flows = session.exec( - select(models.Flow).where(models.Flow.user_id is None) - ).all() + flows = session.exec(select(models.Flow).where(models.Flow.user_id is None)).all() if flows: logger.debug("Migrating flows to default superuser") username = settings_service.auth_settings.SUPERUSER @@ -102,9 +95,7 @@ class DatabaseService(Service): expected_columns = list(model.model_fields.keys()) try: - available_columns = [ - col["name"] for col in inspector.get_columns(table) - ] + available_columns = [col["name"] for col in inspector.get_columns(table)] except sa.exc.NoSuchTableError: logger.error(f"Missing table: {table}") return False @@ -161,20 +152,18 @@ class DatabaseService(Service): try: command.check(alembic_cfg) except Exception as exc: - if isinstance( - exc, (util.exc.CommandError, util.exc.AutogenerateDiffsDetected) - ): + if isinstance(exc, (util.exc.CommandError, util.exc.AutogenerateDiffsDetected)): command.upgrade(alembic_cfg, "head") time.sleep(3) try: command.check(alembic_cfg) - except util.exc.AutogenerateDiffsDetected as e: + except util.exc.AutogenerateDiffsDetected as 
exc: logger.error(f"AutogenerateDiffsDetected: {exc}") if not fix: raise RuntimeError( "Something went wrong running migrations. Please, run `langflow migration --fix`" - ) from e + ) from exc if fix: self.try_downgrade_upgrade_until_success(alembic_cfg) @@ -199,10 +188,7 @@ class DatabaseService(Service): # We will check that all models are in the database # and that the database is up to date with all columns sql_models = [models.Flow, models.User, models.ApiKey] - return [ - TableResults(sql_model.__tablename__, self.check_table(sql_model)) - for sql_model in sql_models - ] + return [TableResults(sql_model.__tablename__, self.check_table(sql_model)) for sql_model in sql_models] def check_table(self, model): results = [] @@ -211,9 +197,7 @@ class DatabaseService(Service): expected_columns = list(model.__fields__.keys()) available_columns = [] try: - available_columns = [ - col["name"] for col in inspector.get_columns(table_name) - ] + available_columns = [col["name"] for col in inspector.get_columns(table_name)] results.append(Result(name=table_name, type="table", success=True)) except sa.exc.NoSuchTableError: logger.error(f"Missing table: {table_name}") @@ -244,9 +228,7 @@ class DatabaseService(Service): try: table.create(self.engine, checkfirst=True) except OperationalError as oe: - logger.warning( - f"Table {table} already exists, skipping. Exception: {oe}" - ) + logger.warning(f"Table {table} already exists, skipping. Exception: {oe}") except Exception as exc: logger.error(f"Error creating table {table}: {exc}") raise RuntimeError(f"Error creating table {table}") from exc @@ -258,9 +240,7 @@ class DatabaseService(Service): if table not in table_names: logger.error("Something went wrong creating the database and tables.") logger.error("Please check your database settings.") - raise RuntimeError( - "Something went wrong creating the database and tables." 
- ) + raise RuntimeError("Something went wrong creating the database and tables.") logger.debug("Database and tables created successfully") diff --git a/src/backend/langflow/services/deps.py b/src/backend/langflow/services/deps.py index 19f3dcbf0..7d2338b04 100644 --- a/src/backend/langflow/services/deps.py +++ b/src/backend/langflow/services/deps.py @@ -1,3 +1,4 @@ +from contextlib import contextmanager from typing import TYPE_CHECKING, Generator from langflow.services import ServiceType, service_manager @@ -54,6 +55,19 @@ def get_session() -> Generator["Session", None, None]: yield from db_service.get_session() +@contextmanager +def session_scope(): + session = next(get_session()) + try: + yield session + session.commit() + except: + session.rollback() + raise + finally: + session.close() + + def get_cache_service() -> "BaseCacheService": return service_manager.get(ServiceType.CACHE_SERVICE) # type: ignore diff --git a/src/backend/langflow/services/monitor/schema.py b/src/backend/langflow/services/monitor/schema.py index 2c1e34cd5..dc3e2454c 100644 --- a/src/backend/langflow/services/monitor/schema.py +++ b/src/backend/langflow/services/monitor/schema.py @@ -60,11 +60,11 @@ class MessageModel(BaseModel): "The record does not have the required fields 'sender' and 'sender_name' in the data." 
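The new `session_scope` in `deps.py` gives callers commit-on-success, rollback-on-error semantics. The pattern, sketched against a stand-in session object (and using `except Exception` where the diff uses a bare `except`):

```python
from contextlib import contextmanager


class FakeSession:
    """Stand-in for a SQLModel Session that records lifecycle calls."""

    def __init__(self):
        self.calls = []

    def commit(self):
        self.calls.append("commit")

    def rollback(self):
        self.calls.append("rollback")

    def close(self):
        self.calls.append("close")


@contextmanager
def session_scope(session):
    try:
        yield session
        session.commit()   # only reached if the body did not raise
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()    # always runs, success or failure
```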
) return cls( - sender=record.data["sender"], - sender_name=record.data["sender_name"], + sender=record.sender, + sender_name=record.sender_name, message=record.text, - session_id=record.data.get("session_id", ""), - artifacts=record.data.get("artifacts", {}), + session_id=record.session_id, + artifacts=record.artifacts or {}, ) diff --git a/src/backend/langflow/services/monitor/service.py b/src/backend/langflow/services/monitor/service.py index 6702a89eb..fdae04a4e 100644 --- a/src/backend/langflow/services/monitor/service.py +++ b/src/backend/langflow/services/monitor/service.py @@ -44,7 +44,9 @@ class MonitorService(Service): def ensure_tables_exist(self): for table_name, model in self.table_map.items(): - drop_and_create_table_if_schema_mismatch(str(self.db_path), table_name, model) + drop_and_create_table_if_schema_mismatch( + str(self.db_path), table_name, model + ) def add_row( self, @@ -105,6 +107,12 @@ class MonitorService(Service): with duckdb.connect(str(self.db_path)) as conn: conn.execute(query) + def delete_messages(self, session_id: str): + query = f"DELETE FROM messages WHERE session_id = '{session_id}'" + + with duckdb.connect(str(self.db_path)) as conn: + conn.execute(query) + def add_message(self, message: MessageModel): self.add_row("messages", message) diff --git a/src/backend/langflow/services/settings/base.py b/src/backend/langflow/services/settings/base.py index 93f8b3085..f2e78b8cb 100644 --- a/src/backend/langflow/services/settings/base.py +++ b/src/backend/langflow/services/settings/base.py @@ -65,6 +65,8 @@ class Settings(BaseSettings): STORAGE_TYPE: str = "local" + CELERY_ENABLED: bool = False + @validator("CONFIG_DIR", pre=True, allow_reuse=True) def set_langflow_dir(cls, value): if not value: diff --git a/src/backend/langflow/services/socket/utils.py b/src/backend/langflow/services/socket/utils.py index 4176d081c..c1f012e18 100644 --- a/src/backend/langflow/services/socket/utils.py +++ b/src/backend/langflow/services/socket/utils.py 
@@ -7,7 +7,7 @@ from sqlmodel import select from langflow.api.utils import format_elapsed_time from langflow.api.v1.schemas import ResultDataResponse, VertexBuildResponse from langflow.graph.graph.base import Graph -from langflow.graph.vertex.base import StatelessVertex +from langflow.graph.vertex.base import Vertex from langflow.services.database.models.flow.model import Flow from langflow.services.deps import get_session from langflow.services.monitor.utils import log_vertex_build @@ -63,7 +63,7 @@ async def build_vertex( return start_time = time.perf_counter() try: - if isinstance(vertex, StatelessVertex) or not vertex._built: + if isinstance(vertex, Vertex) or not vertex._built: await vertex.build(user_id=None, session_id=sid) params = vertex._built_object_repr() valid = True diff --git a/src/backend/langflow/services/task/service.py b/src/backend/langflow/services/task/service.py index 3f7a81f2c..4d9a4412f 100644 --- a/src/backend/langflow/services/task/service.py +++ b/src/backend/langflow/services/task/service.py @@ -1,11 +1,14 @@ -from typing import Any, Callable, Coroutine, Union +from typing import TYPE_CHECKING, Any, Callable, Coroutine, Union + +from loguru import logger from langflow.services.base import Service from langflow.services.task.backends.anyio import AnyIOBackend from langflow.services.task.backends.base import TaskBackend from langflow.services.task.utils import get_celery_worker_status -from langflow.utils.logger import configure -from loguru import logger + +if TYPE_CHECKING: + from langflow.services.settings.service import SettingsService def check_celery_availability(): @@ -20,28 +23,31 @@ def check_celery_availability(): return status -try: - configure() - status = check_celery_availability() - - USE_CELERY = status.get("availability") is not None -except ImportError: - USE_CELERY = False - - class TaskService(Service): name = "task_service" - def __init__(self): - self.backend = self.get_backend() + def __init__(self, 
settings_service: "SettingsService"): + self.settings_service = settings_service + try: + if self.settings_service.settings.CELERY_ENABLED: + USE_CELERY = True + status = check_celery_availability() + + USE_CELERY = status.get("availability") is not None + else: + USE_CELERY = False + except ImportError: + USE_CELERY = False + self.use_celery = USE_CELERY + self.backend = self.get_backend() @property def backend_name(self) -> str: return self.backend.name def get_backend(self) -> TaskBackend: - if USE_CELERY: + if self.use_celery: from langflow.services.task.backends.celery import CeleryBackend logger.debug("Using Celery backend") diff --git a/src/backend/langflow/template/field/base.py b/src/backend/langflow/template/field/base.py index 455d5779b..76e5a13d0 100644 --- a/src/backend/langflow/template/field/base.py +++ b/src/backend/langflow/template/field/base.py @@ -65,10 +65,17 @@ class TemplateField(BaseModel): info: Optional[str] = "" """Additional information about the field to be shown in the tooltip. Defaults to an empty string.""" - refresh: Optional[bool] = None - """Specifies if the field should be refreshed. Defaults to False.""" + real_time_refresh: Optional[bool] = None + """Specifies if the field should have real time refresh. `refresh_button` must be False. Defaults to None.""" - range_spec: Optional[RangeSpec] = Field(default=None, serialization_alias="rangeSpec") + refresh_button: Optional[bool] = None + """Specifies if the field should have a refresh button. Defaults to False.""" + refresh_button_text: Optional[str] = None + """Specifies the text for the refresh button. Defaults to None.""" + + range_spec: Optional[RangeSpec] = Field( + default=None, serialization_alias="rangeSpec" + ) """Range specification for the field. 
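`TaskService` now decides at construction time whether to use Celery: the `CELERY_ENABLED` setting must be on, and a worker must actually report availability. A sketch of that gate with the availability check injected (`should_use_celery` and `check_availability` are illustrative names, not the Langflow API):

```python
def should_use_celery(celery_enabled: bool, check_availability) -> bool:
    # Mirrors the new TaskService.__init__ logic: the settings flag is
    # checked first, then a live worker status probe; an ImportError
    # (celery not installed) disables the Celery backend entirely.
    try:
        if not celery_enabled:
            return False
        status = check_availability()
        return status.get("availability") is not None
    except ImportError:
        return False
```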
Defaults to None.""" title_case: bool = False @@ -84,10 +91,10 @@ class TemplateField(BaseModel): if self.field_type in ["str", "Text"]: if "input_types" not in result: result["input_types"] = ["Text"] - else: - result["input_types"].append("Text") if self.field_type == "Text": result["type"] = "str" + else: + result["type"] = self.field_type return result @field_serializer("file_path") @@ -117,6 +124,10 @@ class TemplateField(BaseModel): if not isinstance(value, list): raise ValueError("file_types must be a list") return [ - (f".{file_type}" if isinstance(file_type, str) and not file_type.startswith(".") else file_type) + ( + f".{file_type}" + if isinstance(file_type, str) and not file_type.startswith(".") + else file_type + ) for file_type in value ] diff --git a/src/backend/langflow/template/field/prompt.py b/src/backend/langflow/template/field/prompt.py new file mode 100644 index 000000000..89a4bf745 --- /dev/null +++ b/src/backend/langflow/template/field/prompt.py @@ -0,0 +1,14 @@ +from typing import Optional + +from langflow.template.field.base import TemplateField + + +class DefaultPromptField(TemplateField): + name: str + display_name: Optional[str] = None + field_type: str = "str" + + advanced: bool = False + multiline: bool = True + input_types: list[str] = ["Document", "BaseOutputParser", "Text", "Record"] + value: str = "" # Set the value to empty string diff --git a/src/backend/langflow/template/frontend_node/base.py b/src/backend/langflow/template/frontend_node/base.py index 2a19ec9c9..7bd68ddf9 100644 --- a/src/backend/langflow/template/frontend_node/base.py +++ b/src/backend/langflow/template/frontend_node/base.py @@ -71,8 +71,11 @@ class FrontendNode(BaseModel): """Full path of the frontend node.""" field_formatters: FieldFormatters = Field(default_factory=FieldFormatters) """Field formatters for the frontend node.""" - pinned: bool = False - """Whether the frontend node is pinned.""" + frozen: bool = False + """Whether the frontend node is 
frozen.""" + + field_order: list[str] = [] + """Order of the fields in the frontend node.""" beta: bool = False error: Optional[str] = None diff --git a/src/backend/langflow/template/frontend_node/memories.py b/src/backend/langflow/template/frontend_node/memories.py index 588bcc2c0..93ea561dd 100644 --- a/src/backend/langflow/template/frontend_node/memories.py +++ b/src/backend/langflow/template/frontend_node/memories.py @@ -15,7 +15,7 @@ from langflow.template.template.base import Template class MemoryFrontendNode(FrontendNode): - pinned: bool = True + frozen: bool = True def add_extra_fields(self) -> None: # chat history should have another way to add common field? diff --git a/src/backend/langflow/template/template/base.py b/src/backend/langflow/template/template/base.py index 9bc375b0f..d7632e239 100644 --- a/src/backend/langflow/template/template/base.py +++ b/src/backend/langflow/template/template/base.py @@ -1,14 +1,14 @@ from typing import Callable, Union +from pydantic import BaseModel, model_serializer + from langflow.template.field.base import TemplateField from langflow.utils.constants import DIRECT_TYPES -from pydantic import BaseModel, model_serializer class Template(BaseModel): type_name: str fields: list[TemplateField] - field_order: list[str] = [] def process_fields( self, @@ -30,7 +30,6 @@ class Template(BaseModel): for field in self.fields: result[field.name] = field.model_dump(by_alias=True, exclude_none=True) result["_type"] = result.pop("type_name") - result.pop("field_order", None) return result # For backwards compatibility diff --git a/src/backend/langflow/utils/util.py b/src/backend/langflow/utils/util.py index 7e1206222..ad6660cc5 100644 --- a/src/backend/langflow/utils/util.py +++ b/src/backend/langflow/utils/util.py @@ -5,8 +5,8 @@ from functools import wraps from typing import Any, Dict, List, Optional, Union from docstring_parser import parse -from langchain_core.documents import Document +from langflow.schema.schema import Record 
from langflow.template.frontend_node.constants import FORCE_SHOW_FIELDS from langflow.utils import constants @@ -15,8 +15,12 @@ def remove_ansi_escape_codes(text): return re.sub(r"\x1b\[[0-9;]*[a-zA-Z]", "", text) -def build_template_from_function(name: str, type_to_loader_dict: Dict, add_function: bool = False): - classes = [item.__annotations__["return"].__name__ for item in type_to_loader_dict.values()] +def build_template_from_function( + name: str, type_to_loader_dict: Dict, add_function: bool = False +): + classes = [ + item.__annotations__["return"].__name__ for item in type_to_loader_dict.values() + ] # Raise error if name is not in chains if name not in classes: @@ -37,8 +41,10 @@ def build_template_from_function(name: str, type_to_loader_dict: Dict, add_funct for name_, value_ in value.__repr_args__(): if name_ == "default_factory": try: - variables[class_field_items]["default"] = get_default_factory( - module=_class.__base__.__module__, function=value_ + variables[class_field_items]["default"] = ( + get_default_factory( + module=_class.__base__.__module__, function=value_ + ) ) except Exception: variables[class_field_items]["default"] = None @@ -46,7 +52,9 @@ def build_template_from_function(name: str, type_to_loader_dict: Dict, add_funct variables[class_field_items][name_] = value_ variables[class_field_items]["placeholder"] = ( - docs.params[class_field_items] if class_field_items in docs.params else "" + docs.params[class_field_items] + if class_field_items in docs.params + else "" ) # Adding function to base classes to allow # the output to be a function @@ -61,7 +69,9 @@ def build_template_from_function(name: str, type_to_loader_dict: Dict, add_funct } -def build_template_from_class(name: str, type_to_cls_dict: Dict, add_function: bool = False): +def build_template_from_class( + name: str, type_to_cls_dict: Dict, add_function: bool = False +): classes = [item.__name__ for item in type_to_cls_dict.values()] # Raise error if name is not in chains @@ 
-85,8 +95,11 @@ def build_template_from_class(name: str, type_to_cls_dict: Dict, add_function: b for name_, value_ in value.__repr_args__(): if name_ == "default_factory": try: - variables[class_field_items]["default"] = get_default_factory( - module=_class.__base__.__module__, function=value_ + variables[class_field_items]["default"] = ( + get_default_factory( + module=_class.__base__.__module__, + function=value_, + ) ) except Exception: variables[class_field_items]["default"] = None @@ -94,7 +107,9 @@ def build_template_from_class(name: str, type_to_cls_dict: Dict, add_function: b variables[class_field_items][name_] = value_ variables[class_field_items]["placeholder"] = ( - docs.params[class_field_items] if class_field_items in docs.params else "" + docs.params[class_field_items] + if class_field_items in docs.params + else "" ) base_classes = get_base_classes(_class) # Adding function to base classes to allow @@ -126,7 +141,9 @@ def build_template_from_method( # Check if the method exists in this class if not hasattr(_class, method_name): - raise ValueError(f"Method {method_name} not found in class {class_name}") + raise ValueError( + f"Method {method_name} not found in class {class_name}" + ) # Get the method method = getattr(_class, method_name) @@ -145,8 +162,14 @@ def build_template_from_method( "_type": _type, **{ name: { - "default": param.default if param.default != param.empty else None, - "type": param.annotation if param.annotation != param.empty else None, + "default": ( + param.default if param.default != param.empty else None + ), + "type": ( + param.annotation + if param.annotation != param.empty + else None + ), "required": param.default == param.empty, } for name, param in params.items() @@ -233,7 +256,9 @@ def sync_to_async(func): return async_wrapper -def format_dict(dictionary: Dict[str, Any], class_name: Optional[str] = None) -> Dict[str, Any]: +def format_dict( + dictionary: Dict[str, Any], class_name: Optional[str] = None +) -> Dict[str, 
Any]: """ Formats a dictionary by removing certain keys and modifying the values of other keys. @@ -243,7 +268,7 @@ def format_dict(dictionary: Dict[str, Any], class_name: Optional[str] = None) -> """ for key, value in dictionary.items(): - if key == "_type": + if key in ["_type"]: continue _type: Union[str, type] = get_type(value) @@ -319,7 +344,9 @@ def check_list_type(_type: str, value: Dict[str, Any]) -> str: The modified type string. """ if any(list_type in _type for list_type in ["List", "Sequence", "Set"]): - _type = _type.replace("List[", "").replace("Sequence[", "").replace("Set[", "")[:-1] + _type = ( + _type.replace("List[", "").replace("Sequence[", "").replace("Set[", "")[:-1] + ) value["list"] = True else: value["list"] = False @@ -422,7 +449,9 @@ def set_headers_value(value: Dict[str, Any]) -> None: value["value"] = """{"Authorization": "Bearer "}""" -def add_options_to_field(value: Dict[str, Any], class_name: Optional[str], key: str) -> None: +def add_options_to_field( + value: Dict[str, Any], class_name: Optional[str], key: str +) -> None: """ Adds options to the field based on the class name and key. """ @@ -439,10 +468,20 @@ def add_options_to_field(value: Dict[str, Any], class_name: Optional[str], key: value["value"] = options_map[class_name][0] -def build_loader_repr_from_documents(documents: List[Document]) -> str: - if documents: - avg_length = sum(len(doc.page_content) for doc in documents) / len(documents) - return f"""{len(documents)} documents - \nAvg. Document Length (characters): {int(avg_length)} - Documents: {documents[:3]}...""" - return "0 documents" +def build_loader_repr_from_records(records: List[Record]) -> str: + """ + Builds a string representation of the loader based on the given records. + + Args: + records (List[Record]): A list of records. + + Returns: + str: A string representation of the loader. 
+ + """ + if records: + avg_length = sum(len(doc.text) for doc in records) / len(records) + return f"""{len(records)} records + \nAvg. Record Length (characters): {int(avg_length)} + Records: {records[:3]}...""" + return "0 records" diff --git a/src/backend/langflow/utils/validate.py b/src/backend/langflow/utils/validate.py index 21821538c..e7bd4ae05 100644 --- a/src/backend/langflow/utils/validate.py +++ b/src/backend/langflow/utils/validate.py @@ -6,8 +6,6 @@ from typing import Dict, List, Optional, Union from langflow.field_typing.constants import CUSTOM_COMPONENT_SUPPORTED_TYPES -PROMPT_INPUT_TYPES = ["Document", "BaseOutputParser", "Text", "Record"] - def add_type_ignores(): if not hasattr(ast, "TypeIgnore"): diff --git a/src/frontend/package-lock.json b/src/frontend/package-lock.json index fed40b4f0..791e88f0e 100644 --- a/src/frontend/package-lock.json +++ b/src/frontend/package-lock.json @@ -100,6 +100,7 @@ "prettier-plugin-tailwindcss": "^0.3.0", "pretty-quick": "^3.1.3", "tailwindcss": "^3.3.3", + "tailwindcss-dotted-background": "^1.1.0", "typescript": "^5.2.2", "vite": "^4.5.2" } @@ -10854,6 +10855,15 @@ "tailwindcss": ">=3.0.0 || insiders" } }, + "node_modules/tailwindcss-dotted-background": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/tailwindcss-dotted-background/-/tailwindcss-dotted-background-1.1.0.tgz", + "integrity": "sha512-uFzCW5Bpyn8XgppTkyzqdHecH7XCDaS/eXvegDrOCYE6PxTm7dRrD9cuUcZe6mxQFVfkLu9rDmHJdqbjz9FdLA==", + "dev": true, + "peerDependencies": { + "tailwindcss": "^3.0.0" + } + }, "node_modules/tailwindcss/node_modules/glob-parent": { "version": "6.0.2", "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz", diff --git a/src/frontend/package.json b/src/frontend/package.json index 41c9fcd43..4df74af54 100644 --- a/src/frontend/package.json +++ b/src/frontend/package.json @@ -122,6 +122,7 @@ "prettier-plugin-tailwindcss": "^0.3.0", "pretty-quick": "^3.1.3", "tailwindcss": "^3.3.3", + 
"tailwindcss-dotted-background": "^1.1.0", "typescript": "^5.2.2", "vite": "^4.5.2" } diff --git a/src/frontend/src/App.css b/src/frontend/src/App.css index 095c63bd7..6aa681415 100644 --- a/src/frontend/src/App.css +++ b/src/frontend/src/App.css @@ -90,3 +90,12 @@ body { .jv-indent::-webkit-scrollbar-thumb:hover { background-color: #bbb !important; } + +.custom-hover { + transition: background-color 0.5s ease; +} + +.custom-hover:hover { + background-color: rgba(99, 102, 241, 0.1); /* Medium indigo color with 20% opacity */ +} + diff --git a/src/frontend/src/App.tsx b/src/frontend/src/App.tsx index ad76ff49a..b57dad461 100644 --- a/src/frontend/src/App.tsx +++ b/src/frontend/src/App.tsx @@ -69,7 +69,7 @@ export default function App() { .catch(() => { setFetchError(true); }); - }, 20000); + }, 20000); // 20 seconds // Clean up the timer on component unmount return () => { diff --git a/src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx b/src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx index 3ce16df5a..b0fd6804d 100644 --- a/src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx +++ b/src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx @@ -23,14 +23,16 @@ import { OUTPUT_HANDLER_HOVER, TOOLTIP_EMPTY, } from "../../../../constants/constants"; -import { postCustomComponentUpdate } from "../../../../controllers/API"; import useAlertStore from "../../../../stores/alertStore"; import useFlowStore from "../../../../stores/flowStore"; import useFlowsManagerStore from "../../../../stores/flowsManagerStore"; import { useTypesStore } from "../../../../stores/typesStore"; -import { APIClassType } from "../../../../types/api"; +import { APIClassType, ResponseErrorTypeAPI } from "../../../../types/api"; import { ParameterComponentType } from "../../../../types/components"; -import { NodeDataType } from "../../../../types/flow"; +import { + handleUpdateValues, + 
throttledHandleUpdateValues, +} from "../../../../utils/parameterUtils"; import { convertObjToArray, convertValuesToNumbers, @@ -86,72 +88,92 @@ export default function ParameterComponent({ const takeSnapshot = useFlowsManagerStore((state) => state.takeSnapshot); - const handleUpdateValues = async ( - name: string, - data: NodeDataType, - delayAnimation: boolean = true - ) => { + const handleRefreshButtonPress = async (name, data) => { setIsLoading(true); - const code = data.node?.template["code"]?.value; - if (!code) { - console.error("Code not found in the template"); - return; - } - try { - const res = await postCustomComponentUpdate(code, name); - if (res.status === 200 && data.node?.template) { + let newTemplate = await handleUpdateValues(name, data); + if (newTemplate) { setNode(data.id, (oldNode) => { let newNode = cloneDeep(oldNode); - newNode.data = { ...newNode.data, }; - - newNode.data.node.template[name] = res.data.template[name]; - + newNode.data.node.template = newTemplate; return newNode; }); } - } catch (err) { - setErrorData(err as { title: string; list?: Array }); + } catch (error) { + let responseError = error as ResponseErrorTypeAPI; + setErrorData({ + title: "Error while updating the Component", + list: [responseError.response.data.detail.error ?? "Unknown error"], + }); } - + setIsLoading(false); renderTooltips(); - if (delayAnimation) { - try { - // Wait for at least 500 milliseconds - await new Promise((resolve) => setTimeout(resolve, 500)); - // Continue with the request - // If the request takes longer than 500 milliseconds, it will not wait an additional 500 milliseconds - } catch (error) { - console.error("Error occurred while waiting for refresh:", error); - } finally { - setIsLoading(false); - } - } else setIsLoading(false); }; useEffect(() => { - function fetchData() { + async function fetchData() { if ( - data.node?.template[name]?.refresh && - Object.keys(data.node?.template[name]?.options ?? 
{}).length === 0 + (data.node?.template[name]?.real_time_refresh || + data.node?.template[name]?.refresh_button) && + // options can be undefined but not an empty array + (data.node?.template[name]?.options?.length ?? 0) === 0 ) { - handleUpdateValues(name, data, false); + setIsLoading(true); + try { + let newTemplate = await handleUpdateValues(name, data); + if (newTemplate) { + setNode(data.id, (oldNode) => { + let newNode = cloneDeep(oldNode); + newNode.data = { + ...newNode.data, + }; + newNode.data.node.template = newTemplate; + return newNode; + }); + } + } catch (error) { + let responseError = error as ResponseErrorTypeAPI; + setErrorData({ + title: "Error while updating the Component", + list: [responseError.response.data.detail.error ?? "Unknown error"], + }); + } + setIsLoading(false); + renderTooltips(); + } } fetchData(); }, []); - const handleOnNewValue = ( + const handleOnNewValue = async ( newValue: string | string[] | boolean | Object[] - ): void => { + ): Promise<void> => { if (data.node!.template[name].value !== newValue) { takeSnapshot(); } + const shouldUpdate = + data.node?.template[name].real_time_refresh && + !data.node?.template[name].refresh_button && + data.node!.template[name].value !== newValue; data.node!.template[name].value = newValue; // necessary to enable ctrl+z inside the input - + let newTemplate; + if (shouldUpdate) { + setIsLoading(true); + try { + newTemplate = await throttledHandleUpdateValues(name, data); + } catch (error) { + let responseError = error as ResponseErrorTypeAPI; + setErrorData({ + title: "Error while updating the Component", + list: [responseError.response.data.detail.error ??
"Unknown error"], + }); + } + setIsLoading(false); + // this de + } setNode(data.id, (oldNode) => { let newNode = cloneDeep(oldNode); @@ -159,7 +181,9 @@ export default function ParameterComponent({ ...newNode.data, }; - newNode.data.node.template[name].value = newValue; + if (data.node?.template[name].real_time_refresh && newTemplate) { + newNode.data.node.template = newTemplate; + } else newNode.data.node.template[name].value = newValue; return newNode; }); @@ -253,15 +277,15 @@ export default function ParameterComponent({ {item.display_name === "" ? "" : " - "} {item.display_name.split(", ").length > 2 ? item.display_name.split(", ").map((el, index) => ( - - - {index === + + + {index === item.display_name.split(", ").length - 1 - ? el - : (el += `, `)} - - - )) + ? el + : (el += `, `)} + + + )) : item.display_name} ) : ( @@ -270,14 +294,14 @@ export default function ParameterComponent({ {item.type === "" ? "" : " - "} {item.type.split(", ").length > 2 ? item.type.split(", ").map((el, index) => ( - - - {index === item.type.split(", ").length - 1 - ? el - : (el += `, `)} - - - )) + + + {index === item.type.split(", ").length - 1 + ? el + : (el += `, `)} + + + )) : item.type} )} @@ -291,11 +315,14 @@ export default function ParameterComponent({ refHtml.current = {TOOLTIP_EMPTY}; } } + // If optionalHandle is an empty list, then it is not an optional handle + if (optionalHandle && optionalHandle.length === 0) { + optionalHandle = null; + } useEffect(() => { renderTooltips(); }, [tooltipTitle, flow]); - return !showNode ? ( left && LANGFLOW_SUPPORTED_TYPES.has(type ?? "") && !optionalHandle ? ( <> @@ -346,7 +373,7 @@ export default function ParameterComponent({ className={ "relative mt-1 flex w-full flex-wrap items-center justify-between bg-muted px-5 py-2" + ((name === "code" && type === "code") || - (name.includes("code") && proxy) + (name.includes("code") && proxy) ? " hidden " : "") } @@ -355,21 +382,26 @@ export default function ParameterComponent({
- {!left && data.node?.pinned && + {!left && data.node?.frozen && (
- -
} + +
+ )} {proxy ? ( {proxy.id}}> - {title} + + {title} + ) : ( - {title} - )} + + {title} + + )} {required ? " *" : ""} @@ -431,22 +463,29 @@ export default function ParameterComponent({ )} {left === true && - type === "str" && - !data.node?.template[name].options ? ( + type === "str" && + !data.node?.template[name].options ? (
{data.node?.template[name].list ? ( -
+
- {data.node?.template[name].refresh && ( + {/* {data.node?.template[name].refresh_button && (
+
+ )} */} +
+ ) : data.node?.template[name].multiline ? ( +
+
+ +
+ {data.node?.template[name].refresh_button && ( +
+
)}
- ) : data.node?.template[name].multiline ? ( - ) : (
-
+
- {data.node?.template[name].refresh && ( + {data.node?.template[name].refresh_button && (
@@ -518,11 +587,12 @@ export default function ParameterComponent({ ) : left === true && type === "str" && (data.node?.template[name].options || - data.node?.template[name]?.refresh) ? ( + data.node?.template[name]?.real_time_refresh) ? ( // TODO: Improve CSS
- {data.node?.template[name].refresh && ( + {data.node?.template[name].refresh_button && (
@@ -603,10 +676,10 @@ export default function ParameterComponent({ editNode={false} value={ !data.node!.template[name].value || - data.node!.template[name].value?.toString() === "{}" + data.node!.template[name].value?.toString() === "{}" ? { - yourkey: "value", - } + yourkey: "value", + } : data.node!.template[name].value } onChange={handleOnNewValue} @@ -620,7 +693,7 @@ export default function ParameterComponent({ editNode={false} value={ data.node!.template[name].value?.length === 0 || - !data.node!.template[name].value + !data.node!.template[name].value ? [{ "": "" }] : convertObjToArray(data.node!.template[name].value) } @@ -628,8 +701,14 @@ export default function ParameterComponent({ onChange={(newValue) => { const valueToNumbers = convertValuesToNumbers(newValue); setErrorDuplicateKey(hasDuplicateKeys(valueToNumbers)); - handleOnNewValue(valueToNumbers); + // if data.node?.template[name].list is true, then the value is an array of objects + // else we need to get the first object of the array + + if (data.node?.template[name].list) { + handleOnNewValue(valueToNumbers); + } else handleOnNewValue(valueToNumbers[0]); }} + isList={data.node?.template[name].list ?? false} />
) : ( diff --git a/src/frontend/src/CustomNodes/GenericNode/index.tsx b/src/frontend/src/CustomNodes/GenericNode/index.tsx index bfc277c08..e4a48eddb 100644 --- a/src/frontend/src/CustomNodes/GenericNode/index.tsx +++ b/src/frontend/src/CustomNodes/GenericNode/index.tsx @@ -12,10 +12,10 @@ import { RUN_TIMESTAMP_PREFIX, STATUS_BUILD, STATUS_BUILDING, - priorityFields, } from "../../constants/constants"; import { BuildStatus } from "../../constants/enums"; import NodeToolbarComponent from "../../pages/FlowPage/components/nodeToolbarComponent"; +import useAlertStore from "../../stores/alertStore"; import { useDarkStore } from "../../stores/darkStore"; import useFlowStore from "../../stores/flowStore"; import useFlowsManagerStore from "../../stores/flowsManagerStore"; @@ -24,7 +24,7 @@ import { validationStatusType } from "../../types/components"; import { NodeDataType } from "../../types/flow"; import { handleKeyDown, scapedJSONStringfy } from "../../utils/reactflowUtils"; import { nodeColors, nodeIconsLucide } from "../../utils/styleUtils"; -import { classNames, cn, getFieldTitle } from "../../utils/utils"; +import { classNames, cn, getFieldTitle, sortFields } from "../../utils/utils"; import ParameterComponent from "./components/parameterComponent"; export default function GenericNode({ @@ -43,6 +43,7 @@ export default function GenericNode({ const flowPool = useFlowStore((state) => state.flowPool); const buildFlow = useFlowStore((state) => state.buildFlow); const setNode = useFlowStore((state) => state.setNode); + const setErrorData = useAlertStore((state) => state.setErrorData); const name = nodeIconsLucide[data.type] ? 
data.type : types[data.type]; const [inputName, setInputName] = useState(false); const [nodeName, setNodeName] = useState(data.node!.display_name); @@ -64,6 +65,18 @@ export default function GenericNode({ const takeSnapshot = useFlowsManagerStore((state) => state.takeSnapshot); + if (!data.node!.template) { + setErrorData({ + title: `Error in component ${data.node!.display_name}`, + list: [ + `The component ${data.node!.display_name} has no template.`, + `Please contact the developer of the component to fix this issue.`, + ], + }); + takeSnapshot(); + deleteNode(data.id); + } + function countHandles(): void { let count = Object.keys(data.node!.template) .filter((templateField) => templateField.charAt(0) !== "_") @@ -511,10 +524,10 @@ export default function GenericNode({ ) : !validationStatus ? ( {STATUS_BUILD} ) : ( -
+
{lastRunTime && ( -
+
{RUN_TIMESTAMP_PREFIX}
{lastRunTime} @@ -522,19 +535,19 @@ export default function GenericNode({
)}
-
+
Duration:
-
+
{validationStatus?.data.duration}

- + Output -
+
{validationString.split("\n").map((line, index) => ( -
{line}
+
{line}
))}
@@ -639,15 +652,7 @@ export default function GenericNode({ <> {Object.keys(data.node!.template) .filter((templateField) => templateField.charAt(0) !== "_") - .sort((a, b) => { - if (priorityFields.has(a.toLowerCase())) { - return -1; - } else if (priorityFields.has(b.toLowerCase())) { - return 1; - } else { - return a.localeCompare(b); - } - }) + .sort((a, b) => sortFields(a, b, data.node?.field_order ?? [])) .map((templateField: string, idx) => (
{data.node!.template[templateField].show && diff --git a/src/frontend/src/assets/undraw_design_components_9vy6.svg b/src/frontend/src/assets/undraw_design_components_9vy6.svg new file mode 100644 index 000000000..c21e134e9 --- /dev/null +++ b/src/frontend/src/assets/undraw_design_components_9vy6.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/src/frontend/src/assets/undraw_transfer_files_re_a2a9.svg b/src/frontend/src/assets/undraw_transfer_files_re_a2a9.svg new file mode 100644 index 000000000..c5930b9ea --- /dev/null +++ b/src/frontend/src/assets/undraw_transfer_files_re_a2a9.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/src/frontend/src/components/AccordionComponent/index.tsx b/src/frontend/src/components/AccordionComponent/index.tsx index 9a3c4a212..3d9aac314 100644 --- a/src/frontend/src/components/AccordionComponent/index.tsx +++ b/src/frontend/src/components/AccordionComponent/index.tsx @@ -50,9 +50,7 @@ export default function AccordionComponent({ {trigger} -
- {children} -
+
{children}
diff --git a/src/frontend/src/components/IOview/index.tsx b/src/frontend/src/components/IOview/index.tsx index 73f0a068e..ced31a177 100644 --- a/src/frontend/src/components/IOview/index.tsx +++ b/src/frontend/src/components/IOview/index.tsx @@ -20,7 +20,12 @@ import { Badge } from "../ui/badge"; import { Button } from "../ui/button"; import { Tabs, TabsContent, TabsList, TabsTrigger } from "../ui/tabs"; -export default function IOView({ children, open, setOpen }): JSX.Element { +export default function IOView({ children, open, setOpen, disable }: { + children: JSX.Element; + open: boolean; + setOpen: (open: boolean) => void; + disable?: boolean; +}): JSX.Element { const inputs = useFlowStore((state) => state.inputs).filter( (input) => input.type !== "ChatInput" ); @@ -98,6 +103,7 @@ export default function IOView({ children, open, setOpen }): JSX.Element { size={haveChat ? (selectedTab === 0 ? "large-thin" : "large") : "small"} open={open} setOpen={setOpen} + disable={disable} > {children} {/* TODO ADAPT TO ALL TYPES OF INPUTS AND OUTPUTS */} diff --git a/src/frontend/src/components/NewFLowCard2/index.tsx b/src/frontend/src/components/NewFLowCard2/index.tsx new file mode 100644 index 000000000..fe63413e2 --- /dev/null +++ b/src/frontend/src/components/NewFLowCard2/index.tsx @@ -0,0 +1,26 @@ +import { Card, CardContent, CardDescription, CardTitle } from "../ui/card"; +import useFlowsManagerStore from "../../stores/flowsManagerStore"; +import { useNavigate } from "react-router-dom"; +import IconComponent from "../genericIconComponent"; +import { cn } from "../../utils/utils"; + +export default function NewFlowCardComponent() { + const addFlow = useFlowsManagerStore((state) => state.addFlow); + const navigate = useNavigate(); + return ( + { + addFlow(true).then((id) => { + navigate("/flow/" + id); + }); + }} className="pt-4 w-80 h-64 cursor-pointer bg-background"> + +
+ +
+
+ + Blank Flow + +
+ ) +} \ No newline at end of file diff --git a/src/frontend/src/components/NewFlowCardComponent/index.tsx b/src/frontend/src/components/NewFlowCardComponent/index.tsx new file mode 100644 index 000000000..1fe5b4b46 --- /dev/null +++ b/src/frontend/src/components/NewFlowCardComponent/index.tsx @@ -0,0 +1,33 @@ +import { useNavigate } from "react-router-dom"; +import useFlowsManagerStore from "../../stores/flowsManagerStore"; +import { cn } from "../../utils/utils"; +import IconComponent from "../genericIconComponent"; +import { Card, CardContent } from "../ui/card"; + +export default function NewFlowCardComponent({}: {}) { + const addFlow = useFlowsManagerStore((state) => state.addFlow); + const navigate = useNavigate(); + + return ( + + + + + + ); +} diff --git a/src/frontend/src/components/chatComponent/index.tsx b/src/frontend/src/components/chatComponent/index.tsx index da612340a..3d3a01562 100644 --- a/src/frontend/src/components/chatComponent/index.tsx +++ b/src/frontend/src/components/chatComponent/index.tsx @@ -1,14 +1,26 @@ -import { useEffect, useRef, useState } from "react"; +import { useEffect, useMemo, useRef, useState } from "react"; import useFlowStore from "../../stores/flowStore"; import { ChatType } from "../../types/chat"; import IOView from "../IOview"; import ChatTrigger from "../ViewTriggers/chat"; +import { Transition } from "@headlessui/react"; +import ForwardedIconComponent from "../genericIconComponent"; +import { Separator } from "../ui/separator"; +import ShareModal from "../../modals/shareModal"; +import { useStoreStore } from "../../stores/storeStore"; +import useFlowsManagerStore from "../../stores/flowsManagerStore"; +import { classNames } from "../../utils/utils"; +import ApiModal from "../../modals/ApiModal"; -export default function Chat({ flow }: ChatType): JSX.Element { +export default function FlowToolbar({ flow }: ChatType): JSX.Element { const [open, setOpen] = useState(false); const flowState = useFlowStore((state) => 
state.flowState); const nodes = useFlowStore((state) => state.nodes); const hasIO = useFlowStore((state) => state.hasIO); + const hasStore = useStoreStore((state) => state.hasStore); + const validApiKey = useStoreStore((state) => state.validApiKey); + const currentFlow = useFlowsManagerStore((state) => state.currentFlow); + const hasApiKey = useStoreStore((state) => state.hasApiKey); useEffect(() => { const handleKeyDown = (event: KeyboardEvent) => { @@ -29,16 +41,102 @@ export default function Chat({ flow }: ChatType): JSX.Element { const prevNodesRef = useRef(); + const ModalMemo = useMemo( + () => ( + + + + ), + [hasApiKey, validApiKey, currentFlow, hasStore] + ); + return ( <> -
- {/* */} - {hasIO && ( - - + +
+
+
+ {hasIO ? ( + +
+ + Run +
+ ) : ( +
+ + Run +
)} +
+
+ +
+
+ {currentFlow && currentFlow.data && ( + +
+ + API +
+
+ )} +
+
+ +
+
+
{ModalMemo}
+
+
+
); } diff --git a/src/frontend/src/components/codeTabsComponent/index.tsx b/src/frontend/src/components/codeTabsComponent/index.tsx index 8ddb09552..a83a36fc2 100644 --- a/src/frontend/src/components/codeTabsComponent/index.tsx +++ b/src/frontend/src/components/codeTabsComponent/index.tsx @@ -126,7 +126,7 @@ export default function CodeTabsComponent({ {tab.code} diff --git a/src/frontend/src/components/dropdownComponent/index.tsx b/src/frontend/src/components/dropdownComponent/index.tsx index 6dfd31dba..48a610d29 100644 --- a/src/frontend/src/components/dropdownComponent/index.tsx +++ b/src/frontend/src/components/dropdownComponent/index.tsx @@ -5,6 +5,7 @@ import { classNames } from "../../utils/utils"; import IconComponent from "../genericIconComponent"; export default function Dropdown({ + disabled, isLoading, value, options, @@ -28,6 +29,7 @@ export default function Dropdown({ <> { setInternalValue(value); onSelect(value); diff --git a/src/frontend/src/components/exampleComponent/index.tsx b/src/frontend/src/components/exampleComponent/index.tsx new file mode 100644 index 000000000..347797c47 --- /dev/null +++ b/src/frontend/src/components/exampleComponent/index.tsx @@ -0,0 +1,100 @@ +import { useNavigate } from "react-router-dom"; +import useFlowsManagerStore from "../../stores/flowsManagerStore"; +import { FlowType } from "../../types/flow"; +import { updateIds } from "../../utils/reactflowUtils"; +import { cn } from "../../utils/utils"; +import ShadTooltip from "../ShadTooltipComponent"; +import IconComponent from "../genericIconComponent"; +import { Button } from "../ui/button"; +import { + Card, + CardDescription, + CardFooter, + CardHeader, + CardTitle, +} from "../ui/card"; + +export default function CollectionCardComponent({ + flow, +}: { + flow: FlowType; + authorized?: boolean; +}) { + const addFlow = useFlowsManagerStore((state) => state.addFlow); + const navigate = useNavigate(); + const emojiRegex = /\p{Emoji}/u; + const isEmoji = (str: string) => 
emojiRegex.test(str); + + return ( + +
+ +
+ + {flow.icon && isEmoji(flow.icon) && ( +
+
{flow.icon}
+
+ )} + {(!flow.icon || !isEmoji(flow.icon)) && ( +
+ +
+ )} + +
{flow.name}
+
+
+
+ + +
{flow.description}
+
+
+
+
+ + +
+
+ +
+
+
+
+ ); +} diff --git a/src/frontend/src/components/genericIconComponent/index.tsx b/src/frontend/src/components/genericIconComponent/index.tsx index 62299bdf6..9c3ea4402 100644 --- a/src/frontend/src/components/genericIconComponent/index.tsx +++ b/src/frontend/src/components/genericIconComponent/index.tsx @@ -1,36 +1,53 @@ -import { forwardRef } from "react"; +import dynamicIconImports from "lucide-react/dynamicIconImports"; +import { Suspense, forwardRef, lazy, memo } from "react"; import { IconComponentProps } from "../../types/components"; import { nodeIconsLucide } from "../../utils/styleUtils"; -const ForwardedIconComponent = forwardRef( - ( - { - name, - className, - iconColor, - stroke, - strokeWidth, - id = "", - }: IconComponentProps, - ref - ) => { - const TargetIcon = nodeIconsLucide[name] ?? nodeIconsLucide["unknown"]; +const ForwardedIconComponent = memo( + forwardRef( + ( + { + name, + className, + iconColor, + stroke, + strokeWidth, + id = "", + }: IconComponentProps, + ref + ) => { + let TargetIcon = nodeIconsLucide[name]; + if (!TargetIcon) { + // check if name exists in dynamicIconImports + if (!dynamicIconImports[name]) { + TargetIcon = nodeIconsLucide["unknown"]; + } else TargetIcon = lazy(dynamicIconImports[name]); + } - const style = { - strokeWidth: strokeWidth ?? 1.5, - ...(stroke && { stroke: stroke }), - ...(iconColor && { color: iconColor, stroke: stroke }), - }; + const style = { + strokeWidth: strokeWidth ?? 1.5, + ...(stroke && { stroke: stroke }), + ...(iconColor && { color: iconColor, stroke: stroke }), + }; - return ( - - ); - } + if (!TargetIcon) { + return null; // Render nothing until the icon is loaded + } + const fallback = ( +
+ ); + return ( + + + + ); + } + ) ); export default ForwardedIconComponent; diff --git a/src/frontend/src/components/headerComponent/components/menuBar/index.tsx b/src/frontend/src/components/headerComponent/components/menuBar/index.tsx index 146824f80..6db42aacc 100644 --- a/src/frontend/src/components/headerComponent/components/menuBar/index.tsx +++ b/src/frontend/src/components/headerComponent/components/menuBar/index.tsx @@ -18,6 +18,9 @@ import { cn } from "../../../../utils/utils"; import ShadTooltip from "../../../ShadTooltipComponent"; import IconComponent from "../../../genericIconComponent"; import { Button } from "../../../ui/button"; +import { UPLOAD_ERROR_ALERT } from "../../../../constants/alerts_constants"; +import ExportModal from "../../../../modals/exportModal"; +import { useStoreStore } from "../../../../stores/storeStore"; export const MenuBar = ({ removeFunction, @@ -32,7 +35,9 @@ export const MenuBar = ({ const saveLoading = useFlowsManagerStore((state) => state.saveLoading); const [openSettings, setOpenSettings] = useState(false); const n = useFlowStore((state) => state.nodes); - + const uploadFlow = useFlowsManagerStore((state) => state.uploadFlow); + const hasApiKey = useStoreStore((state) => state.hasApiKey); + const validApiKey = useStoreStore((state) => state.validApiKey); const navigate = useNavigate(); const isBuilding = useFlowStore((state) => state.isBuilding); @@ -100,7 +105,28 @@ export const MenuBar = ({ /> Settings - + { + uploadFlow({ newProject: false, isComponent: false }).catch( + (error) => { + setErrorData({ + title: UPLOAD_ERROR_ALERT, + list: [error], + }); + } + ); + }} + > + + Import + + +
+ + Export +
+
{ undo(); diff --git a/src/frontend/src/components/keypairListComponent/index.tsx b/src/frontend/src/components/keypairListComponent/index.tsx index 034628643..9f452c988 100644 --- a/src/frontend/src/components/keypairListComponent/index.tsx +++ b/src/frontend/src/components/keypairListComponent/index.tsx @@ -12,10 +12,11 @@ export default function KeypairListComponent({ disabled, editNode = false, duplicateKey, + isList = true, }: KeyPairListComponentType): JSX.Element { useEffect(() => { if (disabled && value.length > 0 && value[0] !== "") { - onChange([""]); + onChange([{ "": "" }]); } }, [disabled]); @@ -79,6 +80,7 @@ export default function KeypairListComponent({ : "keypair" + (index + 100).toString() } type="text" + disabled={disabled} value={obj[key]} className={editNode ? "input-edit-node" : ""} placeholder="Type a value..." @@ -87,7 +89,7 @@ export default function KeypairListComponent({ } /> - {index === ref.current.length - 1 ? ( + {isList && index === ref.current.length - 1 ? ( - ) : ( + ) : isList ? ( + ) : ( + "" )}
); diff --git a/src/frontend/src/components/newChatView/chatMessage/index.tsx b/src/frontend/src/components/newChatView/chatMessage/index.tsx index 075d5e418..760a9d140 100644 --- a/src/frontend/src/components/newChatView/chatMessage/index.tsx +++ b/src/frontend/src/components/newChatView/chatMessage/index.tsx @@ -134,7 +134,7 @@ export default function ChatMessage({ )}
{!chat.isSend ? ( -
+
{hidden && chat.thought && chat.thought !== "" && (
)} {chat.thought && chat.thought !== "" && !hidden &&

} -
-
-
+
+
+
{useMemo( () => chatMessage === "" && lockChat ? ( @@ -169,7 +169,7 @@ export default function ChatMessage({ void; @@ -24,10 +27,7 @@ function RefreshButton({ handleUpdateValues(name, data); }; - const classNames = cn( - className, - disabled ? "cursor-not-allowed" : "cursor-pointer" - ); + const classNames = cn(className, disabled ? "cursor-not-allowed" : ""); // icon class name should take into account the disabled state and the loading state const disabledIconTextClass = disabled ? "text-muted-foreground" : ""; @@ -38,13 +38,20 @@ function RefreshButton({ ); return ( - + ); } diff --git a/src/frontend/src/components/undrawCards/index.tsx b/src/frontend/src/components/undrawCards/index.tsx new file mode 100644 index 000000000..3368d64b8 --- /dev/null +++ b/src/frontend/src/components/undrawCards/index.tsx @@ -0,0 +1,49 @@ + +import { useNavigate } from "react-router-dom"; +/// +//@ts-ignore +import { ReactComponent as TransferFiles } from "../../assets/undraw_transfer_files_re_a2a9.svg" +//@ts-ignore +import { ReactComponent as BasicPrompt } from "../../assets/undraw_design_components_9vy6.svg" + +import useFlowsManagerStore from "../../stores/flowsManagerStore"; +import { FlowType } from "../../types/flow" +import { updateIds } from "../../utils/reactflowUtils"; +import ShadTooltip from "../ShadTooltipComponent" +import { Card, CardContent, CardDescription, CardFooter, CardTitle } from "../ui/card" + +export default function UndrawCardComponent({ + flow +}: { flow: FlowType }) { + const addFlow = useFlowsManagerStore((state) => state.addFlow); + const navigate = useNavigate(); + + function selectImage() { + switch (flow.name) { + case "Data Ingestion": + return + case "Basic Prompting": + return + default: + return + } + } + + return ( + { + updateIds(flow.data!); + addFlow(true, flow).then((id) => { + navigate("/flow/" + id); + }); + }} className="pt-4 w-80 h-64 cursor-pointer bg-background"> + +
+ {selectImage()} +
+
+ + {flow.name} + +
+    )
+}
\ No newline at end of file
diff --git a/src/frontend/src/constants/alerts_constants.tsx b/src/frontend/src/constants/alerts_constants.tsx
index 205933b4b..0710e3d00 100644
--- a/src/frontend/src/constants/alerts_constants.tsx
+++ b/src/frontend/src/constants/alerts_constants.tsx
@@ -58,4 +58,3 @@ export const FLOW_BUILD_SUCCESS_ALERT = `Flow built successfully`;
 export const SAVE_SUCCESS_ALERT = "Changes saved successfully!";
 
 // Generic Node
-
diff --git a/src/frontend/src/constants/constants.ts b/src/frontend/src/constants/constants.ts
index 3f4584102..4927e9d81 100644
--- a/src/frontend/src/constants/constants.ts
+++ b/src/frontend/src/constants/constants.ts
@@ -1,6 +1,7 @@
 // src/constants/constants.ts
 
 import { languageMap } from "../types/components";
+import { FlowType } from "../types/flow";
 
 /**
  * invalid characters for flow name
@@ -727,3 +728,87 @@ export const STATUS_BUILD = "Build to validate status.";
 export const STATUS_BUILDING = "Building...";
 export const SAVED_HOVER = "Last saved at ";
 export const RUN_TIMESTAMP_PREFIX = "Last Run: ";
+export const STARTER_FOLDER_NAME = "Starter Projects";
+export const PRIORITY_SIDEBAR_ORDER = [
+  "saved_components",
+  "inputs",
+  "outputs",
+  "prompts",
+  "data",
+  "prompt",
+  "models",
+  "helpers",
+  "experimental",
+];
+/*
+Data ingestion
+Basic Prompting
+Chat with memory
+Working with data (file/website)
+API requests
+Vector Store
+Assistant
+*/
+
+export const EXAMPLES_MOCK:FlowType[] = [
+  {
+    name: "Working with data",
+    id: "Working with data Description",
+    data: {
+      nodes: [],
+      edges: [],
+      viewport: { zoom: 1, x: 1, y: 1 }
+    },
+    description: "This flow represents the first process in our application.",
+    folder: STARTER_FOLDER_NAME,
+    user_id: undefined,
+  },
+  {
+    name: "Basic Prompting",
+    id: "Basic Prompting Description",
+    data: {
+      nodes: [],
+      edges: [],
+      viewport: { zoom: 1, x: 1, y: 1 }
+    },
+    description: "This flow represents the first process in our application.",
+    folder:
STARTER_FOLDER_NAME, + user_id: undefined, + }, + { + name: "Chat with memory", + id: "Chat with memory Description", + data: { + nodes: [], + edges: [], + viewport: { zoom: 1, x: 1, y: 1 } + }, + description: "This flow represents the first process in our application.", + folder: STARTER_FOLDER_NAME, + user_id: undefined, + }, + { + name: "API requests", + id: "API requests Description", + data: { + nodes: [], + edges: [], + viewport: { zoom: 1, x: 1, y: 1 } + }, + description: "This flow represents the first process in our application.", + folder: STARTER_FOLDER_NAME, + user_id: undefined, + }, + { + name: "Assistant", + id: "Assistant Description", + data: { + nodes: [], + edges: [], + viewport: { zoom: 1, x: 1, y: 1 } + }, + description: "This flow represents the first process in our application.", + folder: STARTER_FOLDER_NAME, + user_id: undefined, + }, +]; diff --git a/src/frontend/src/controllers/API/index.ts b/src/frontend/src/controllers/API/index.ts index ff6305cfd..a8ee9c6e7 100644 --- a/src/frontend/src/controllers/API/index.ts +++ b/src/frontend/src/controllers/API/index.ts @@ -369,11 +369,13 @@ export async function postCustomComponent( export async function postCustomComponentUpdate( code: string, - field: string + field: string, + field_value: any ): Promise> { return await api.post(`${BASE_URL_API}custom_component/update`, { code, field, + field_value, }); } diff --git a/src/frontend/src/modals/ApiModal/index.tsx b/src/frontend/src/modals/ApiModal/index.tsx index 81d6defee..9c451beeb 100644 --- a/src/frontend/src/modals/ApiModal/index.tsx +++ b/src/frontend/src/modals/ApiModal/index.tsx @@ -98,6 +98,9 @@ const ApiModal = forwardRef( let arrNodesWithValues: string[] = []; flow["data"]!["nodes"].forEach((node) => { + if (!node["data"]["node"]["template"]) { + return; + } Object.keys(node["data"]["node"]["template"]) .filter( (templateField) => diff --git a/src/frontend/src/modals/DeleteConfirmationModal/index.tsx 
b/src/frontend/src/modals/DeleteConfirmationModal/index.tsx index a1f757985..b2087f40d 100644 --- a/src/frontend/src/modals/DeleteConfirmationModal/index.tsx +++ b/src/frontend/src/modals/DeleteConfirmationModal/index.tsx @@ -35,8 +35,7 @@ export default function DeleteConfirmationModal({ - Confirm deletion of {description ?? "component"}? -

+ Confirm deletion of {description ?? "component"}?

Note: This action is irreversible.
diff --git a/src/frontend/src/modals/EditNodeModal/index.tsx b/src/frontend/src/modals/EditNodeModal/index.tsx index e789e1ab8..619a6f7e2 100644 --- a/src/frontend/src/modals/EditNodeModal/index.tsx +++ b/src/frontend/src/modals/EditNodeModal/index.tsx @@ -165,8 +165,19 @@ const EditNodeModal = forwardRef( ) ) ?? false; return ( -
- Attention: API keys in specified fields are automatically removed upon sharing. + Attention: API keys in specified fields are automatically + removed upon sharing. diff --git a/src/frontend/src/pages/FlowPage/components/PageComponent/index.tsx b/src/frontend/src/pages/FlowPage/components/PageComponent/index.tsx index fbc1a0d35..91299ec6f 100644 --- a/src/frontend/src/pages/FlowPage/components/PageComponent/index.tsx +++ b/src/frontend/src/pages/FlowPage/components/PageComponent/index.tsx @@ -12,7 +12,6 @@ import ReactFlow, { updateEdge, } from "reactflow"; import GenericNode from "../../../../CustomNodes/GenericNode"; -import Chat from "../../../../components/chatComponent"; import { INVALID_SELECTION_ERROR_ALERT, UPLOAD_ALERT_LIST, @@ -38,6 +37,7 @@ import { getRandomName, isWrappedWithClass } from "../../../../utils/utils"; import ConnectionLineComponent from "../ConnectionLineComponent"; import SelectionMenu from "../SelectionMenuComponent"; import ExtraSidebar from "../extraSidebarComponent"; +import FlowToolbar from "../../../../components/chatComponent"; const nodeTypes = { genericNode: GenericNode, @@ -109,7 +109,7 @@ export default function Page({ ...old.data, node: { ...old.data.node, - pinned: old.data?.node?.pinned ? false : true, + frozen: old.data?.node?.frozen ? false : true, }, }, })); @@ -481,7 +481,7 @@ export default function Page({ }} /> - {!view && } + {!view && }
) : ( <> diff --git a/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/index.tsx b/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/index.tsx index fe78895c0..bd483f78c 100644 --- a/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/index.tsx +++ b/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/index.tsx @@ -1,11 +1,11 @@ import { cloneDeep } from "lodash"; +import { LinkIcon, SparklesIcon } from "lucide-react"; import { useEffect, useMemo, useState } from "react"; import ShadTooltip from "../../../../components/ShadTooltipComponent"; import IconComponent from "../../../../components/genericIconComponent"; import { Input } from "../../../../components/ui/input"; import { Separator } from "../../../../components/ui/separator"; -import { UPLOAD_ERROR_ALERT } from "../../../../constants/alerts_constants"; -import ApiModal from "../../../../modals/ApiModal"; +import { PRIORITY_SIDEBAR_ORDER } from "../../../../constants/constants"; import ExportModal from "../../../../modals/exportModal"; import ShareModal from "../../../../modals/shareModal"; import useAlertStore from "../../../../stores/alertStore"; @@ -26,6 +26,7 @@ import { } from "../../../../utils/utils"; import DisclosureComponent from "../DisclosureComponent"; import SidebarDraggableComponent from "./sideBarDraggableComponent"; +import { sortKeys } from "./utils"; export default function ExtraSidebar(): JSX.Element { const data = useTypesStore((state) => state.data); @@ -234,66 +235,6 @@ export default function ExtraSidebar(): JSX.Element { return (
-
- {hasStore && validApiKey && ( - -
{ModalMemo}
-
- )} -
- - - -
- {(!hasApiKey || !validApiKey) && ( - -
{ExportMemo}
-
- )} - -
- {currentFlow && currentFlow.data && ( - - - - )} -
-
-
-
handleBlur()} @@ -317,82 +258,110 @@ export default function ExtraSidebar(): JSX.Element { />
- +
{Object.keys(dataFilter) - .sort((a, b) => { - if (a.toLowerCase() === "saved_components") { - return -1; - } else if (b.toLowerCase() === "saved_components") { - return 1; - } else if (a.toLowerCase() === "custom_components") { - return -2; - } else if (b.toLowerCase() === "custom_components") { - return 2; - } else { - return a.localeCompare(b); - } - }) + .sort(sortKeys) .map((SBSectionName: keyof APIObjectType, index) => Object.keys(dataFilter[SBSectionName]).length > 0 ? ( - -
- {Object.keys(dataFilter[SBSectionName]) - .sort((a, b) => - sensitiveSort( - dataFilter[SBSectionName][a].display_name, - dataFilter[SBSectionName][b].display_name + <> + {index === 0 && ( +
+
+ Native Components +
+
+ )} + {index === PRIORITY_SIDEBAR_ORDER.length - 1 && ( + <> + +
+ {/* BUG ON THIS ICON */} + + + + Discover More + +
+
+
+ +
+
+
+
+ Legacy Components +
+ + )} + +
+ {Object.keys(dataFilter[SBSectionName]) + .sort((a, b) => + sensitiveSort( + dataFilter[SBSectionName][a].display_name, + dataFilter[SBSectionName][b].display_name + ) ) - ) - .map((SBItemName: string, index) => ( - - - onDragStart(event, { - //split type to remove type in nodes saved with same name removing it's - type: removeCountFromString(SBItemName), - node: dataFilter[SBSectionName][SBItemName], - }) - } - color={nodeColors[SBSectionName]} - itemName={SBItemName} - //convert error to boolean - error={!!dataFilter[SBSectionName][SBItemName].error} - display_name={ + .map((SBItemName: string, index) => ( + - - ))} -
-
+ side="right" + key={index} + > + + onDragStart(event, { + //split type to remove type in nodes saved with same name removing it's + type: removeCountFromString(SBItemName), + node: dataFilter[SBSectionName][SBItemName], + }) + } + color={nodeColors[SBSectionName]} + itemName={SBItemName} + //convert error to boolean + error={ + !!dataFilter[SBSectionName][SBItemName].error + } + display_name={ + dataFilter[SBSectionName][SBItemName].display_name + } + official={ + dataFilter[SBSectionName][SBItemName].official === + false + ? false + : true + } + /> + + ))} +
+
+ ) : (
                )
diff --git a/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/sideBarDraggableComponent/index.tsx b/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/sideBarDraggableComponent/index.tsx
index f33e97a1d..7d468b4ba 100644
--- a/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/sideBarDraggableComponent/index.tsx
+++ b/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/sideBarDraggableComponent/index.tsx
@@ -43,6 +43,7 @@ export const SidebarDraggableComponent = forwardRef(
     const deleteComponent = useFlowsManagerStore(
       (state) => state.deleteComponent
     );
+    const version = useDarkStore((state) => state.version);
 
     const [cursorPos, setCursorPos] = useState({ x: 0, y: 0 });
     const popoverRef = useRef(null);
diff --git a/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/utils.tsx b/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/utils.tsx
new file mode 100644
index 000000000..ee0509cd0
--- /dev/null
+++ b/src/frontend/src/pages/FlowPage/components/extraSidebarComponent/utils.tsx
@@ -0,0 +1,26 @@
+import { PRIORITY_SIDEBAR_ORDER } from "../../../../constants/constants";
+
+export function sortKeys(a: string, b: string) {
+  // Define the order of specific keys
+
+  const indexA = PRIORITY_SIDEBAR_ORDER.indexOf(a.toLowerCase());
+  const indexB = PRIORITY_SIDEBAR_ORDER.indexOf(b.toLowerCase());
+
+  // Check if both keys are in the predefined order
+  if (indexA !== -1 && indexB !== -1) {
+    return indexA - indexB;
+  }
+
+  // If only 'a' is in the predefined order, it should come first
+  if (indexA !== -1) {
+    return -1;
+  }
+
+  // If only 'b' is in the predefined order, it should come first
+  if (indexB !== -1) {
+    return 1;
+  }
+
+  // If neither 'a' nor 'b' are in the predefined order, sort them alphabetically
+  return a.localeCompare(b);
+}
diff --git a/src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx
b/src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx index 2e8df54cd..2967578ac 100644 --- a/src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx +++ b/src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx @@ -65,7 +65,7 @@ export default function NodeToolbarComponent({ const isMinimal = numberOfHandles <= 1; const isGroup = data.node?.flow ? true : false; - const pinned = data.node?.pinned ?? false; + const frozen = data.node?.frozen ?? false; const paste = useFlowStore((state) => state.paste); const nodes = useFlowStore((state) => state.nodes); const edges = useFlowStore((state) => state.edges); @@ -155,6 +155,23 @@ export default function NodeToolbarComponent({ case "copy": const node = nodes.filter((node) => node.id === data.id); setLastCopiedSelection({ nodes: _.cloneDeep(node), edges: [] }); + break; + case "duplicate": + paste( + { + nodes: [nodes.find((node) => node.id === data.id)!], + edges: [], + }, + { + x: 50, + y: 10, + paneX: nodes.find((node) => node.id === data.id)?.position + .x, + paneY: nodes.find((node) => node.id === data.id)?.position + .y, + } + ); + break; } }; @@ -308,7 +325,7 @@ export default function NodeToolbarComponent({ <>
- {hasCode ? ( + {hasCode && ( - - ) : ( - - )} - + @@ -378,7 +372,7 @@ export default function NodeToolbarComponent({ ...old.data, node: { ...old.data.node, - pinned: old.data?.node?.pinned ? false : true, + frozen: old.data?.node?.frozen ? false : true, }, }, })); @@ -389,7 +383,7 @@ export default function NodeToolbarComponent({ className={cn( "h-4 w-4 transition-all", // TODO UPDATE THIS COLOR TO BE A VARIABLE - pinned ? "animate-wiggle text-ice" : "" + frozen ? "animate-wiggle text-ice" : "" )} /> @@ -433,53 +427,17 @@ export default function NodeToolbarComponent({ E
- )} - - {isSaved ? ( - -
- {" "} - Save{" "} - {navigator.userAgent.toUpperCase().includes("MAC") ? ( - - ) : ( - Ctrl + - )} - S -
-
- ) : ( - hasCode && ( - -
- {" "} - Save{" "} - {navigator.userAgent.toUpperCase().includes("MAC") ? ( - - ) : ( - Ctrl + - )} - S -
-
- ) - )} + )} + +
+ + Duplicate +
{" "} +
{" "} Copy{" "} diff --git a/src/frontend/src/pages/MainPage/components/components/index.tsx b/src/frontend/src/pages/MainPage/components/components/index.tsx index 86ee65d0c..65189fa8a 100644 --- a/src/frontend/src/pages/MainPage/components/components/index.tsx +++ b/src/frontend/src/pages/MainPage/components/components/index.tsx @@ -24,6 +24,7 @@ export default function ComponentsComponent({ const uploadFlow = useFlowsManagerStore((state) => state.uploadFlow); const removeFlow = useFlowsManagerStore((state) => state.removeFlow); const isLoading = useFlowsManagerStore((state) => state.isLoading); + const setExamples = useFlowsManagerStore((state) => state.setExamples); const flows = useFlowsManagerStore((state) => state.flows); const setSuccessData = useAlertStore((state) => state.setSuccessData); const setErrorData = useAlertStore((state) => state.setErrorData); @@ -35,7 +36,7 @@ export default function ComponentsComponent({ useEffect(() => { if (isLoading) return; - const all = flows + let all = flows .filter((f) => (f.is_component ?? 
false) === is_component)
      .sort((a, b) => {
        if (a?.updated_at && b?.updated_at) {
diff --git a/src/frontend/src/pages/MainPage/index.tsx b/src/frontend/src/pages/MainPage/index.tsx
index cc4115e7e..91ed4066d 100644
--- a/src/frontend/src/pages/MainPage/index.tsx
+++ b/src/frontend/src/pages/MainPage/index.tsx
@@ -1,19 +1,24 @@
 import { Group, ToyBrick } from "lucide-react";
-import { useEffect } from "react";
+import { useEffect, useState } from "react";
 import { Outlet, useLocation, useNavigate } from "react-router-dom";
 import DropdownButton from "../../components/DropdownButtonComponent";
+import NewFlowCardComponent from "../../components/NewFLowCard2";
+import ExampleCardComponent from "../../components/exampleComponent";
 import IconComponent from "../../components/genericIconComponent";
 import PageLayout from "../../components/pageLayout";
 import SidebarNav from "../../components/sidebarComponent";
 import { Button } from "../../components/ui/button";
 import { CONSOLE_ERROR_MSG } from "../../constants/alerts_constants";
 import {
+  EXAMPLES_MOCK,
   MY_COLLECTION_DESC,
   USER_PROJECTS_HEADER,
 } from "../../constants/constants";
+import BaseModal from "../../modals/baseModal";
 import useAlertStore from "../../stores/alertStore";
 import useFlowsManagerStore from "../../stores/flowsManagerStore";
 import { downloadFlows } from "../../utils/reactflowUtils";
+import UndrawCardComponent from "../../components/undrawCards";
 export default function HomePage(): JSX.Element {
   const addFlow = useFlowsManagerStore((state) => state.addFlow);
   const uploadFlow = useFlowsManagerStore((state) => state.uploadFlow);
@@ -25,6 +30,8 @@ export default function HomePage(): JSX.Element {
   const setErrorData = useAlertStore((state) => state.setErrorData);
   const location = useLocation();
   const pathname = location.pathname;
+  const [openModal, setOpenModal] = useState(false);
+  const examples = useFlowsManagerStore((state) => state.examples);
 
   const is_component = pathname === "/components";
const dropdownOptions = [ { @@ -98,11 +105,7 @@ export default function HomePage(): JSX.Element { { - addFlow(true).then((id) => { - navigate("/flow/" + id); - }); - }} + onFirstBtnClick={() => setOpenModal(true)} options={dropdownOptions} />
@@ -116,6 +119,28 @@ export default function HomePage(): JSX.Element {
+ + + + Get Started + + {/* + +
+ {EXAMPLES_MOCK.map((example, idx) => { + return ; + })} + +
+
+
); } diff --git a/src/frontend/src/stores/flowStore.ts b/src/frontend/src/stores/flowStore.ts index 5ec2ee3a2..16f6b75f4 100644 --- a/src/frontend/src/stores/flowStore.ts +++ b/src/frontend/src/stores/flowStore.ts @@ -190,8 +190,8 @@ const useFlowStore = create((set, get) => ({ get().setNodes((oldNodes) => oldNodes.map((node) => { if (node.id === id) { - if((node.data as NodeDataType).node?.pinned){ - (newChange.data as NodeDataType).node!.pinned = false; + if ((node.data as NodeDataType).node?.frozen) { + (newChange.data as NodeDataType).node!.frozen = false; } return newChange; } @@ -450,9 +450,10 @@ const useFlowStore = create((set, get) => ({ status: BuildStatus, runId: string ) { - if (vertexBuildData && vertexBuildData.inactive_vertices) { - get().removeFromVerticesBuild(vertexBuildData.inactive_vertices); + if (vertexBuildData && vertexBuildData.inactivated_vertices) { + get().removeFromVerticesBuild(vertexBuildData.inactivated_vertices); } + if (vertexBuildData.next_vertices_ids) { // next_vertices_ids is a list of vertices that are going to be built next // verticesLayers is a list of list of vertices ids, where each list is a layer of vertices @@ -534,10 +535,19 @@ const useFlowStore = create((set, get) => ({ runId: string; } | null ) => { - console.log("updateVerticesBuild", vertices); set({ verticesBuild: vertices }); }, verticesBuild: null, + addToVerticesBuild: (vertices: string[]) => { + const verticesBuild = get().verticesBuild; + if (!verticesBuild) return; + set({ + verticesBuild: { + ...verticesBuild, + verticesIds: [...verticesBuild.verticesIds, ...vertices], + }, + }); + }, removeFromVerticesBuild: (vertices: string[]) => { const verticesBuild = get().verticesBuild; if (!verticesBuild) return; @@ -551,7 +561,6 @@ const useFlowStore = create((set, get) => ({ }); }, updateBuildStatus: (nodeIdList: string[], status: BuildStatus) => { - console.log("updateBuildStatus", nodeIdList, status); const newFlowBuildStatus = { ...get().flowBuildStatus }; 
nodeIdList.forEach((id) => { newFlowBuildStatus[id] = { @@ -561,7 +570,6 @@ const useFlowStore = create((set, get) => ({ const timestamp_string = new Date(Date.now()).toLocaleString(); newFlowBuildStatus[id].timestamp = timestamp_string; } - console.log("updateBuildStatus", newFlowBuildStatus); }); set({ flowBuildStatus: newFlowBuildStatus }); }, diff --git a/src/frontend/src/stores/flowsManagerStore.ts b/src/frontend/src/stores/flowsManagerStore.ts index 49849ba54..2c9fca428 100644 --- a/src/frontend/src/stores/flowsManagerStore.ts +++ b/src/frontend/src/stores/flowsManagerStore.ts @@ -2,6 +2,7 @@ import { AxiosError } from "axios"; import { cloneDeep } from "lodash"; import { Edge, Node, Viewport, XYPosition } from "reactflow"; import { create } from "zustand"; +import { STARTER_FOLDER_NAME } from "../constants/constants"; import { deleteFlowFromDatabase, readFlowsFromDatabase, @@ -37,6 +38,10 @@ const past = {}; const future = {}; const useFlowsManagerStore = create((set, get) => ({ + examples: [], + setExamples: (examples: FlowType[]) => { + set({ examples }); + }, currentFlowId: "", setCurrentFlowId: (currentFlowId: string) => { set((state) => ({ @@ -62,7 +67,16 @@ const useFlowsManagerStore = create((set, get) => ({ .then((dbData) => { if (dbData) { const { data, flows } = processFlows(dbData, false); - get().setFlows(flows); + get().setExamples( + flows.filter( + (f) => f.folder === STARTER_FOLDER_NAME && !f.user_id + ) + ); + get().setFlows( + flows.filter( + (f) => !(f.folder === STARTER_FOLDER_NAME && !f.user_id) + ) + ); useTypesStore.setState((state) => ({ data: { ...state.data, ["saved_components"]: data }, })); @@ -92,7 +106,6 @@ const useFlowsManagerStore = create((set, get) => ({ true ); } - set({ saveLoading: true }); }, 500); // Delay of 500ms because chat message depends on it. 
}, saveFlow: (flow: FlowType, silent?: boolean) => { diff --git a/src/frontend/src/style/applies.css b/src/frontend/src/style/applies.css index 77a760255..4bb0dd1c4 100644 --- a/src/frontend/src/style/applies.css +++ b/src/frontend/src/style/applies.css @@ -69,7 +69,7 @@ @apply flex h-full w-[14.5rem] flex-col overflow-hidden border-r scrollbar-hide; } .side-bar-search-div-placement { - @apply relative mx-auto mb-2 mt-2 flex items-center; + @apply relative mx-auto flex items-center py-3; } .side-bar-components-icon { @apply h-6 w-4 text-ring; @@ -114,7 +114,10 @@ @apply pointer-events-none; } .extra-side-bar-buttons { - @apply relative inline-flex w-full items-center justify-center rounded-md bg-background px-2 py-2 text-foreground shadow-sm ring-1 ring-inset ring-input transition-all duration-500 ease-in-out; + @apply relative inline-flex w-full items-center justify-center rounded-md bg-background px-2 py-2 text-foreground transition-all duration-500 ease-in-out; + } + .header-menubar-item { + @apply relative flex cursor-default select-none items-center rounded-sm px-2 py-1.5 text-sm outline-none transition-colors hover:bg-accent hover:text-accent-foreground data-[disabled]:pointer-events-none data-[disabled]:opacity-50 cursor-pointer; } .extra-side-bar-buttons:hover { @apply hover:bg-muted; @@ -316,7 +319,7 @@ @apply border-none ring ring-[#FF9090]; } .built-invalid-status-dark { - @apply border-none ring ring-[#751C1C] + @apply border-none ring ring-[#751C1C]; } .building-status { @apply border-none ring; @@ -431,7 +434,9 @@ .code-area-external-link:hover { @apply hover:text-accent-foreground; } - + .dropdown-component-disabled { + @apply pointer-events-none cursor-not-allowed; + } .dropdown-component-outline { @apply input-edit-node relative pr-8; } @@ -441,11 +446,17 @@ .dropdown-component-display { @apply block w-full truncate bg-background; } + .dropdown-component-display-disabled { + @apply text-muted-foreground; + } .dropdown-component-arrow { @apply 
pointer-events-none absolute inset-y-0 right-0 flex items-center pr-2; } .dropdown-component-arrow-color { - @apply extra-side-bar-save-disable h-5 w-5; + @apply h-5 w-5 text-accent-foreground; + } + .dropdown-component-arrow-color-disable { + @apply h-5 w-5 text-muted-foreground; } .dropdown-component-options { @apply z-10 mt-1 max-h-60 overflow-auto rounded-md bg-background py-1 text-base shadow-lg ring-1 ring-black ring-opacity-5 focus:outline-none sm:text-sm; @@ -487,7 +498,7 @@ @apply flex items-center gap-0.5 rounded-md px-1.5 py-1 text-sm font-medium; } .header-menu-bar-display { - @apply flex max-w-[120px] cursor-pointer items-center gap-2 lg:max-w-[200px]; + @apply flex max-w-[115px] cursor-pointer items-center gap-2 lg:max-w-[145px]; } .header-menu-flow-name { @apply flex-1 truncate; @@ -902,7 +913,7 @@ @apply flex-max-width px-2 py-6 pl-4 pr-9; } .form-modal-chatbot-icon { - @apply flex flex-col mb-3 ml-3 mr-6 mt-1; + @apply mb-3 ml-3 mr-6 mt-1 flex flex-col; } .form-modal-chat-image { @apply flex flex-col items-center gap-1; diff --git a/src/frontend/src/style/index.css b/src/frontend/src/style/index.css index fb4242661..e744fb503 100644 --- a/src/frontend/src/style/index.css +++ b/src/frontend/src/style/index.css @@ -50,6 +50,9 @@ --component-icon: #d8598a; --flow-icon: #2f67d0; + --hover: #F2F4F5; + --disabled-run: #6366f1; + /* Colors that are shared in dark and light mode */ --blur-shared: #151923de; @@ -69,6 +72,8 @@ --background: 224 35% 7.5%; /* hsl(224 40% 10%) */ --foreground: 213 31% 80%; /* hsl(213 31% 91%) */ --ice: #60A5FA; + --hover: #1A202E; + --disabled-run: #6366f1; --muted: 223 27% 11%; /* hsl(223 27% 11%) */ --muted-foreground: 215.4 16.3% 56.9%; /* hsl(215 16% 56%) */ diff --git a/src/frontend/src/types/api/index.ts b/src/frontend/src/types/api/index.ts index 7b33b9ffb..172f01dad 100644 --- a/src/frontend/src/types/api/index.ts +++ b/src/frontend/src/types/api/index.ts @@ -27,8 +27,9 @@ export type APIClassType = { documentation: 
string; error?: string; official?: boolean; - pinned?: boolean; + frozen?: boolean; flow?: FlowType; + field_order?: string[]; [key: string]: | Array | string @@ -54,7 +55,9 @@ export type TemplateVariableType = { input_types?: Array; display_name?: string; name?: string; - refresh?: boolean; + real_time_refresh?: boolean; + refresh_button?: boolean; + refresh_button_text?: string; [key: string]: any; }; export type sendAllProps = { @@ -141,8 +144,8 @@ export type VerticesOrderTypeAPI = { export type VertexBuildTypeAPI = { id: string; + inactivated_vertices: Array | null; next_vertices_ids: Array; - inactive_vertices: Array | null; run_id: string; valid: boolean; params: string; @@ -158,3 +161,17 @@ export type VertexDataTypeAPI = { timedelta?: number; duration?: string; }; + +export type CodeErrorDataTypeAPI = { + error: string | undefined; + traceback: string | undefined; +}; + +// the error above is inside this error.response.data.detail.error +// which comes from a request to the API +// to type the error we need to know the structure of the object + +// error that has a response, that has a data, that has a detail, that has an error +export type ResponseErrorTypeAPI = { + response: { data: { detail: CodeErrorDataTypeAPI } }; +}; diff --git a/src/frontend/src/types/components/index.ts b/src/frontend/src/types/components/index.ts index 87e9a4592..78333add9 100644 --- a/src/frontend/src/types/components/index.ts +++ b/src/frontend/src/types/components/index.ts @@ -30,6 +30,7 @@ export type ToggleComponentType = { editNode?: boolean; }; export type DropDownComponentType = { + disabled?: boolean; isLoading?: boolean; value: string; options: string[]; @@ -70,6 +71,7 @@ export type KeyPairListComponentType = { editNode?: boolean; duplicateKey?: boolean; editNodeModal?: boolean; + isList?: boolean; }; export type DictComponentType = { diff --git a/src/frontend/src/types/flow/index.ts b/src/frontend/src/types/flow/index.ts index 967d4e424..8b259186a 100644 --- 
a/src/frontend/src/types/flow/index.ts +++ b/src/frontend/src/types/flow/index.ts @@ -13,6 +13,10 @@ export type FlowType = { updated_at?: string; date_created?: string; parent?: string; + folder?: string; + user_id?: string; + icon?: string; + icon_bg_color?: string; }; export type NodeType = { diff --git a/src/frontend/src/types/zustand/flow/index.ts b/src/frontend/src/types/zustand/flow/index.ts index a805b24fd..0f5fe6ecc 100644 --- a/src/frontend/src/types/zustand/flow/index.ts +++ b/src/frontend/src/types/zustand/flow/index.ts @@ -107,6 +107,7 @@ export type FlowStoreType = { runId: string; } | null ) => void; + addToVerticesBuild: (vertices: string[]) => void; removeFromVerticesBuild: (vertices: string[]) => void; verticesBuild: { verticesIds: string[]; diff --git a/src/frontend/src/types/zustand/flowsManager/index.ts b/src/frontend/src/types/zustand/flowsManager/index.ts index 471d4cf17..87fbf9a22 100644 --- a/src/frontend/src/types/zustand/flowsManager/index.ts +++ b/src/frontend/src/types/zustand/flowsManager/index.ts @@ -44,6 +44,8 @@ export type FlowsManagerStoreType = { undo: () => void; redo: () => void; takeSnapshot: () => void; + examples: Array; + setExamples: (examples: FlowType[]) => void; }; export type UseUndoRedoOptions = { diff --git a/src/frontend/src/utils/buildUtils.ts b/src/frontend/src/utils/buildUtils.ts index ba3327959..34edd1592 100644 --- a/src/frontend/src/utils/buildUtils.ts +++ b/src/frontend/src/utils/buildUtils.ts @@ -32,6 +32,7 @@ function getInactiveVertexData(vertexId: string): VertexBuildTypeAPI { id: vertexId, data: inactiveData, params: "Inactive", + inactivated_vertices: null, run_id: "", next_vertices_ids: [], inactive_vertices: null, diff --git a/src/frontend/src/utils/parameterUtils.ts b/src/frontend/src/utils/parameterUtils.ts new file mode 100644 index 000000000..9635ce96d --- /dev/null +++ b/src/frontend/src/utils/parameterUtils.ts @@ -0,0 +1,27 @@ +import { throttle } from "lodash"; +import { postCustomComponentUpdate } from "../controllers/API"; +import { ResponseErrorTypeAPI } from "../types/api"; +import { NodeDataType } from "../types/flow"; + +export const handleUpdateValues = async (name: string, data: NodeDataType) => { + const code = data.node?.template["code"]?.value; + if (!code) { + console.error("Code not found in the template"); + return; + } + try { + const res = await postCustomComponentUpdate( + code, + name, + data.node?.template[name]?.value + ); + if (res.status === 200 && data.node?.template) { + return res.data.template; + } + } catch (error) { + console.error("Error occurred while updating the node:", error); + throw error as ResponseErrorTypeAPI; + } +}; + +export const throttledHandleUpdateValues = throttle(handleUpdateValues, 10); diff --git a/src/frontend/src/utils/styleUtils.ts b/src/frontend/src/utils/styleUtils.ts index 67b38f6ac..f2d28fa28 100644 --- a/src/frontend/src/utils/styleUtils.ts +++ b/src/frontend/src/utils/styleUtils.ts @@ -4,13 +4,13 @@ import { ArrowLeft, ArrowUpToLine, Bell, + Binary, BookMarked, BookmarkPlus, Bot, - Snowflake, Boxes, Braces, - Cable, + BrainCircuit, Check, CheckCircle2, ChevronDown, @@ -31,7 +31,9 @@ import { Compass, Copy, Cpu, + Database, Delete, + Dot, Download, DownloadCloud, Edit, @@ -41,12 +43,13 @@ import { EyeOff, File, FileDown, + SquarePen, FileSearch, FileSearch2, + FileSliders, FileText, FileType2, FileUp, - Fingerprint, FlaskConical, FolderPlus, FormInput, @@ -64,7 +67,6 @@ import { Key, Laptop2, Layers, - Lightbulb, Link, Loader2, Lock, @@ -81,16 +83,19 @@ import { MoonIcon, MoreHorizontal, Network, + Package2, Paperclip, Pencil, PencilLine, Pin, Play, Plus, + PlusCircle, + PlusSquare, + PocketKnife, Redo, RefreshCcw, Repeat, - Rocket, Save, SaveAll, Scissors, @@ -101,6 +106,7 @@ import { Share2, Shield, Sliders, + Snowflake, 
Sparkles, Square, Store, @@ -123,7 +129,6 @@ import { Variable, Wand2, Workflow, - Wrench, X, XCircle, Zap, @@ -206,6 +211,9 @@ export const gradients = [ ]; export const nodeColors: { [char: string]: string } = { + inputs: "#9AAE42", + outputs: "#AA2411", + data: "#6344BE", prompts: "#4367BF", models: "#AA2411", model_specs: "#6344BE", @@ -225,19 +233,23 @@ export const nodeColors: { [char: string]: string } = { textsplitters: "#B47CB5", toolkits: "#DB2C2C", wrappers: "#E6277A", - utilities: "#31A3CC", + helpers: "#31A3CC", + experimental: "#E6277A", + langchain_utilities: "#31A3CC", output_parsers: "#E6A627", str: "#31a3cc", Text: "#31a3cc", retrievers: "#e6b25a", unknown: "#9CA3AF", custom_components: "#ab11ab", - io: "#e6b25a", }; export const nodeNames: { [char: string]: string } = { + inputs: "Inputs", + outputs: "Outputs", + data: "Data", prompts: "Prompts", - models: "Language Models", + models: "Models", model_specs: "Model Specs", chains: "Chains", agents: "Agents", @@ -253,14 +265,18 @@ export const nodeNames: { [char: string]: string } = { wrappers: "Wrappers", textsplitters: "Text Splitters", retrievers: "Retrievers", - utilities: "Utilities", + helpers: "Helpers", + experimental: "Experimental", + langchain_utilities: "Utilities", output_parsers: "Output Parsers", custom_components: "Custom", - io: "I/O", unknown: "Other", }; export const nodeIconsLucide: iconsType = { + inputs: Download, + outputs: Upload, + data: Database, AzureChatOpenAi: AzureIcon, Ollama: OllamaIcon, ChatOllama: OllamaIcon, @@ -323,26 +339,28 @@ export const nodeIconsLucide: iconsType = { VertexAIEmbeddings: VertexAIIcon, Share3: ShareIcon, Share4: Share2Icon, - agents: Rocket, + agents: Bot, Workflow, User, WikipediaAPIWrapper: SvgWikipedia, chains: Link, memories: Cpu, - models: Bot, - model_specs: Lightbulb, + models: BrainCircuit, + model_specs: FileSliders, prompts: TerminalSquare, - tools: Wrench, + tools: Hammer, advanced: Laptop2, chat: MessageCircle, - embeddings: 
Fingerprint, + embeddings: Binary, saved_components: GradientSave, documentloaders: Paperclip, vectorstores: Layers, - toolkits: Hammer, + toolkits: Package2, textsplitters: Scissors, wrappers: Gift, - utilities: Wand2, + helpers: Wand2, + experimental: FlaskConical, + langchain_utilities: PocketKnife, WolframAlphaAPIWrapper: SvgWolfram, output_parsers: Compass, retrievers: FileSearch, @@ -359,6 +377,7 @@ export const nodeIconsLucide: iconsType = { XCircle, Info, CheckCircle2, + SquarePen, Zap, MessagesSquare, ExternalLink, @@ -383,6 +402,8 @@ export const nodeIconsLucide: iconsType = { Circle, CircleDot, Clipboard, + PlusCircle, + PlusSquare, Code2, Variable, Snowflake, @@ -449,7 +470,6 @@ export const nodeIconsLucide: iconsType = { TerminalSquare, TextCursorInput, Repeat, - io: Cable, Sliders, ScreenShare, Code, @@ -461,4 +481,5 @@ export const nodeIconsLucide: iconsType = { Delete, Command, ArrowBigUp, + Dot, }; diff --git a/src/frontend/src/utils/utils.ts b/src/frontend/src/utils/utils.ts index 22fe93185..9651169eb 100644 --- a/src/frontend/src/utils/utils.ts +++ b/src/frontend/src/utils/utils.ts @@ -1,5 +1,6 @@ import clsx, { ClassValue } from "clsx"; import { twMerge } from "tailwind-merge"; +import { priorityFields } from "../constants/constants"; import { ADJECTIVES, DESCRIPTIONS, NOUNS } from "../flow_constants"; import { APIDataType, @@ -143,7 +144,7 @@ export function groupByFamily( // if the flow exists for (const node of flow) { // for each node of the flow - if (node!.data!.node!.flow) break; // do nothing if the node is a group + if (node!.data!.node!.flow || !node!.data!.node!.template) break; // do nothing if the node is a group const nodeData = node.data; const foundNode = checkedNodes.get(nodeData.type); // check whether the node type has already been checked @@ -642,3 +643,42 @@ export function getFieldTitle( ? template[templateField].display_name! : template[templateField].name ?? 
templateField; } + +export function sortFields(a: string, b: string, fieldOrder: string[]): number { + // Early return for empty fields + if (!a && !b) return 0; + if (!a) return 1; + if (!b) return -1; + + // Normalize the case to ensure case-insensitive comparison + const normalizedFieldA = a.toLowerCase(); + const normalizedFieldB = b.toLowerCase(); + + const aIsPriority = priorityFields.has(normalizedFieldA); + const bIsPriority = priorityFields.has(normalizedFieldB); + + // Sort by priority + if (aIsPriority && !bIsPriority) return -1; + if (!aIsPriority && bIsPriority) return 1; + + // Check if either field is in the fieldOrder array + const indexOfA = fieldOrder.indexOf(normalizedFieldA); + const indexOfB = fieldOrder.indexOf(normalizedFieldB); + + // If both fields are in fieldOrder, sort by their order in the array + if (indexOfA !== -1 && indexOfB !== -1) { + return indexOfA - indexOfB; + } + + // If only one of the fields is in fieldOrder, that field comes first + if (indexOfA !== -1) { + return -1; + } + if (indexOfB !== -1) { + return 1; + } + + // Fields in neither priorityFields nor fieldOrder + // fall back to alphabetical order + return a.localeCompare(b); +} diff --git a/src/frontend/tailwind.config.js b/src/frontend/tailwind.config.js index 80a5416c3..1dbe0b02d 100644 --- a/src/frontend/tailwind.config.js +++ b/src/frontend/tailwind.config.js @@ -88,7 +88,7 @@ module.exports = { "chat-bot-icon": "var(--chat-bot-icon)", "chat-user-icon": "var(--chat-user-icon)", "ice": "var(--ice)", - + hover: "var(--hover)", white: "var(--white)", border: "hsl(var(--border))", input: "hsl(var(--input))", @@ -224,5 +224,6 @@ module.exports = { }), require("@tailwindcss/typography"), require("daisyui"), + require("tailwindcss-dotted-background"), ], }; diff --git a/src/frontend/tests/end-to-end/assets/flow_group_test.json b/src/frontend/tests/end-to-end/assets/flow_group_test.json index d8d9a2a87..9df38cfb3 100644 --- 
a/src/frontend/tests/end-to-end/assets/flow_group_test.json +++ b/src/frontend/tests/end-to-end/assets/flow_group_test.json @@ -1 +1,509 @@ -{"id":"8404c1fc-1bce-43b4-a8bc-3febea587fc8","data":{"nodes":[{"id":"PythonFunctionTool-RfJui","type":"genericNode","position":{"x":117.54690105175428,"y":-84.2465475108354},"data":{"type":"PythonFunctionTool","node":{"template":{"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"\ndef python_function(text: str) -> str:\n \"\"\"This is a default python function that returns the input text\"\"\"\n return text\n","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":false,"dynamic":false,"info":"","title_case":false},"description":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"Returns the Text you send. This is a testing tool.","fileTypes":[],"file_path":"","password":false,"name":"description","advanced":false,"dynamic":false,"info":"","title_case":false,"input_types":["Text"]},"name":{"type":"str","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"value":"PythonFunction","fileTypes":[],"file_path":"","password":false,"name":"name","advanced":false,"dynamic":false,"info":"","title_case":false,"input_types":["Text"]},"return_direct":{"type":"bool","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"value":false,"fileTypes":[],"file_path":"","password":false,"name":"return_direct","advanced":false,"dynamic":false,"info":"","title_case":false},"_type":"PythonFunctionTool"},"description":"Python function to be 
executed.","base_classes":["BaseTool","Tool"],"display_name":"PythonFunctionTool","documentation":"","custom_fields":{},"output_types":[],"field_formatters":{},"pinned":false,"beta":false},"id":"PythonFunctionTool-RfJui"},"selected":true,"width":384,"height":466,"positionAbsolute":{"x":117.54690105175428,"y":-84.2465475108354},"dragging":false},{"id":"AgentInitializer-tPdJw","type":"genericNode","position":{"x":677.68677055088,"y":127.19859565276168},"data":{"type":"AgentInitializer","node":{"template":{"llm":{"type":"BaseLanguageModel","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"llm","display_name":"Language Model","advanced":false,"dynamic":false,"info":"","title_case":false},"memory":{"type":"BaseChatMemory","required":false,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"memory","display_name":"Memory","advanced":false,"dynamic":false,"info":"","title_case":false},"tools":{"type":"Tool","required":true,"placeholder":"","list":true,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"tools","display_name":"Tools","advanced":false,"dynamic":false,"info":"","title_case":false},"agent":{"type":"str","required":true,"placeholder":"","list":true,"show":true,"multiline":false,"value":"zero-shot-react-description","fileTypes":[],"file_path":"","password":false,"options":["zero-shot-react-description","react-docstore","self-ask-with-search","conversational-react-description","chat-zero-shot-react-description","chat-conversational-react-description","structured-chat-zero-shot-react-description","openai-functions","openai-multi-functions","JsonAgent","CSVAgent","VectorStoreAgent","VectorStoreRouterAgent","SQLAgent"],"name":"agent","display_name":"Agent 
Type","advanced":false,"dynamic":false,"info":"","title_case":false,"input_types":["Text"]},"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"from typing import Callable, List, Optional, Union\n\nfrom langchain.agents import AgentExecutor, AgentType, initialize_agent, types\nfrom langflow import CustomComponent\nfrom langflow.field_typing import BaseChatMemory, BaseLanguageModel, Tool\n\n\nclass AgentInitializerComponent(CustomComponent):\n display_name: str = \"Agent Initializer\"\n description: str = \"Initialize a Langchain Agent.\"\n documentation: str = \"https://python.langchain.com/docs/modules/agents/agent_types/\"\n\n def build_config(self):\n agents = list(types.AGENT_TO_CLASS.keys())\n # field_type and required are optional\n return {\n \"agent\": {\"options\": agents, \"value\": agents[0], \"display_name\": \"Agent Type\"},\n \"max_iterations\": {\"display_name\": \"Max Iterations\", \"value\": 10},\n \"memory\": {\"display_name\": \"Memory\"},\n \"tools\": {\"display_name\": \"Tools\"},\n \"llm\": {\"display_name\": \"Language Model\"},\n \"code\": {\"advanced\": True},\n }\n\n def build(\n self,\n agent: str,\n llm: BaseLanguageModel,\n tools: List[Tool],\n max_iterations: int,\n memory: Optional[BaseChatMemory] = None,\n ) -> Union[AgentExecutor, Callable]:\n agent = AgentType(agent)\n if memory:\n return initialize_agent(\n tools=tools,\n llm=llm,\n agent=agent,\n memory=memory,\n return_intermediate_steps=True,\n handle_parsing_errors=True,\n max_iterations=max_iterations,\n )\n return initialize_agent(\n tools=tools,\n llm=llm,\n agent=agent,\n return_intermediate_steps=True,\n handle_parsing_errors=True,\n max_iterations=max_iterations,\n 
)\n","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":true,"dynamic":true,"info":"","title_case":false},"max_iterations":{"type":"int","required":true,"placeholder":"","list":false,"show":true,"multiline":false,"value":10,"fileTypes":[],"file_path":"","password":false,"name":"max_iterations","display_name":"Max Iterations","advanced":false,"dynamic":false,"info":"","title_case":false},"_type":"CustomComponent"},"description":"Initialize a Langchain Agent.","base_classes":["Runnable","Chain","Serializable","object","AgentExecutor","Generic","RunnableSerializable","Callable"],"display_name":"Agent Initializer","documentation":"https://python.langchain.com/docs/modules/agents/agent_types/","custom_fields":{"agent":null,"llm":null,"tools":null,"max_iterations":null,"memory":null},"output_types":["AgentExecutor","Callable"],"field_formatters":{},"pinned":false,"beta":true},"id":"AgentInitializer-tPdJw"},"selected":false,"width":384,"height":522},{"id":"ChatOpenAISpecs-stxRM","type":"genericNode","position":{"x":18.226716205350385,"y":432.6122491402193},"data":{"type":"ChatOpenAISpecs","node":{"template":{"code":{"type":"code","required":true,"placeholder":"","list":false,"show":true,"multiline":true,"value":"from typing import Optional, Union\n\nfrom langchain.llms import BaseLLM\nfrom langchain_community.chat_models.openai import ChatOpenAI\nfrom langflow import CustomComponent\nfrom langflow.field_typing import BaseLanguageModel, NestedDict\n\n\nclass ChatOpenAIComponent(CustomComponent):\n display_name = \"ChatOpenAI\"\n description = \"`OpenAI` Chat large language models API.\"\n icon = \"OpenAI\"\n\n def build_config(self):\n return {\n \"max_tokens\": {\n \"display_name\": \"Max Tokens\",\n \"advanced\": False,\n \"required\": False,\n },\n \"model_kwargs\": {\n \"display_name\": \"Model Kwargs\",\n \"advanced\": True,\n \"required\": False,\n },\n \"model_name\": {\n \"display_name\": \"Model Name\",\n \"advanced\": False,\n \"required\": 
False,\n \"options\": [\n \"gpt-4-turbo-preview\",\n \"gpt-4-0125-preview\",\n \"gpt-4-1106-preview\",\n \"gpt-4-vision-preview\",\n \"gpt-3.5-turbo-0125\",\n \"gpt-3.5-turbo-1106\",\n ],\n },\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"advanced\": False,\n \"required\": False,\n \"info\": (\n \"The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\\n\\n\"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\"\n ),\n },\n \"openai_api_key\": {\n \"display_name\": \"OpenAI API Key\",\n \"advanced\": False,\n \"required\": False,\n \"password\": True,\n },\n \"temperature\": {\n \"display_name\": \"Temperature\",\n \"advanced\": False,\n \"required\": False,\n \"value\": 0.7,\n },\n }\n\n def build(\n self,\n max_tokens: Optional[int] = 256,\n model_kwargs: NestedDict = {},\n model_name: str = \"gpt-4-1106-preview\",\n openai_api_base: Optional[str] = None,\n openai_api_key: Optional[str] = None,\n temperature: float = 0.7,\n ) -> Union[BaseLanguageModel, BaseLLM]:\n if not openai_api_base:\n openai_api_base = \"https://api.openai.com/v1\"\n return ChatOpenAI(\n max_tokens=max_tokens,\n model_kwargs=model_kwargs,\n model=model_name,\n base_url=openai_api_base,\n api_key=openai_api_key,\n temperature=temperature,\n )\n","fileTypes":[],"file_path":"","password":false,"name":"code","advanced":false,"dynamic":true,"info":"","title_case":false},"max_tokens":{"type":"int","required":false,"placeholder":"","list":false,"show":true,"multiline":false,"value":256,"fileTypes":[],"file_path":"","password":false,"name":"max_tokens","display_name":"Max Tokens","advanced":false,"dynamic":false,"info":"","title_case":false},"model_kwargs":{"type":"NestedDict","required":false,"placeholder":"","list":false,"show":true,"multiline":false,"value":{},"fileTypes":[],"file_path":"","password":false,"name":"model_kwargs","display_name":"Model 
Kwargs","advanced":true,"dynamic":false,"info":"","title_case":false},"model_name":{"type":"str","required":false,"placeholder":"","list":true,"show":true,"multiline":false,"value":"gpt-4-1106-preview","fileTypes":[],"file_path":"","password":false,"options":["gpt-4-turbo-preview","gpt-4-0125-preview","gpt-4-1106-preview","gpt-4-vision-preview","gpt-3.5-turbo-0125","gpt-3.5-turbo-1106"],"name":"model_name","display_name":"Model Name","advanced":false,"dynamic":false,"info":"","title_case":false,"input_types":["Text"]},"openai_api_base":{"type":"str","required":false,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":false,"name":"openai_api_base","display_name":"OpenAI API Base","advanced":false,"dynamic":false,"info":"The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\n\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.","title_case":false,"input_types":["Text"]},"openai_api_key":{"type":"str","required":false,"placeholder":"","list":false,"show":true,"multiline":false,"fileTypes":[],"file_path":"","password":true,"name":"openai_api_key","display_name":"OpenAI API Key","advanced":false,"dynamic":false,"info":"","title_case":false,"input_types":["Text"]},"temperature":{"type":"float","required":false,"placeholder":"","list":false,"show":true,"multiline":false,"value":0.7,"fileTypes":[],"file_path":"","password":false,"name":"temperature","display_name":"Temperature","advanced":false,"dynamic":false,"info":"","rangeSpec":{"min":-1,"max":1,"step":0.1},"title_case":false},"_type":"CustomComponent"},"description":"`OpenAI` Chat large language models 
API.","icon":"OpenAI","base_classes":["Runnable","BaseLLM","Serializable","BaseLanguageModel","object","Generic","RunnableSerializable"],"display_name":"ChatOpenAI","documentation":"","custom_fields":{"max_tokens":null,"model_kwargs":null,"model_name":null,"openai_api_base":null,"openai_api_key":null,"temperature":null},"output_types":["BaseLanguageModel","BaseLLM"],"field_formatters":{},"pinned":false,"beta":true},"id":"ChatOpenAISpecs-stxRM"},"selected":false,"width":384,"height":666,"positionAbsolute":{"x":18.226716205350385,"y":432.6122491402193},"dragging":false}],"edges":[{"source":"ChatOpenAISpecs-stxRM","sourceHandle":"{œbaseClassesœ:[œRunnableœ,œBaseLLMœ,œSerializableœ,œBaseLanguageModelœ,œobjectœ,œGenericœ,œRunnableSerializableœ],œdataTypeœ:œChatOpenAISpecsœ,œidœ:œChatOpenAISpecs-stxRMœ}","target":"AgentInitializer-tPdJw","targetHandle":"{œfieldNameœ:œllmœ,œidœ:œAgentInitializer-tPdJwœ,œinputTypesœ:null,œtypeœ:œBaseLanguageModelœ}","data":{"targetHandle":{"fieldName":"llm","id":"AgentInitializer-tPdJw","inputTypes":null,"type":"BaseLanguageModel"},"sourceHandle":{"baseClasses":["Runnable","BaseLLM","Serializable","BaseLanguageModel","object","Generic","RunnableSerializable"],"dataType":"ChatOpenAISpecs","id":"ChatOpenAISpecs-stxRM"}},"style":{"stroke":"#555"},"className":"stroke-foreground 
stroke-connection","id":"reactflow__edge-ChatOpenAISpecs-stxRM{œbaseClassesœ:[œRunnableœ,œBaseLLMœ,œSerializableœ,œBaseLanguageModelœ,œobjectœ,œGenericœ,œRunnableSerializableœ],œdataTypeœ:œChatOpenAISpecsœ,œidœ:œChatOpenAISpecs-stxRMœ}-AgentInitializer-tPdJw{œfieldNameœ:œllmœ,œidœ:œAgentInitializer-tPdJwœ,œinputTypesœ:null,œtypeœ:œBaseLanguageModelœ}"},{"source":"PythonFunctionTool-RfJui","sourceHandle":"{œbaseClassesœ:[œBaseToolœ,œToolœ],œdataTypeœ:œPythonFunctionToolœ,œidœ:œPythonFunctionTool-RfJuiœ}","target":"AgentInitializer-tPdJw","targetHandle":"{œfieldNameœ:œtoolsœ,œidœ:œAgentInitializer-tPdJwœ,œinputTypesœ:null,œtypeœ:œToolœ}","data":{"targetHandle":{"fieldName":"tools","id":"AgentInitializer-tPdJw","inputTypes":null,"type":"Tool"},"sourceHandle":{"baseClasses":["BaseTool","Tool"],"dataType":"PythonFunctionTool","id":"PythonFunctionTool-RfJui"}},"style":{"stroke":"#555"},"className":"stroke-foreground stroke-connection","id":"reactflow__edge-PythonFunctionTool-RfJui{œbaseClassesœ:[œBaseToolœ,œToolœ],œdataTypeœ:œPythonFunctionToolœ,œidœ:œPythonFunctionTool-RfJuiœ}-AgentInitializer-tPdJw{œfieldNameœ:œtoolsœ,œidœ:œAgentInitializer-tPdJwœ,œinputTypesœ:null,œtypeœ:œToolœ}"}],"viewport":{"x":37.63043052737157,"y":71.47518177614131,"zoom":0.5140569133280332}},"description":"Uncover Business Opportunities with NLP.","name":"Untitled document (20)","last_tested_version":"0.7.0a0","is_component":false} \ No newline at end of file +{ + "id": "8404c1fc-1bce-43b4-a8bc-3febea587fc8", + "data": { + "nodes": [ + { + "id": "PythonFunctionTool-RfJui", + "type": "genericNode", + "position": { "x": 117.54690105175428, "y": -84.2465475108354 }, + "data": { + "type": "PythonFunctionTool", + "node": { + "template": { + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "\ndef python_function(text: str) -> str:\n \"\"\"This is a default python function that returns the input text\"\"\"\n return 
text\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "description": { + "type": "str", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "Returns the Text you send. This is a testing tool.", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "description", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": ["Text"] + }, + "name": { + "type": "str", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": "PythonFunction", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "name", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": ["Text"] + }, + "return_direct": { + "type": "bool", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "return_direct", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "_type": "PythonFunctionTool" + }, + "description": "Python function to be executed.", + "base_classes": ["BaseTool", "Tool"], + "display_name": "PythonFunctionTool", + "documentation": "", + "custom_fields": {}, + "output_types": [], + "field_formatters": {}, + "pinned": false, + "beta": false + }, + "id": "PythonFunctionTool-RfJui" + }, + "selected": true, + "width": 384, + "height": 466, + "positionAbsolute": { "x": 117.54690105175428, "y": -84.2465475108354 }, + "dragging": false + }, + { + "id": "AgentInitializer-tPdJw", + "type": "genericNode", + "position": { "x": 677.68677055088, "y": 127.19859565276168 }, + "data": { + "type": "AgentInitializer", + "node": { + "template": { + "llm": { + "type": "BaseLanguageModel", + "required": true, + "placeholder": "", + 
"list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "llm", + "display_name": "Language Model", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "memory": { + "type": "BaseChatMemory", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "memory", + "display_name": "Memory", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "tools": { + "type": "Tool", + "required": true, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "tools", + "display_name": "Tools", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "agent": { + "type": "str", + "required": true, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "value": "zero-shot-react-description", + "fileTypes": [], + "file_path": "", + "password": false, + "options": [ + "zero-shot-react-description", + "react-docstore", + "self-ask-with-search", + "conversational-react-description", + "chat-zero-shot-react-description", + "chat-conversational-react-description", + "structured-chat-zero-shot-react-description", + "openai-functions", + "openai-multi-functions", + "JsonAgent", + "CSVAgent", + "VectorStoreAgent", + "VectorStoreRouterAgent", + "SQLAgent" + ], + "name": "agent", + "display_name": "Agent Type", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": ["Text"] + }, + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Callable, List, Optional, Union\n\nfrom langchain.agents import AgentExecutor, AgentType, initialize_agent, types\nfrom langflow import CustomComponent\nfrom 
langflow.field_typing import BaseChatMemory, BaseLanguageModel, Tool\n\n\nclass AgentInitializerComponent(CustomComponent):\n display_name: str = \"Agent Initializer\"\n description: str = \"Initialize a Langchain Agent.\"\n documentation: str = \"https://python.langchain.com/docs/modules/agents/agent_types/\"\n\n def build_config(self):\n agents = list(types.AGENT_TO_CLASS.keys())\n # field_type and required are optional\n return {\n \"agent\": {\"options\": agents, \"value\": agents[0], \"display_name\": \"Agent Type\"},\n \"max_iterations\": {\"display_name\": \"Max Iterations\", \"value\": 10},\n \"memory\": {\"display_name\": \"Memory\"},\n \"tools\": {\"display_name\": \"Tools\"},\n \"llm\": {\"display_name\": \"Language Model\"},\n \"code\": {\"advanced\": True},\n }\n\n def build(\n self,\n agent: str,\n llm: BaseLanguageModel,\n tools: List[Tool],\n max_iterations: int,\n memory: Optional[BaseChatMemory] = None,\n ) -> Union[AgentExecutor, Callable]:\n agent = AgentType(agent)\n if memory:\n return initialize_agent(\n tools=tools,\n llm=llm,\n agent=agent,\n memory=memory,\n return_intermediate_steps=True,\n handle_parsing_errors=True,\n max_iterations=max_iterations,\n )\n return initialize_agent(\n tools=tools,\n llm=llm,\n agent=agent,\n return_intermediate_steps=True,\n handle_parsing_errors=True,\n max_iterations=max_iterations,\n )\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": true, + "dynamic": true, + "info": "", + "title_case": false + }, + "max_iterations": { + "type": "int", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": 10, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "max_iterations", + "display_name": "Max Iterations", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "_type": "CustomComponent" + }, + "description": "Initialize a Langchain Agent.", + "base_classes": [ + 
"Runnable", + "Chain", + "Serializable", + "object", + "AgentExecutor", + "Generic", + "RunnableSerializable", + "Callable" + ], + "display_name": "Agent Initializer", + "documentation": "https://python.langchain.com/docs/modules/agents/agent_types/", + "custom_fields": { + "agent": null, + "llm": null, + "tools": null, + "max_iterations": null, + "memory": null + }, + "output_types": ["AgentExecutor", "Callable"], + "field_formatters": {}, + "pinned": false, + "beta": true + }, + "id": "AgentInitializer-tPdJw" + }, + "selected": false, + "width": 384, + "height": 522 + }, + { + "id": "ChatOpenAISpecs-stxRM", + "type": "genericNode", + "position": { "x": 18.226716205350385, "y": 432.6122491402193 }, + "data": { + "type": "ChatOpenAISpecs", + "node": { + "template": { + "code": { + "type": "code", + "required": true, + "placeholder": "", + "list": false, + "show": true, + "multiline": true, + "value": "from typing import Optional, Union\n\nfrom langchain.llms import BaseLLM\nfrom langchain_community.chat_models.openai import ChatOpenAI\nfrom langflow import CustomComponent\nfrom langflow.field_typing import BaseLanguageModel, NestedDict\n\n\nclass ChatOpenAIComponent(CustomComponent):\n display_name = \"ChatOpenAI\"\n description = \"`OpenAI` Chat large language models API.\"\n icon = \"OpenAI\"\n\n def build_config(self):\n return {\n \"max_tokens\": {\n \"display_name\": \"Max Tokens\",\n \"advanced\": False,\n \"required\": False,\n },\n \"model_kwargs\": {\n \"display_name\": \"Model Kwargs\",\n \"advanced\": True,\n \"required\": False,\n },\n \"model_name\": {\n \"display_name\": \"Model Name\",\n \"advanced\": False,\n \"required\": False,\n \"options\": [\n \"gpt-4-turbo-preview\",\n \"gpt-4-0125-preview\",\n \"gpt-4-1106-preview\",\n \"gpt-4-vision-preview\",\n \"gpt-3.5-turbo-0125\",\n \"gpt-3.5-turbo-1106\",\n ],\n },\n \"openai_api_base\": {\n \"display_name\": \"OpenAI API Base\",\n \"advanced\": False,\n \"required\": False,\n \"info\": (\n \"The base 
URL of the OpenAI API. Defaults to https://api.openai.com/v1.\\n\\n\"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\"\n ),\n },\n \"openai_api_key\": {\n \"display_name\": \"OpenAI API Key\",\n \"advanced\": False,\n \"required\": False,\n \"password\": True,\n },\n \"temperature\": {\n \"display_name\": \"Temperature\",\n \"advanced\": False,\n \"required\": False,\n \"value\": 0.7,\n },\n }\n\n def build(\n self,\n max_tokens: Optional[int] = 256,\n model_kwargs: NestedDict = {},\n model_name: str = \"gpt-4-1106-preview\",\n openai_api_base: Optional[str] = None,\n openai_api_key: Optional[str] = None,\n temperature: float = 0.7,\n ) -> Union[BaseLanguageModel, BaseLLM]:\n if not openai_api_base:\n openai_api_base = \"https://api.openai.com/v1\"\n return ChatOpenAI(\n max_tokens=max_tokens,\n model_kwargs=model_kwargs,\n model=model_name,\n base_url=openai_api_base,\n api_key=openai_api_key,\n temperature=temperature,\n )\n", + "fileTypes": [], + "file_path": "", + "password": false, + "name": "code", + "advanced": false, + "dynamic": true, + "info": "", + "title_case": false + }, + "max_tokens": { + "type": "int", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": 256, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "max_tokens", + "display_name": "Max Tokens", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false + }, + "model_kwargs": { + "type": "NestedDict", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": {}, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "model_kwargs", + "display_name": "Model Kwargs", + "advanced": true, + "dynamic": false, + "info": "", + "title_case": false + }, + "model_name": { + "type": "str", + "required": false, + "placeholder": "", + "list": true, + "show": true, + "multiline": false, + "value": "gpt-4-1106-preview", + 
"fileTypes": [], + "file_path": "", + "password": false, + "options": [ + "gpt-4-turbo-preview", + "gpt-4-0125-preview", + "gpt-4-1106-preview", + "gpt-4-vision-preview", + "gpt-3.5-turbo-0125", + "gpt-3.5-turbo-1106" + ], + "name": "model_name", + "display_name": "Model Name", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": ["Text"] + }, + "openai_api_base": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "openai_api_base", + "display_name": "OpenAI API Base", + "advanced": false, + "dynamic": false, + "info": "The base URL of the OpenAI API. Defaults to https://api.openai.com/v1.\n\nYou can change this to use other APIs like JinaChat, LocalAI and Prem.", + "title_case": false, + "input_types": ["Text"] + }, + "openai_api_key": { + "type": "str", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "fileTypes": [], + "file_path": "", + "password": true, + "name": "openai_api_key", + "display_name": "OpenAI API Key", + "advanced": false, + "dynamic": false, + "info": "", + "title_case": false, + "input_types": ["Text"] + }, + "temperature": { + "type": "float", + "required": false, + "placeholder": "", + "list": false, + "show": true, + "multiline": false, + "value": 0.7, + "fileTypes": [], + "file_path": "", + "password": false, + "name": "temperature", + "display_name": "Temperature", + "advanced": false, + "dynamic": false, + "info": "", + "rangeSpec": { "min": -1, "max": 1, "step": 0.1 }, + "title_case": false + }, + "_type": "CustomComponent" + }, + "description": "`OpenAI` Chat large language models API.", + "icon": "OpenAI", + "base_classes": [ + "Runnable", + "BaseLLM", + "Serializable", + "BaseLanguageModel", + "object", + "Generic", + "RunnableSerializable" + ], + "display_name": "ChatOpenAI", + "documentation": "", + 
"custom_fields": { + "max_tokens": null, + "model_kwargs": null, + "model_name": null, + "openai_api_base": null, + "openai_api_key": null, + "temperature": null + }, + "output_types": ["BaseLanguageModel", "BaseLLM"], + "field_formatters": {}, + "pinned": false, + "beta": true + }, + "id": "ChatOpenAISpecs-stxRM" + }, + "selected": false, + "width": 384, + "height": 666, + "positionAbsolute": { "x": 18.226716205350385, "y": 432.6122491402193 }, + "dragging": false + } + ], + "edges": [ + { + "source": "ChatOpenAISpecs-stxRM", + "sourceHandle": "{œbaseClassesœ:[œRunnableœ,œBaseLLMœ,œSerializableœ,œBaseLanguageModelœ,œobjectœ,œGenericœ,œRunnableSerializableœ],œdataTypeœ:œChatOpenAISpecsœ,œidœ:œChatOpenAISpecs-stxRMœ}", + "target": "AgentInitializer-tPdJw", + "targetHandle": "{œfieldNameœ:œllmœ,œidœ:œAgentInitializer-tPdJwœ,œinputTypesœ:null,œtypeœ:œBaseLanguageModelœ}", + "data": { + "targetHandle": { + "fieldName": "llm", + "id": "AgentInitializer-tPdJw", + "inputTypes": null, + "type": "BaseLanguageModel" + }, + "sourceHandle": { + "baseClasses": [ + "Runnable", + "BaseLLM", + "Serializable", + "BaseLanguageModel", + "object", + "Generic", + "RunnableSerializable" + ], + "dataType": "ChatOpenAISpecs", + "id": "ChatOpenAISpecs-stxRM" + } + }, + "style": { "stroke": "#555" }, + "className": "stroke-foreground stroke-connection", + "id": "reactflow__edge-ChatOpenAISpecs-stxRM{œbaseClassesœ:[œRunnableœ,œBaseLLMœ,œSerializableœ,œBaseLanguageModelœ,œobjectœ,œGenericœ,œRunnableSerializableœ],œdataTypeœ:œChatOpenAISpecsœ,œidœ:œChatOpenAISpecs-stxRMœ}-AgentInitializer-tPdJw{œfieldNameœ:œllmœ,œidœ:œAgentInitializer-tPdJwœ,œinputTypesœ:null,œtypeœ:œBaseLanguageModelœ}" + }, + { + "source": "PythonFunctionTool-RfJui", + "sourceHandle": "{œbaseClassesœ:[œBaseToolœ,œToolœ],œdataTypeœ:œPythonFunctionToolœ,œidœ:œPythonFunctionTool-RfJuiœ}", + "target": "AgentInitializer-tPdJw", + "targetHandle": 
"{œfieldNameœ:œtoolsœ,œidœ:œAgentInitializer-tPdJwœ,œinputTypesœ:null,œtypeœ:œToolœ}", + "data": { + "targetHandle": { + "fieldName": "tools", + "id": "AgentInitializer-tPdJw", + "inputTypes": null, + "type": "Tool" + }, + "sourceHandle": { + "baseClasses": ["BaseTool", "Tool"], + "dataType": "PythonFunctionTool", + "id": "PythonFunctionTool-RfJui" + } + }, + "style": { "stroke": "#555" }, + "className": "stroke-foreground stroke-connection", + "id": "reactflow__edge-PythonFunctionTool-RfJui{œbaseClassesœ:[œBaseToolœ,œToolœ],œdataTypeœ:œPythonFunctionToolœ,œidœ:œPythonFunctionTool-RfJuiœ}-AgentInitializer-tPdJw{œfieldNameœ:œtoolsœ,œidœ:œAgentInitializer-tPdJwœ,œinputTypesœ:null,œtypeœ:œToolœ}" + } + ], + "viewport": { + "x": 37.63043052737157, + "y": 71.47518177614131, + "zoom": 0.5140569133280332 + } + }, + "description": "Uncover Business Opportunities with NLP.", + "name": "Untitled document (20)", + "last_tested_version": "0.7.0a0", + "is_component": false +} diff --git a/src/frontend/tests/end-to-end/codeAreaModalComponent.spec.ts b/src/frontend/tests/end-to-end/codeAreaModalComponent.spec.ts index 1f0c2f4ac..5e509cb01 100644 --- a/src/frontend/tests/end-to-end/codeAreaModalComponent.spec.ts +++ b/src/frontend/tests/end-to-end/codeAreaModalComponent.spec.ts @@ -37,9 +37,6 @@ test("CodeAreaModalComponent", async ({ page }) => { await page.getByTestId("more-options-modal").click(); await page.getByTestId("edit-button-modal").click(); - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeFalsy(); - await page.locator('//*[@id="showdescription"]').click(); expect( await page.locator('//*[@id="showdescription"]').isChecked() @@ -53,9 +50,6 @@ test("CodeAreaModalComponent", async ({ page }) => { await page.locator('//*[@id="showreturn_direct"]').isChecked() ).toBeFalsy(); - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeTruthy(); 
- await page.locator('//*[@id="showdescription"]').click(); expect( await page.locator('//*[@id="showdescription"]').isChecked() @@ -69,9 +63,6 @@ test("CodeAreaModalComponent", async ({ page }) => { await page.locator('//*[@id="showreturn_direct"]').isChecked() ).toBeTruthy(); - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeFalsy(); - await page.locator('//*[@id="showdescription"]').click(); expect( await page.locator('//*[@id="showdescription"]').isChecked() @@ -85,9 +76,6 @@ test("CodeAreaModalComponent", async ({ page }) => { await page.locator('//*[@id="showreturn_direct"]').isChecked() ).toBeFalsy(); - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeTruthy(); - await page.locator('//*[@id="showdescription"]').click(); expect( await page.locator('//*[@id="showdescription"]').isChecked() @@ -101,9 +89,6 @@ test("CodeAreaModalComponent", async ({ page }) => { await page.locator('//*[@id="showreturn_direct"]').isChecked() ).toBeTruthy(); - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeFalsy(); - await page.locator('//*[@id="saveChangesBtn"]').click(); const plusButtonLocator = page.locator('//*[@id="code-input-0"]'); @@ -114,22 +99,6 @@ test("CodeAreaModalComponent", async ({ page }) => { await page.getByTestId("more-options-modal").click(); await page.getByTestId("edit-button-modal").click(); - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeTruthy(); - - await page.locator('//*[@id="code-area-editcode"]').click(); - - let value = await page.locator('//*[@id="codeValue"]').inputValue(); - - if ( - value != - 'def python_function(text: str) -> str: """This is a default python function that returns the input text""" return text' - ) { - expect(false).toBeTruthy(); - } - - await 
page.locator('//*[@id="checkAndSaveBtn"]').click(); - await page.locator('//*[@id="saveChangesBtn"]').click(); await page.getByTestId("div-generic-node").click(); diff --git a/src/frontend/tests/end-to-end/dropdownComponent.spec.ts b/src/frontend/tests/end-to-end/dropdownComponent.spec.ts index 3fdb49d54..e8b462266 100644 --- a/src/frontend/tests/end-to-end/dropdownComponent.spec.ts +++ b/src/frontend/tests/end-to-end/dropdownComponent.spec.ts @@ -43,36 +43,132 @@ test("dropDownComponent", async ({ page }) => { expect(false).toBeTruthy(); } - // showcode - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeTruthy(); + await page.locator('//*[@id="showcache"]').click(); + expect(await page.locator('//*[@id="showcache"]').isChecked()).toBeFalsy(); + + await page.locator('//*[@id="showcache"]').click(); + expect(await page.locator('//*[@id="showcache"]').isChecked()).toBeTruthy(); + + await page.locator('//*[@id="showcredentials_profile_name"]').click(); + expect( + await page.locator('//*[@id="showcredentials_profile_name"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showcredentials_profile_name"]').click(); + expect( + await page.locator('//*[@id="showcredentials_profile_name"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showendpoint_url"]').click(); + expect( + await page.locator('//*[@id="showendpoint_url"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showendpoint_url"]').click(); + expect( + await page.locator('//*[@id="showendpoint_url"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showmodel_kwargs"]').click(); + expect( + await page.locator('//*[@id="showmodel_kwargs"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showmodel_kwargs"]').click(); + expect( + await page.locator('//*[@id="showmodel_kwargs"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showregion_name"]').click(); + 
expect( + await page.locator('//*[@id="showregion_name"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showregion_name"]').click(); + expect( + await page.locator('//*[@id="showregion_name"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showstreaming"]').click(); + expect( + await page.locator('//*[@id="showstreaming"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showstreaming"]').click(); + expect( + await page.locator('//*[@id="showstreaming"]').isChecked() + ).toBeTruthy(); // showmodel_id await page.locator('//*[@id="showmodel_id"]').click(); expect(await page.locator('//*[@id="showmodel_id"]').isChecked()).toBeFalsy(); - // showcode - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeFalsy(); - // showmodel_id await page.locator('//*[@id="showmodel_id"]').click(); expect( await page.locator('//*[@id="showmodel_id"]').isChecked() ).toBeTruthy(); - // showcode - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeTruthy(); + await page.locator('//*[@id="showcache"]').click(); + expect(await page.locator('//*[@id="showcache"]').isChecked()).toBeFalsy(); + + await page.locator('//*[@id="showcache"]').click(); + expect(await page.locator('//*[@id="showcache"]').isChecked()).toBeTruthy(); + + await page.locator('//*[@id="showcredentials_profile_name"]').click(); + expect( + await page.locator('//*[@id="showcredentials_profile_name"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showcredentials_profile_name"]').click(); + expect( + await page.locator('//*[@id="showcredentials_profile_name"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showendpoint_url"]').click(); + expect( + await page.locator('//*[@id="showendpoint_url"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showendpoint_url"]').click(); + expect( + await 
page.locator('//*[@id="showendpoint_url"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showmodel_kwargs"]').click(); + expect( + await page.locator('//*[@id="showmodel_kwargs"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showmodel_kwargs"]').click(); + expect( + await page.locator('//*[@id="showmodel_kwargs"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showregion_name"]').click(); + expect( + await page.locator('//*[@id="showregion_name"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showregion_name"]').click(); + expect( + await page.locator('//*[@id="showregion_name"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showstreaming"]').click(); + expect( + await page.locator('//*[@id="showstreaming"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showstreaming"]').click(); + expect( + await page.locator('//*[@id="showstreaming"]').isChecked() + ).toBeTruthy(); // showmodel_id await page.locator('//*[@id="showmodel_id"]').click(); expect(await page.locator('//*[@id="showmodel_id"]').isChecked()).toBeFalsy(); - // showcode - await page.locator('//*[@id="showcode"]').click(); - expect(await page.locator('//*[@id="showcode"]').isChecked()).toBeFalsy(); - // showmodel_id await page.locator('//*[@id="showmodel_id"]').click(); expect( diff --git a/src/frontend/tests/end-to-end/flowPage.spec.ts b/src/frontend/tests/end-to-end/flowPage.spec.ts index 7b509864f..78c7b3280 100644 --- a/src/frontend/tests/end-to-end/flowPage.spec.ts +++ b/src/frontend/tests/end-to-end/flowPage.spec.ts @@ -19,7 +19,7 @@ test.describe("Flow Page tests", () => { await page.waitForTimeout(2000); await page - .locator('//*[@id="custom_componentsCustomComponent"]') + .locator('//*[@id="utilitiesCustomComponent"]') .dragTo(page.locator('//*[@id="react-flow-id"]')); await page.mouse.up(); await page.mouse.down(); diff --git a/src/frontend/tests/end-to-end/intComponent.spec.ts 
b/src/frontend/tests/end-to-end/intComponent.spec.ts index 0cf355832..1acd958a7 100644 --- a/src/frontend/tests/end-to-end/intComponent.spec.ts +++ b/src/frontend/tests/end-to-end/intComponent.spec.ts @@ -8,32 +8,32 @@ test("IntComponent", async ({ page }) => { await page.waitForTimeout(2000); await page.getByPlaceholder("Search").click(); - await page.getByPlaceholder("Search").fill("getrequest"); + await page.getByPlaceholder("Search").fill("openai"); await page.waitForTimeout(2000); await page - .getByTestId("utilitiesGET Request") + .getByTestId("modelsOpenAI Model") .first() .dragTo(page.locator('//*[@id="react-flow-id"]')); await page.mouse.up(); await page.mouse.down(); - await page.getByTestId("int-input-timeout").click(); + await page.getByTestId("int-input-max_tokens").click(); await page - .getByTestId("int-input-timeout") + .getByTestId("int-input-max_tokens") .fill("123456789123456789123456789"); - let value = await page.getByTestId("int-input-timeout").inputValue(); + let value = await page.getByTestId("int-input-max_tokens").inputValue(); if (value != "123456789123456789123456789") { expect(false).toBeTruthy(); } - await page.getByTestId("int-input-timeout").click(); - await page.getByTestId("int-input-timeout").fill("0"); + await page.getByTestId("int-input-max_tokens").click(); + await page.getByTestId("int-input-max_tokens").fill("0"); - value = await page.getByTestId("int-input-timeout").inputValue(); + value = await page.getByTestId("int-input-max_tokens").inputValue(); if (value != "0") { expect(false).toBeTruthy(); @@ -42,35 +42,119 @@ test("IntComponent", async ({ page }) => { await page.getByTestId("more-options-modal").click(); await page.getByTestId("edit-button-modal").click(); - value = await page.getByTestId("edit-int-input-timeout").inputValue(); + value = await page.getByTestId("edit-int-input-max_tokens").inputValue(); if (value != "0") { expect(false).toBeTruthy(); } - await page.getByTestId("edit-int-input-timeout").click(); + 
await page.getByTestId("edit-int-input-max_tokens").click(); await page - .getByTestId("edit-int-input-timeout") + .getByTestId("edit-int-input-max_tokens") .fill("123456789123456789123456789"); - await page.locator('//*[@id="showheaders"]').click(); - expect(await page.locator('//*[@id="showheaders"]').isChecked()).toBeFalsy(); + await page.locator('//*[@id="showinput_value"]').click(); + expect( + await page.locator('//*[@id="showinput_value"]').isChecked() + ).toBeFalsy(); - await page.locator('//*[@id="showtimeout"]').click(); - expect(await page.locator('//*[@id="showtimeout"]').isChecked()).toBeFalsy(); + await page.locator('//*[@id="showmodel_kwargs"]').click(); + expect( + await page.locator('//*[@id="showmodel_kwargs"]').isChecked() + ).toBeTruthy(); - await page.locator('//*[@id="showurl"]').click(); - expect(await page.locator('//*[@id="showurl"]').isChecked()).toBeFalsy(); + await page.locator('//*[@id="showmodel_name"]').click(); + expect( + await page.locator('//*[@id="showmodel_name"]').isChecked() + ).toBeFalsy(); - await page.locator('//*[@id="showheaders"]').click(); - expect(await page.locator('//*[@id="showheaders"]').isChecked()).toBeTruthy(); + await page.locator('//*[@id="showopenai_api_base"]').click(); + expect( + await page.locator('//*[@id="showopenai_api_base"]').isChecked() + ).toBeFalsy(); - await page.locator('//*[@id="showurl"]').click(); - expect(await page.locator('//*[@id="showurl"]').isChecked()).toBeTruthy(); + await page.locator('//*[@id="showopenai_api_key"]').click(); + expect( + await page.locator('//*[@id="showopenai_api_key"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showstream"]').click(); + expect(await page.locator('//*[@id="showstream"]').isChecked()).toBeFalsy(); + + await page.locator('//*[@id="showtemperature"]').click(); + expect( + await page.locator('//*[@id="showtemperature"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showinput_value"]').click(); + expect( + await 
page.locator('//*[@id="showinput_value"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showmodel_kwargs"]').click(); + expect( + await page.locator('//*[@id="showmodel_kwargs"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showmodel_name"]').click(); + expect( + await page.locator('//*[@id="showmodel_name"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showopenai_api_base"]').click(); + expect( + await page.locator('//*[@id="showopenai_api_base"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showopenai_api_key"]').click(); + expect( + await page.locator('//*[@id="showopenai_api_key"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showstream"]').click(); + expect(await page.locator('//*[@id="showstream"]').isChecked()).toBeTruthy(); + + await page.locator('//*[@id="showtemperature"]').click(); + expect( + await page.locator('//*[@id="showtemperature"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showinput_value"]').click(); + expect( + await page.locator('//*[@id="showinput_value"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showmodel_kwargs"]').click(); + expect( + await page.locator('//*[@id="showmodel_kwargs"]').isChecked() + ).toBeTruthy(); + + await page.locator('//*[@id="showmodel_name"]').click(); + expect( + await page.locator('//*[@id="showmodel_name"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showopenai_api_base"]').click(); + expect( + await page.locator('//*[@id="showopenai_api_base"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showopenai_api_key"]').click(); + expect( + await page.locator('//*[@id="showopenai_api_key"]').isChecked() + ).toBeFalsy(); + + await page.locator('//*[@id="showstream"]').click(); + expect(await page.locator('//*[@id="showstream"]').isChecked()).toBeFalsy(); + + await page.locator('//*[@id="showtemperature"]').click(); + expect( + await 
page.locator('//*[@id="showtemperature"]').isChecked() + ).toBeFalsy(); await page.locator('//*[@id="saveChangesBtn"]').click(); - const plusButtonLocator = page.getByTestId("int-input-timeout"); + const plusButtonLocator = page.getByTestId("int-input-max_tokens"); const elementCount = await plusButtonLocator.count(); if (elementCount === 0) { expect(true).toBeTruthy(); @@ -84,7 +168,7 @@ test("IntComponent", async ({ page }) => { ).toBeTruthy(); const valueEditNode = await page - .getByTestId("edit-int-input-timeout") + .getByTestId("edit-int-input-max_tokens") .inputValue(); if (valueEditNode != "123456789123456789123456789") { @@ -92,19 +176,19 @@ test("IntComponent", async ({ page }) => { } await page.locator('//*[@id="saveChangesBtn"]').click(); - await page.getByTestId("int-input-timeout").click(); - await page.getByTestId("int-input-timeout").fill("3"); + await page.getByTestId("int-input-max_tokens").click(); + await page.getByTestId("int-input-max_tokens").fill("3"); - let value = await page.getByTestId("int-input-timeout").inputValue(); + let value = await page.getByTestId("int-input-max_tokens").inputValue(); if (value != "3") { expect(false).toBeTruthy(); } - await page.getByTestId("int-input-timeout").click(); - await page.getByTestId("int-input-timeout").fill("-3"); + await page.getByTestId("int-input-max_tokens").click(); + await page.getByTestId("int-input-max_tokens").fill("-3"); - value = await page.getByTestId("int-input-timeout").inputValue(); + value = await page.getByTestId("int-input-max_tokens").inputValue(); if (value != "0") { expect(false).toBeTruthy(); diff --git a/tests/test_initial_setup.py b/tests/test_initial_setup.py new file mode 100644 index 000000000..9b34f844d --- /dev/null +++ b/tests/test_initial_setup.py @@ -0,0 +1,84 @@ +from itertools import chain + +import pytest +from sqlalchemy import func +from sqlmodel import select + +from langflow.graph.graph.base import Graph +from langflow.graph.schema import ResultData +from 
langflow.initial_setup.setup import ( + STARTER_FOLDER_NAME, + create_or_update_starter_projects, + get_project_data, + load_starter_projects, +) +from langflow.services.database.models.flow.model import Flow +from langflow.services.deps import session_scope + + +def test_load_starter_projects(): + projects = load_starter_projects() + assert isinstance(projects, list) + assert all(isinstance(project, dict) for project in projects) + + +def test_get_project_data(): + projects = load_starter_projects() + for project in projects: + data = get_project_data(project) + assert all(d is not None for d in data) + + +def test_create_or_update_starter_projects(client): + with session_scope() as session: + # Run the function to create or update projects + create_or_update_starter_projects() + + # Get the number of projects returned by load_starter_projects + num_projects = len(load_starter_projects()) + + # Get the number of projects in the database + num_db_projects = session.exec( + select(func.count(Flow.id)).where(Flow.folder == STARTER_FOLDER_NAME) + ).one() + + # Check that the number of projects in the database is the same as the number of projects returned by load_starter_projects + assert num_db_projects == num_projects + + +@pytest.mark.asyncio +async def test_starter_project_can_run_successfully(client): + with session_scope() as session: + # Run the function to create or update projects + create_or_update_starter_projects() + + # Get the number of projects returned by load_starter_projects + num_projects = len(load_starter_projects()) + + # Get the number of projects in the database + num_db_projects = session.exec( + select(func.count(Flow.id)).where(Flow.folder == STARTER_FOLDER_NAME) + ).one() + + # Check that the number of projects in the database is the same as the number of projects returned by load_starter_projects + assert num_db_projects == num_projects + + # Get all the starter projects + projects = session.exec( + select(Flow).where(Flow.folder == 
STARTER_FOLDER_NAME) + ).all() + + graphs: list[tuple[str, Graph]] = [ + (project.name, Graph.from_payload(project.data, flow_id=project.id)) + for project in projects + ] + assert len(graphs) == len(projects) + for name, graph in graphs: + outputs = await graph.run( + inputs={}, + outputs=[], + session_id="test", + ) + assert all( + isinstance(output, ResultData) for output in chain.from_iterable(outputs) + ), f"Project {name} error: {outputs}"