refactor: reorganize components and update PromptComponent with priority attribute (#8667)

* Update styleUtils.ts

* update to prompt component

* update to template

* update to mcp component

* update to smart function

* [autofix.ci] apply automated fixes

* update to templates

* fix sidebar

* change name

* update import

* update import

* update import

* [autofix.ci] apply automated fixes

* fix import

* fix ollama

* fix ruff

* refactor(agent): standardize memory handling and update chat history logic (#8715)

* update chat history

* update to agents

* Update Simple Agent.json

* update to templates

* ruff errors

* Update agent.py

* Update test_agent_component.py

* [autofix.ci] apply automated fixes

* update templates

* test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Mike Fortman <michael.fortman@datastax.com>

* fix prompt change

* feat(message): support sequencing of multiple streamable models (#8434)

* feat: update OpenAI model parameters handling for reasoning models

* feat: extend input_value type in LCModelComponent to support AsyncIterator and Iterator

* refactor: remove assert_streaming_sequence method and related checks from Graph class

* feat: add consume_iterator method to Message class for handling iterators

* test: add unit tests for OpenAIModelComponent functionality and integration

* feat: update OpenAIModelComponent to include temperature and seed parameters in build_model method
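
  A minimal, dependency-free sketch of the parameter handling this commit describes (the helper name and the reasoning-model prefix list are assumptions, not the actual OpenAIModelComponent code); the tests added later in this PR assert that temperature and seed are omitted for reasoning models:

```python
# Hypothetical sketch: reasoning models reject sampling parameters, so the
# build step filters temperature/seed out before constructing the client.
REASONING_MODEL_PREFIXES = ("o1", "o3")  # assumed list, not the PR's actual one


def build_model_kwargs(model_name: str, temperature=None, seed=None) -> dict:
    """Return constructor kwargs, omitting temperature/seed for reasoning models."""
    kwargs = {"model": model_name}
    if not model_name.startswith(REASONING_MODEL_PREFIXES):
        if temperature is not None:
            kwargs["temperature"] = temperature
        if seed is not None:
            kwargs["seed"] = seed
    return kwargs
```

  For example, `build_model_kwargs("gpt-4o", temperature=0.2, seed=1)` keeps both parameters, while `build_model_kwargs("o1-mini", temperature=0.2, seed=1)` drops them.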

* feat: rename consume_iterator method to consume_iterator_in_text and update its implementation for handling text

* feat: add is_connected_to_chat_output method to Component class for improved message handling
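
  A toy sketch of what such a check can look like — walk the component's downstream edges and stop at a chat-output node. Class and attribute names here are illustrative, not Langflow's actual Component API:

```python
# Hypothetical sketch of an "is connected to chat output" check: traverse
# outgoing edges depth-first and look for a chat-output component.
class Component:
    def __init__(self, name: str):
        self.name = name
        self.outgoing: list["Component"] = []

    def connect(self, other: "Component") -> None:
        self.outgoing.append(other)

    def is_connected_to_chat_output(self) -> bool:
        seen: set[int] = set()
        stack = list(self.outgoing)
        while stack:
            node = stack.pop()
            if id(node) in seen:  # guard against cycles in the flow graph
                continue
            seen.add(id(node))
            if node.name == "ChatOutput":
                return True
            stack.extend(node.outgoing)
        return False
```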

* feat: refactor LCModelComponent methods to support asynchronous message handling and improve chat output integration

* refactor: remove consume_iterator_in_text method from Message class and clean up LCModelComponent input handling

* fix: update import paths for input components in multiple starter project JSON files

* fix: enhance error message formatting in ErrorMessage class to handle additional exception attributes

* refactor: remove validate_stream calls from generate_flow_events and Graph class to streamline flow processing

* fix: handle asyncio.CancelledError in aadd_messagetables to ensure proper session rollback and retry logic
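
  The rollback-and-retry pattern described here can be pictured with a small sketch; the session object is a stand-in and `aadd_messagetables` itself is not reproduced:

```python
import asyncio


# Hypothetical sketch of the retry logic: if the write is cancelled mid-flight,
# roll the session back so it stays usable, then retry a bounded number of times.
async def add_message_with_retry(session, message, retries: int = 3):
    for attempt in range(retries):
        try:
            await session.add(message)
        except asyncio.CancelledError:
            await session.rollback()  # leave the session in a clean state
            if attempt == retries - 1:
                raise  # out of retries: propagate the cancellation
        else:
            return
```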

* refactor: streamline message handling in LCModelComponent by replacing async invocation with synchronous calls and updating message text handling

* refactor: enhance message handling in LCModelComponent by introducing lf_message for improved return value management and updating properties for consistency

* feat: add _build_source method to Component class for enhanced source handling and flexibility in source object management

* feat: enhance LCModelComponent by adding _handle_stream method for improved streaming response handling and refactoring chat output integration
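
  A sketch of the streaming idea — drain the model's async stream while keeping the pieces, so the complete message text exists once streaming ends (hypothetical names, not the actual `_handle_stream` signature):

```python
import asyncio
from collections.abc import AsyncIterator


# Hypothetical sketch: accumulate chunks from an async stream of text.
async def handle_stream(chunks: AsyncIterator[str]) -> str:
    parts: list[str] = []
    async for chunk in chunks:
        parts.append(chunk)  # a real handler would also forward each chunk
    return "".join(parts)    # final message text
```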

* feat: update MemoryComponent to enhance message retrieval and storage functionality, including new sender type handling and output options for text and dataframe formats

* test: refactor LanguageModelComponent tests to use ComponentTestBaseWithoutClient and add tests for Google model creation and error handling

* test: add fixtures for API keys and implement live API tests for OpenAI, Anthropic, and Google models

* fix: reorder JSON properties for consistency in starter projects

* Updated JSON files for various starter projects to ensure consistent ordering of properties, specifically moving "type" to follow "selected_output" for better readability and maintainability.
* Affected files: Basic Prompt Chaining.json, Blog Writer.json, Financial Report Parser.json, Hybrid Search RAG.json, SEO Keyword Generator.json.

* refactor: simplify input_value type in LCModelComponent

* Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability.
* This change enhances the documentation and understanding of the expected input types for the component.

* fix: clarify comment for handling source in Component class

* refactor: remove unnecessary mocking in OpenAI model integration tests

* auto update

* update

* [autofix.ci] apply automated fixes

* fix openai import

* revert template changes

* test fixes

* update templates

* [autofix.ci] apply automated fixes

* fix tests

* fix order

* fix prompts import

* fix frontend tests

* fix frontend

* [autofix.ci] apply automated fixes

* add charmander

* [autofix.ci] apply automated fixes

* fix prompt frontend

* fix frontend

* test fix

* [autofix.ci] apply automated fixes

* change pokedex

* remove pokedex extra

* update template

* name fix

* update template

* mcp test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: cristhianzl <cristhian.lousa@gmail.com>
Co-authored-by: Yuqi Tang <yuqi.tang@datastax.com>
Co-authored-by: Mike Fortman <michael.fortman@datastax.com>
Co-authored-by: Gabriel Luiz Freitas Almeida <gabriel@langflow.org>
commit 5ba8f91c9a
Author: Edwin Jose, 2025-06-27 12:02:06 -05:00 (committed by GitHub)
GPG key ID: B5690EEEBB952194 (no known key found for this signature in database)
96 changed files with 234 additions and 218 deletions


@@ -2,13 +2,13 @@ from typing_extensions import TypedDict
 from langflow.base.models.model import LCModelComponent
 from langflow.components.amazon.amazon_bedrock_model import AmazonBedrockComponent
-from langflow.components.languagemodels.anthropic import AnthropicModelComponent
-from langflow.components.languagemodels.azure_openai import AzureChatOpenAIComponent
-from langflow.components.languagemodels.google_generative_ai import GoogleGenerativeAIComponent
+from langflow.components.anthropic.anthropic import AnthropicModelComponent
+from langflow.components.azure.azure_openai import AzureChatOpenAIComponent
+from langflow.components.google.google_generative_ai import GoogleGenerativeAIComponent
 from langflow.components.languagemodels.groq import GroqModel
-from langflow.components.languagemodels.nvidia import NVIDIAModelComponent
-from langflow.components.languagemodels.openai_chat_model import OpenAIModelComponent
 from langflow.components.languagemodels.sambanova import SambaNovaComponent
+from langflow.components.nvidia.nvidia import NVIDIAModelComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
 from langflow.inputs.inputs import InputTypes, SecretStrInput
 from langflow.template.field.base import Input
@@ -89,7 +89,7 @@ def create_input_fields_dict(inputs: list[Input], prefix: str) -> dict[str, Inpu
 def _get_google_generative_ai_inputs_and_fields():
     try:
-        from langflow.components.languagemodels.google_generative_ai import GoogleGenerativeAIComponent
+        from langflow.components.google.google_generative_ai import GoogleGenerativeAIComponent
 
         google_generative_ai_inputs = get_filtered_inputs(GoogleGenerativeAIComponent)
     except ImportError as e:
@@ -103,7 +103,7 @@ def _get_google_generative_ai_inputs_and_fields():
 def _get_openai_inputs_and_fields():
     try:
-        from langflow.components.languagemodels.openai_chat_model import OpenAIModelComponent
+        from langflow.components.openai.openai_chat_model import OpenAIModelComponent
 
         openai_inputs = get_filtered_inputs(OpenAIModelComponent)
     except ImportError as e:
@@ -114,7 +114,7 @@ def _get_openai_inputs_and_fields():
 def _get_azure_inputs_and_fields():
     try:
-        from langflow.components.languagemodels.azure_openai import AzureChatOpenAIComponent
+        from langflow.components.azure.azure_openai import AzureChatOpenAIComponent
 
         azure_inputs = get_filtered_inputs(AzureChatOpenAIComponent)
     except ImportError as e:
@@ -136,7 +136,7 @@ def _get_groq_inputs_and_fields():
 def _get_anthropic_inputs_and_fields():
     try:
-        from langflow.components.languagemodels.anthropic import AnthropicModelComponent
+        from langflow.components.anthropic.anthropic import AnthropicModelComponent
 
         anthropic_inputs = get_filtered_inputs(AnthropicModelComponent)
     except ImportError as e:
@@ -147,7 +147,7 @@ def _get_anthropic_inputs_and_fields():
 def _get_nvidia_inputs_and_fields():
     try:
-        from langflow.components.languagemodels.nvidia import NVIDIAModelComponent
+        from langflow.components.nvidia.nvidia import NVIDIAModelComponent
 
         nvidia_inputs = get_filtered_inputs(NVIDIAModelComponent)
     except ImportError as e:
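
Each `_get_*_inputs_and_fields` helper above guards its provider import the same way, so a missing provider package degrades gracefully instead of crashing at import time. The pattern generalizes to a small helper like this (a sketch with generic names, not Langflow code):

```python
import importlib


# Sketch of the guarded-import pattern: load a provider component lazily and
# raise a clear error when the optional provider package is absent.
def load_component(module_path: str, attr: str):
    try:
        module = importlib.import_module(module_path)
    except ImportError as e:
        msg = f"Optional provider module {module_path!r} is not installed"
        raise ImportError(msg) from e
    return getattr(module, attr)
```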


@@ -1,3 +1,4 @@
 from .agent import AgentComponent
+from .mcp_component import MCPToolsComponent
-__all__ = ["AgentComponent"]
+__all__ = ["AgentComponent", "MCPToolsComponent"]


@@ -77,7 +77,7 @@ class MCPToolsComponent(ComponentWithCache):
         "tool",
     ]
-    display_name = "MCP Connection"
+    display_name = "MCP Tools"
    description = "Connect to an MCP server to use its tools."
    icon = "Mcp"
    name = "MCPTools"


@@ -0,0 +1,7 @@
+from .aiml import AIMLModelComponent
+from .aiml_embeddings import AIMLEmbeddingsComponent
+
+__all__ = [
+    "AIMLEmbeddingsComponent",
+    "AIMLModelComponent",
+]


@@ -0,0 +1,5 @@
+from .anthropic import AnthropicModelComponent
+
+__all__ = [
+    "AnthropicModelComponent",
+]


@@ -0,0 +1,7 @@
+from .azure_openai import AzureChatOpenAIComponent
+from .azure_openai_embeddings import AzureOpenAIEmbeddingsComponent
+
+__all__ = [
+    "AzureChatOpenAIComponent",
+    "AzureOpenAIEmbeddingsComponent",
+]


@@ -3,7 +3,6 @@ from .csv_to_data import CSVToDataComponent
 from .directory import DirectoryComponent
 from .file import FileComponent
 from .json_to_data import JSONToDataComponent
-from .mcp_component import MCPToolsComponent
 from .news_search import NewsSearchComponent
 from .rss import RSSReaderComponent
 from .sql_executor import SQLComponent
@@ -17,7 +16,6 @@ __all__ = [
     "DirectoryComponent",
     "FileComponent",
     "JSONToDataComponent",
-    "MCPToolsComponent",
     "NewsSearchComponent",
     "RSSReaderComponent",
     "SQLComponent",


@@ -1,6 +1,9 @@
 from .astra_assistant_manager import AstraAssistantManager
+from .astra_db import AstraDBChatMemory
+from .astra_vectorize import AstraVectorizeComponent
 from .astradb_cql import AstraDBCQLToolComponent
 from .astradb_tool import AstraDBToolComponent
+from .cassandra import CassandraChatMemory
 from .create_assistant import AssistantsCreateAssistant
 from .create_thread import AssistantsCreateThread
 from .dotenv import Dotenv
@@ -17,7 +20,10 @@ __all__ = [
     "AssistantsRun",
     "AstraAssistantManager",
     "AstraDBCQLToolComponent",
+    "AstraDBChatMemory",
     "AstraDBToolComponent",
+    "AstraVectorizeComponent",
+    "CassandraChatMemory",
     "Dotenv",
     "GetEnvVar",
 ]


@@ -1,35 +1,15 @@
-from .aiml import AIMLEmbeddingsComponent
-from .astra_vectorize import AstraVectorizeComponent
-from .azure_openai import AzureOpenAIEmbeddingsComponent
 from .cloudflare import CloudflareWorkersAIEmbeddingsComponent
 from .cohere import CohereEmbeddingsComponent
-from .google_generative_ai import GoogleGenerativeAIEmbeddingsComponent
-from .huggingface_inference_api import HuggingFaceInferenceAPIEmbeddingsComponent
 from .lmstudioembeddings import LMStudioEmbeddingsComponent
 from .mistral import MistralAIEmbeddingsComponent
-from .nvidia import NVIDIAEmbeddingsComponent
-from .ollama import OllamaEmbeddingsComponent
-from .openai import OpenAIEmbeddingsComponent
 from .similarity import EmbeddingSimilarityComponent
 from .text_embedder import TextEmbedderComponent
-from .vertexai import VertexAIEmbeddingsComponent
-from .watsonx import WatsonxEmbeddingsComponent
 
 __all__ = [
-    "AIMLEmbeddingsComponent",
-    "AstraVectorizeComponent",
-    "AzureOpenAIEmbeddingsComponent",
     "CloudflareWorkersAIEmbeddingsComponent",
     "CohereEmbeddingsComponent",
     "EmbeddingSimilarityComponent",
-    "GoogleGenerativeAIEmbeddingsComponent",
-    "HuggingFaceInferenceAPIEmbeddingsComponent",
     "LMStudioEmbeddingsComponent",
     "MistralAIEmbeddingsComponent",
-    "NVIDIAEmbeddingsComponent",
-    "OllamaEmbeddingsComponent",
-    "OpenAIEmbeddingsComponent",
     "TextEmbedderComponent",
-    "VertexAIEmbeddingsComponent",
-    "WatsonxEmbeddingsComponent",
 ]


@@ -2,6 +2,8 @@ from .gmail import GmailLoaderComponent
 from .google_bq_sql_executor import BigQueryExecutorComponent
 from .google_drive import GoogleDriveComponent
 from .google_drive_search import GoogleDriveSearchComponent
+from .google_generative_ai import GoogleGenerativeAIComponent
+from .google_generative_ai_embeddings import GoogleGenerativeAIEmbeddingsComponent
 from .google_oauth_token import GoogleOAuthToken
 
 __all__ = [
@@ -9,5 +11,7 @@ __all__ = [
     "GmailLoaderComponent",
     "GoogleDriveComponent",
     "GoogleDriveSearchComponent",
+    "GoogleGenerativeAIComponent",
+    "GoogleGenerativeAIEmbeddingsComponent",
     "GoogleOAuthToken",
 ]


@@ -0,0 +1,7 @@
+from .huggingface import HuggingFaceEndpointsComponent
+from .huggingface_inference_api import HuggingFaceInferenceAPIEmbeddingsComponent
+
+__all__ = [
+    "HuggingFaceEndpointsComponent",
+    "HuggingFaceInferenceAPIEmbeddingsComponent",
+]


@@ -0,0 +1,4 @@
+from .watsonx import WatsonxAIComponent
+from .watsonx_embeddings import WatsonxEmbeddingsComponent
+
+__all__ = ["WatsonxAIComponent", "WatsonxEmbeddingsComponent"]


@@ -1,43 +1,27 @@
-from .aiml import AIMLModelComponent
-from .anthropic import AnthropicModelComponent
-from .azure_openai import AzureChatOpenAIComponent
 from .baidu_qianfan_chat import QianfanChatEndpointComponent
 from .cohere import CohereComponent
 from .deepseek import DeepSeekModelComponent
-from .google_generative_ai import GoogleGenerativeAIComponent
 from .groq import GroqModel
-from .huggingface import HuggingFaceEndpointsComponent
 from .lmstudiomodel import LMStudioModelComponent
 from .maritalk import MaritalkModelComponent
 from .mistral import MistralAIModelComponent
 from .novita import NovitaModelComponent
-from .nvidia import NVIDIAModelComponent
-from .ollama import ChatOllamaComponent
-from .openai_chat_model import OpenAIModelComponent
 from .openrouter import OpenRouterComponent
 from .perplexity import PerplexityComponent
 from .sambanova import SambaNovaComponent
-from .vertexai import ChatVertexAIComponent
-from .watsonx import WatsonxAIComponent
 from .xai import XAIModelComponent
 
 __all__ = [
-    "AIMLModelComponent",
-    "AnthropicModelComponent",
-    "AzureChatOpenAIComponent",
-    "ChatOllamaComponent",
-    "ChatVertexAIComponent",
     "CohereComponent",
     "DeepSeekModelComponent",
-    "GoogleGenerativeAIComponent",
     "GroqModel",
-    "HuggingFaceEndpointsComponent",
     "LMStudioModelComponent",
     "MaritalkModelComponent",
     "MistralAIModelComponent",
-    "NVIDIAModelComponent",
     "NovitaModelComponent",
-    "OpenAIModelComponent",
     "OpenRouterComponent",
     "PerplexityComponent",
     "QianfanChatEndpointComponent",


@@ -1,12 +1,8 @@
-from .astra_db import AstraDBChatMemory
-from .cassandra import CassandraChatMemory
 from .mem0_chat_memory import Mem0MemoryComponent
 from .redis import RedisIndexChatMemory
 from .zep import ZepChatMemory
 
 __all__ = [
-    "AstraDBChatMemory",
-    "CassandraChatMemory",
     "Mem0MemoryComponent",
     "RedisIndexChatMemory",
     "ZepChatMemory",


@@ -1,11 +1,19 @@
 import sys
 
+from .nvidia import NVIDIAModelComponent
+from .nvidia_embedding import NVIDIAEmbeddingsComponent
 from .nvidia_ingest import NvidiaIngestComponent
 from .nvidia_rerank import NvidiaRerankComponent
 
 if sys.platform == "win32":
     from .system_assist import NvidiaSystemAssistComponent
 
-    __all__ = ["NvidiaIngestComponent", "NvidiaRerankComponent", "NvidiaSystemAssistComponent"]
+    __all__ = [
+        "NVIDIAEmbeddingsComponent",
+        "NVIDIAModelComponent",
+        "NvidiaIngestComponent",
+        "NvidiaRerankComponent",
+        "NvidiaSystemAssistComponent",
+    ]
 else:
-    __all__ = ["NvidiaIngestComponent", "NvidiaRerankComponent"]
+    __all__ = ["NVIDIAEmbeddingsComponent", "NVIDIAModelComponent", "NvidiaIngestComponent", "NvidiaRerankComponent"]


@@ -0,0 +1,7 @@
+from .ollama import ChatOllamaComponent
+from .ollama_embeddings import OllamaEmbeddingsComponent
+
+__all__ = [
+    "ChatOllamaComponent",
+    "OllamaEmbeddingsComponent",
+]


@@ -0,0 +1,7 @@
+from .openai import OpenAIEmbeddingsComponent
+from .openai_chat_model import OpenAIModelComponent
+
+__all__ = [
+    "OpenAIEmbeddingsComponent",
+    "OpenAIModelComponent",
+]


@@ -14,6 +14,7 @@ from .message_to_data import MessageToDataComponent
 from .parse_data import ParseDataComponent
 from .parse_json_data import ParseJSONDataComponent
 from .parser import ParserComponent
+from .prompt import PromptComponent
 from .python_repl_core import PythonREPLComponent
 from .regex import RegexExtractorComponent
 from .select_data import SelectDataComponent
@@ -38,6 +39,7 @@ __all__ = [
     "ParseDataFrameComponent",
     "ParseJSONDataComponent",
     "ParserComponent",
+    "PromptComponent",
     "PythonREPLComponent",
     "RegexExtractorComponent",
     "SelectDataComponent",


@@ -16,7 +16,7 @@ if TYPE_CHECKING:
 class LambdaFilterComponent(Component):
     display_name = "Smart Function"
     description = "Uses an LLM to generate a function for filtering or transforming structured data."
-    icon = "test-tube-diagonal"
+    icon = "square-function"
     name = "Smart Function"
 
     inputs = [


@@ -7,11 +7,12 @@ from langflow.template.utils import update_template_values
 
 class PromptComponent(Component):
-    display_name: str = "Prompt"
+    display_name: str = "Prompt Template"
     description: str = "Create a prompt template with dynamic variables."
     icon = "braces"
     trace_type = "prompt"
-    name = "Prompt"
+    name = "Prompt Template"
+    priority = 0  # Set priority to 0 to make it appear first
 
     inputs = [
         PromptInput(name="template", display_name="Template"),
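
The new `priority = 0` attribute is what makes the renamed Prompt Template surface first. A toy sketch of priority-then-name ordering (the sidebar's real sort logic may differ; the default value for components without a priority is an assumption):

```python
# Toy sketch: sort components by (priority, display_name) so priority-0
# entries such as "Prompt Template" appear first. Illustrative only.
def sidebar_order(components: list[dict]) -> list[str]:
    default_priority = 1_000  # assumption: components without a priority sort last
    return [
        c["display_name"]
        for c in sorted(
            components,
            key=lambda c: (c.get("priority", default_priority), c["display_name"]),
        )
    ]
```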


@@ -1,3 +0,0 @@
-from .prompt import PromptComponent
-
-__all__ = ["PromptComponent"]


@@ -0,0 +1,7 @@
+from .vertexai import ChatVertexAIComponent
+from .vertexai_embeddings import VertexAIEmbeddingsComponent
+
+__all__ = [
+    "ChatVertexAIComponent",
+    "VertexAIEmbeddingsComponent",
+]


@@ -1237,8 +1237,6 @@
   "group_outputs": false,
   "method": "load_files_message",
   "name": "message",
-  "options": null,
-  "required_inputs": null,
   "selected": "Message",
   "tool_mode": true,
   "types": [


@@ -1790,4 +1790,4 @@
   "tags": [
     "agents"
   ]
-}
+}


@@ -1,6 +1,6 @@
 from langflow.components.input_output import ChatInput, ChatOutput
-from langflow.components.languagemodels import OpenAIModelComponent
-from langflow.components.prompts import PromptComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
 from langflow.graph import Graph


@@ -2,9 +2,8 @@ from textwrap import dedent
 
 from langflow.components.data import URLComponent
 from langflow.components.input_output import ChatOutput, TextInputComponent
-from langflow.components.languagemodels import OpenAIModelComponent
-from langflow.components.processing import ParserComponent
-from langflow.components.prompts import PromptComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import ParserComponent, PromptComponent
 from langflow.graph import Graph


@@ -2,8 +2,8 @@ from langflow.components.crewai.crewai import CrewAIAgentComponent
 from langflow.components.crewai.hierarchical_crew import HierarchicalCrewComponent
 from langflow.components.crewai.hierarchical_task import HierarchicalTaskComponent
 from langflow.components.input_output import ChatInput, ChatOutput
-from langflow.components.languagemodels import OpenAIModelComponent
-from langflow.components.prompts import PromptComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
 from langflow.components.tools import SearchAPIComponent, YfinanceToolComponent
 from langflow.graph import Graph


@@ -1,7 +1,7 @@
 from langflow.components.data import FileComponent
 from langflow.components.input_output import ChatInput, ChatOutput
 from langflow.components.models import LanguageModelComponent
-from langflow.components.prompts import PromptComponent
+from langflow.components.processing import PromptComponent
 from langflow.graph import Graph


@@ -2,8 +2,8 @@ from langflow.components.crewai.crewai import CrewAIAgentComponent
 from langflow.components.crewai.hierarchical_crew import HierarchicalCrewComponent
 from langflow.components.crewai.hierarchical_task import HierarchicalTaskComponent
 from langflow.components.input_output import ChatInput, ChatOutput
-from langflow.components.languagemodels import OpenAIModelComponent
-from langflow.components.prompts import PromptComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
 from langflow.components.tools import SearchAPIComponent
 from langflow.graph import Graph


@@ -1,8 +1,8 @@
 from langflow.components.helpers.memory import MemoryComponent
 from langflow.components.input_output import ChatInput, ChatOutput
-from langflow.components.languagemodels import OpenAIModelComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
 from langflow.components.processing.converter import TypeConverterComponent
-from langflow.components.prompts import PromptComponent
 from langflow.graph import Graph


@@ -1,8 +1,8 @@
 from langflow.components.crewai.sequential_crew import SequentialCrewComponent
 from langflow.components.crewai.sequential_task_agent import SequentialTaskAgentComponent
 from langflow.components.input_output import ChatOutput, TextInputComponent
-from langflow.components.languagemodels import OpenAIModelComponent
-from langflow.components.prompts import PromptComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
 from langflow.components.tools import SearchAPIComponent
 from langflow.graph import Graph


@@ -1,12 +1,11 @@
 from textwrap import dedent
 
 from langflow.components.data import FileComponent
-from langflow.components.embeddings import OpenAIEmbeddingsComponent
 from langflow.components.input_output import ChatInput, ChatOutput
 from langflow.components.models import LanguageModelComponent
-from langflow.components.processing import ParserComponent
+from langflow.components.openai.openai import OpenAIEmbeddingsComponent
+from langflow.components.processing import ParserComponent, PromptComponent
 from langflow.components.processing.split_text import SplitTextComponent
-from langflow.components.prompts import PromptComponent
 from langflow.components.vectorstores import AstraDBVectorStoreComponent
 from langflow.graph import Graph


@@ -4,7 +4,7 @@ import pytest
 from astrapy import DataAPIClient
 from langchain_astradb import AstraDBVectorStore, VectorServiceOptions
 from langchain_core.documents import Document
-from langflow.components.embeddings import OpenAIEmbeddingsComponent
+from langflow.components.openai.openai import OpenAIEmbeddingsComponent
 from langflow.components.vectorstores import AstraDBVectorStoreComponent
 from langflow.schema.data import Data


@@ -6,7 +6,7 @@ from tests.integration.utils import run_single_component
 
 # TODO: Add more tests for MCPToolsComponent
 @pytest.mark.asyncio
 async def test_mcp_component():
-    from langflow.components.data.mcp_component import MCPToolsComponent
+    from langflow.components.agents.mcp_component import MCPToolsComponent
 
     inputs = {}


@@ -2,8 +2,8 @@ import os
 
 import pytest
 from langflow.components.helpers import OutputParserComponent
-from langflow.components.languagemodels import OpenAIModelComponent
-from langflow.components.prompts import PromptComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
 from tests.integration.utils import ComponentInputHandle, run_single_component


@@ -1,4 +1,4 @@
-from langflow.components.prompts import PromptComponent
+from langflow.components.processing import PromptComponent
 from langflow.schema.message import Message
 from tests.integration.utils import run_single_component


@@ -1,5 +1,5 @@
 from langflow.components.input_output import ChatInput, ChatOutput
-from langflow.components.prompts import PromptComponent
+from langflow.components.processing import PromptComponent
 from langflow.graph import Graph
 from langflow.schema.message import Message


@@ -7,7 +7,7 @@ from langflow.base.tools.component_tool import ComponentToolkit
 from langflow.components.data.sql_executor import SQLComponent
 from langflow.components.input_output.chat_output import ChatOutput
 from langflow.components.langchain_utilities import ToolCallingAgentComponent
-from langflow.components.languagemodels import OpenAIModelComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
 from langflow.components.tools.calculator import CalculatorToolComponent
 from langflow.graph.graph.base import Graph
 from pydantic import BaseModel


@@ -2,7 +2,7 @@ import os
 
 import pytest
 from langflow.components.langchain_utilities import ToolCallingAgentComponent
-from langflow.components.languagemodels.openai_chat_model import OpenAIModelComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
 from langflow.components.tools.calculator import CalculatorToolComponent


@@ -2,7 +2,7 @@ import asyncio
 from unittest.mock import AsyncMock, MagicMock, patch
 
 import pytest
-from langflow.components.data.mcp_component import MCPSseClient, MCPStdioClient, MCPToolsComponent
+from langflow.components.agents.mcp_component import MCPSseClient, MCPStdioClient, MCPToolsComponent
 from tests.base import ComponentTestBaseWithoutClient, VersionComponentMapping


@@ -2,7 +2,7 @@ from unittest.mock import AsyncMock, MagicMock, patch
 
 import pytest
 from langchain_ollama import ChatOllama
-from langflow.components.languagemodels.ollama import ChatOllamaComponent
+from langflow.components.ollama.ollama import ChatOllamaComponent
 from tests.base import ComponentTestBaseWithoutClient
@@ -40,7 +40,7 @@ class TestChatOllamaComponent(ComponentTestBaseWithoutClient):
         # Provide an empty list or the actual mapping if versioned files exist
         return []
 
-    @patch("langflow.components.languagemodels.ollama.ChatOllama")
+    @patch("langflow.components.ollama.ollama.ChatOllama")
     async def test_build_model(self, mock_chat_ollama, component_class, default_kwargs):
         mock_instance = MagicMock()
         mock_chat_ollama.return_value = mock_instance
@@ -68,7 +68,7 @@ class TestChatOllamaComponent(ComponentTestBaseWithoutClient):
         )
         assert model == mock_instance
 
-    @patch("langflow.components.languagemodels.ollama.ChatOllama")
+    @patch("langflow.components.ollama.ollama.ChatOllama")
     async def test_build_model_missing_base_url(self, mock_chat_ollama, component_class, default_kwargs):
         # Make the mock raise an exception to simulate connection failure
         mock_chat_ollama.side_effect = Exception("connection error")
@@ -78,8 +78,8 @@ class TestChatOllamaComponent(ComponentTestBaseWithoutClient):
             component.build_model()
 
     @pytest.mark.asyncio
-    @patch("langflow.components.languagemodels.ollama.httpx.AsyncClient.post")
-    @patch("langflow.components.languagemodels.ollama.httpx.AsyncClient.get")
+    @patch("langflow.components.ollama.ollama.httpx.AsyncClient.post")
+    @patch("langflow.components.ollama.ollama.httpx.AsyncClient.get")
     async def test_get_models_success(self, mock_get, mock_post):
         component = ChatOllamaComponent()
         mock_get_response = AsyncMock()
@@ -107,7 +107,7 @@ class TestChatOllamaComponent(ComponentTestBaseWithoutClient):
         assert mock_post.call_count == 2
 
     @pytest.mark.asyncio
-    @patch("langflow.components.languagemodels.ollama.httpx.AsyncClient.get")
+    @patch("langflow.components.ollama.ollama.httpx.AsyncClient.get")
     async def test_get_models_failure(self, mock_get):
         import httpx
@@ -147,7 +147,7 @@ class TestChatOllamaComponent(ComponentTestBaseWithoutClient):
         assert updated_config["mirostat_eta"]["value"] == 0.2
         assert updated_config["mirostat_tau"]["value"] == 10
 
-    @patch("langflow.components.languagemodels.ollama.httpx.AsyncClient.get")
+    @patch("langflow.components.ollama.ollama.httpx.AsyncClient.get")
     @pytest.mark.asyncio
     async def test_update_build_config_model_name(self, mock_get):
         component = ChatOllamaComponent()


@@ -1,4 +1,4 @@
-from langflow.components.languagemodels.huggingface import DEFAULT_MODEL, HuggingFaceEndpointsComponent
+from langflow.components.huggingface.huggingface import DEFAULT_MODEL, HuggingFaceEndpointsComponent
 from langflow.inputs.inputs import DictInput, DropdownInput, FloatInput, IntInput, SecretStrInput, SliderInput, StrInput


@@ -3,7 +3,7 @@ from unittest.mock import MagicMock, patch
 
 import pytest
 from langchain_openai import ChatOpenAI
-from langflow.components.languagemodels.openai_chat_model import OpenAIModelComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
 from tests.base import ComponentTestBaseWithoutClient
@@ -33,7 +33,7 @@ class TestOpenAIModelComponent(ComponentTestBaseWithoutClient):
         # Provide an empty list or the actual mapping if versioned files exist
         return []
 
-    @patch("langflow.components.languagemodels.openai_chat_model.ChatOpenAI")
+    @patch("langflow.components.openai.openai_chat_model.ChatOpenAI")
     async def test_build_model(self, mock_chat_openai, component_class, default_kwargs):
         mock_instance = MagicMock()
         mock_chat_openai.return_value = mock_instance
@@ -53,7 +53,7 @@ class TestOpenAIModelComponent(ComponentTestBaseWithoutClient):
         )
         assert model == mock_instance
 
-    @patch("langflow.components.languagemodels.openai_chat_model.ChatOpenAI")
+    @patch("langflow.components.openai.openai_chat_model.ChatOpenAI")
     async def test_build_model_reasoning_model(self, mock_chat_openai, component_class, default_kwargs):
         mock_instance = MagicMock()
         mock_chat_openai.return_value = mock_instance
@@ -78,7 +78,7 @@ class TestOpenAIModelComponent(ComponentTestBaseWithoutClient):
         assert "temperature" not in kwargs
         assert "seed" not in kwargs
 
-    @patch("langflow.components.languagemodels.openai_chat_model.ChatOpenAI")
+    @patch("langflow.components.openai.openai_chat_model.ChatOpenAI")
     async def test_build_model_with_json_mode(self, mock_chat_openai, component_class, default_kwargs):
         mock_instance = MagicMock()
         mock_bound_instance = MagicMock()
@@ -93,7 +93,7 @@ class TestOpenAIModelComponent(ComponentTestBaseWithoutClient):
         mock_instance.bind.assert_called_once_with(response_format={"type": "json_object"})
         assert model == mock_bound_instance
 
-    @patch("langflow.components.languagemodels.openai_chat_model.ChatOpenAI")
+    @patch("langflow.components.openai.openai_chat_model.ChatOpenAI")
     async def test_build_model_no_api_key(self, mock_chat_openai, component_class, default_kwargs):
         mock_instance = MagicMock()
         mock_chat_openai.return_value = mock_instance
@@ -105,7 +105,7 @@ class TestOpenAIModelComponent(ComponentTestBaseWithoutClient):
         args, kwargs = mock_chat_openai.call_args
         assert kwargs["api_key"] is None
 
-    @patch("langflow.components.languagemodels.openai_chat_model.ChatOpenAI")
+    @patch("langflow.components.openai.openai_chat_model.ChatOpenAI")
     async def test_build_model_max_tokens_zero(self, mock_chat_openai, component_class, default_kwargs):
         mock_instance = MagicMock()
         mock_chat_openai.return_value = mock_instance


@@ -1,5 +1,5 @@
 import pytest
-from langflow.components.prompts import PromptComponent
+from langflow.components.processing import PromptComponent
 from tests.base import ComponentTestBaseWithClient


@@ -19,7 +19,7 @@ class TestChromaVectorStoreComponent(ComponentTestBaseWithoutClient):
     @pytest.fixture
     def default_kwargs(self, tmp_path: Path) -> dict[str, Any]:
         """Return the default kwargs for the component."""
-        from langflow.components.embeddings.openai import OpenAIEmbeddingsComponent
+        from langflow.components.openai.openai import OpenAIEmbeddingsComponent
 
         if os.getenv("OPENAI_API_KEY") is None:
             pytest.skip("OPENAI_API_KEY is not set")


@ -21,7 +21,7 @@ class TestLocalDBComponent(ComponentTestBaseWithoutClient):
@pytest.fixture
def default_kwargs(self, tmp_path: Path) -> dict[str, Any]:
"""Return the default kwargs for the component."""
-from langflow.components.embeddings.openai import OpenAIEmbeddingsComponent
+from langflow.components.openai.openai import OpenAIEmbeddingsComponent
if os.getenv("OPENAI_API_KEY") is None:
pytest.skip("OPENAI_API_KEY is not set")


@@ -2,8 +2,8 @@ import re
import pytest
from langflow.components.input_output import ChatInput, ChatOutput
-from langflow.components.languagemodels import OpenAIModelComponent
-from langflow.components.prompts import PromptComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
from langflow.graph.graph.base import Graph


@@ -3,9 +3,9 @@ import os
import pytest
from langflow.components.input_output import ChatInput, ChatOutput, TextOutputComponent
from langflow.components.input_output.text import TextInputComponent
-from langflow.components.languagemodels import OpenAIModelComponent
from langflow.components.logic.conditional_router import ConditionalRouterComponent
-from langflow.components.prompts import PromptComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
from langflow.custom.custom_component.component import Component
from langflow.graph.graph.base import Graph
from langflow.graph.graph.utils import find_cycle_vertices


@@ -3,9 +3,9 @@ from typing import TYPE_CHECKING
import pytest
from langflow.components.helpers.memory import MemoryComponent
from langflow.components.input_output import ChatInput, ChatOutput
-from langflow.components.languagemodels import OpenAIModelComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
from langflow.components.processing.converter import TypeConverterComponent
-from langflow.components.prompts import PromptComponent
from langflow.graph.graph.base import Graph
from langflow.graph.graph.constants import Finish
from langflow.graph.graph.state_model import create_state_model_from_graph


@@ -5,9 +5,9 @@ from typing import TYPE_CHECKING
import pytest
from langflow.components.helpers.memory import MemoryComponent
from langflow.components.input_output import ChatInput, ChatOutput
-from langflow.components.languagemodels import OpenAIModelComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import PromptComponent
from langflow.components.processing.converter import TypeConverterComponent
-from langflow.components.prompts import PromptComponent
from langflow.graph.graph.base import Graph
from langflow.graph.graph.constants import Finish
@@ -131,7 +131,7 @@ def test_memory_chatbot_dump_components_and_edges(memory_chatbot_graph: Graph):
assert nodes[3]["data"]["type"] == "OpenAIModel"
assert nodes[3]["id"] == "openai"
-assert nodes[4]["data"]["type"] == "Prompt"
+assert nodes[4]["data"]["type"] == "Prompt Template"
assert nodes[4]["id"] == "prompt"
# Check edges


@@ -4,12 +4,11 @@ from textwrap import dedent
import pytest
from langflow.components.data import FileComponent
-from langflow.components.embeddings import OpenAIEmbeddingsComponent
from langflow.components.input_output import ChatInput, ChatOutput
-from langflow.components.languagemodels import OpenAIModelComponent
-from langflow.components.processing import ParseDataComponent
+from langflow.components.openai.openai import OpenAIEmbeddingsComponent
+from langflow.components.openai.openai_chat_model import OpenAIModelComponent
+from langflow.components.processing import ParseDataComponent, PromptComponent
from langflow.components.processing.split_text import SplitTextComponent
-from langflow.components.prompts import PromptComponent
from langflow.components.vectorstores import AstraDBVectorStoreComponent
from langflow.graph.graph.base import Graph
from langflow.graph.graph.constants import Finish
@@ -199,7 +198,7 @@ def test_vector_store_rag_dump_components_and_edges(ingestion_graph, rag_graph):
assert rag_nodes[4]["data"]["type"] == "ParseData"
assert rag_nodes[4]["id"] == "parse-data-123"
-assert rag_nodes[5]["data"]["type"] == "Prompt"
+assert rag_nodes[5]["data"]["type"] == "Prompt Template"
assert rag_nodes[5]["id"] == "prompt-123"
assert rag_nodes[6]["data"]["type"] == "AstraDB"
@@ -259,7 +258,7 @@ def test_vector_store_rag_add(ingestion_graph: Graph, rag_graph: Graph):
{"id": "openai-123", "type": "OpenAIModel"},
{"id": "openai-embeddings-124", "type": "OpenAIEmbeddings"},
{"id": "parse-data-123", "type": "ParseData"},
-{"id": "prompt-123", "type": "Prompt"},
+{"id": "prompt-123", "type": "Prompt Template"},
{"id": "rag-vector-store-123", "type": "AstraDB"},
],
key=operator.itemgetter("id"),


@@ -143,7 +143,7 @@ async def test_get_all(client: AsyncClient, logged_in_headers):
files
) # Less or equal because we might have some files that don't have the dependencies installed
assert "ChatInput" in json_response["input_output"]
-assert "Prompt" in json_response["prompts"]
+assert "Prompt Template" in json_response["processing"]
assert "ChatOutput" in json_response["input_output"]
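The assertion change above captures both renames at once: the component's display name (`Prompt` → `Prompt Template`) and its category key (`prompts` → `processing`). A trimmed, hypothetical sketch of the component-listing response shape these assertions assume (display names grouped under category keys):

```python
# Hypothetical, trimmed shape of the component-listing response implied by
# the surrounding assertions; the real payload carries full component specs.
json_response = {
    "input_output": ["ChatInput", "ChatOutput"],
    "processing": ["Prompt Template"],
}

# The old "prompts" category key is gone; the renamed component now lives
# under "processing".
assert "Prompt Template" in json_response["processing"]
assert "ChatInput" in json_response["input_output"]
```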


@@ -205,16 +205,17 @@ export const FILE_ICONS = {
export const SIDEBAR_CATEGORIES = [
{ display_name: "Saved", name: "saved_components", icon: "GradientSave" },
-{ display_name: "I/O", name: "input_output", icon: "Cable" },
+{ display_name: "Input / Output", name: "input_output", icon: "Cable" },
+{ display_name: "Agents", name: "agents", icon: "Bot" },
+{ display_name: "Models", name: "models", icon: "BrainCog" },
+{ display_name: "Data", name: "data", icon: "Database" },
+{ display_name: "Vector Stores", name: "vectorstores", icon: "Layers" },
+{ display_name: "Processing", name: "processing", icon: "ListFilter" },
+{ display_name: "Logic", name: "logic", icon: "ArrowRightLeft" },
+{ display_name: "Helpers", name: "helpers", icon: "Wand2" },
-{ display_name: "Inputs", name: "inputs", icon: "Download" },
-{ display_name: "Outputs", name: "outputs", icon: "Upload" },
-{ display_name: "Prompts", name: "prompts", icon: "braces" },
-{ display_name: "Models", name: "models", icon: "BrainCog" },
-{ display_name: "Data", name: "data", icon: "Database" },
-{ display_name: "Processing", name: "processing", icon: "ListFilter" },
-{ display_name: "Vector Stores", name: "vectorstores", icon: "Layers" },
-{ display_name: "Agents", name: "agents", icon: "Bot" },
{ display_name: "Chains", name: "chains", icon: "Link" },
{ display_name: "Loaders", name: "documentloaders", icon: "Paperclip" },
{ display_name: "Link Extractors", name: "link_extractors", icon: "Link2" },
@@ -224,8 +225,6 @@ export const SIDEBAR_CATEGORIES = [
{ display_name: "Text Splitters", name: "textsplitters", icon: "Scissors" },
{ display_name: "Toolkits", name: "toolkits", icon: "Package2" },
{ display_name: "Tools", name: "tools", icon: "Hammer" },
-{ display_name: "Logic", name: "logic", icon: "ArrowRightLeft" },
-{ display_name: "Helpers", name: "helpers", icon: "Wand2" },
];
export const SIDEBAR_BUNDLES = [
@@ -236,28 +235,34 @@
},
{ display_name: "Embeddings", name: "embeddings", icon: "Binary" },
{ display_name: "Memories", name: "memories", icon: "Cpu" },
{ display_name: "AI/ML", name: "aiml", icon: "AI/ML" },
{ display_name: "Anthropic", name: "anthropic", icon: "Anthropic" },
{ display_name: "Amazon", name: "amazon", icon: "Amazon" },
-{ display_name: "Gmail", name: "gmail", icon: "Gmail" },
-{ display_name: "Outlook", name: "outlook", icon: "Outlook" },
-{ display_name: "GitHub", name: "github", icon: "Github" },
-{
-display_name: "Googlecalendar",
-name: "googlecalendar",
-icon: "Googlecalendar",
-},
// Add apify
{ display_name: "Apify", name: "apify", icon: "Apify" },
{ display_name: "LangChain", name: "langchain_utilities", icon: "LangChain" },
{ display_name: "AgentQL", name: "agentql", icon: "AgentQL" },
{ display_name: "AssemblyAI", name: "assemblyai", icon: "AssemblyAI" },
{ display_name: "Azure", name: "azure", icon: "Azure" },
{
display_name: "DataStax",
name: "datastax",
icon: "AstraDB",
},
{ display_name: "Docling", name: "docling", icon: "Docling" },
-{ display_name: "Olivya", name: "olivya", icon: "Olivya" },
+{ display_name: "Gmail", name: "gmail", icon: "Gmail" },
+{ display_name: "GitHub", name: "github", icon: "Github" },
+{
+display_name: "Googlecalendar",
+name: "googlecalendar",
+icon: "Googlecalendar",
+},
{ display_name: "HuggingFace", name: "huggingface", icon: "HuggingFace" },
{ display_name: "IBM", name: "ibm", icon: "WatsonxAI" },
{ display_name: "LangWatch", name: "langwatch", icon: "Langwatch" },
+{ display_name: "Olivya", name: "olivya", icon: "Olivya" },
+{ display_name: "Outlook", name: "outlook", icon: "Outlook" },
{ display_name: "OpenAI", name: "openai", icon: "OpenAI" },
{ display_name: "Notion", name: "Notion", icon: "Notion" },
{ display_name: "Needle", name: "needle", icon: "Needle" },
{ display_name: "NVIDIA", name: "nvidia", icon: "NVIDIA" },
@@ -284,6 +289,8 @@
{ display_name: "Cleanlab", name: "cleanlab", icon: "Cleanlab" },
{ display_name: "Search", name: "search", icon: "Search" },
{ display_name: "Tavily", name: "tavily", icon: "TavilyIcon" },
+{ display_name: "Ollama", name: "ollama", icon: "Ollama" },
+{ display_name: "VertexAI", name: "vertexai", icon: "VertexAI" },
];
export const categoryIcons: Record<string, string> = {


@@ -56,8 +56,7 @@ test(
.isVisible();
});
-await expect(page.getByTestId("disclosure-i/o")).toBeVisible();
-await expect(page.getByTestId("disclosure-prompts")).toBeVisible();
+await expect(page.getByTestId("disclosure-input / output")).toBeVisible();
await expect(page.getByTestId("disclosure-models")).toBeVisible();
await expect(page.getByTestId("disclosure-helpers")).toBeVisible();
await expect(page.getByTestId("disclosure-agents")).toBeVisible();
@@ -73,7 +72,7 @@ test(
await expect(page.getByTestId("input_outputChat Input")).toBeVisible();
await expect(page.getByTestId("input_outputChat Output")).toBeVisible();
-await expect(page.getByTestId("promptsPrompt")).toBeVisible();
+await expect(page.getByTestId("processingPrompt Template")).toBeVisible();
await expect(page.getByTestId("langchain_utilitiesCSVAgent")).toBeVisible();
await expect(
page.getByTestId("langchain_utilitiesConversationChain"),
@@ -97,7 +96,7 @@ test(
await expect(page.getByTestId("input_outputChat Input")).not.toBeVisible();
await expect(page.getByTestId("input_outputChat Output")).not.toBeVisible();
-await expect(page.getByTestId("promptsPrompt")).not.toBeVisible();
+await expect(
+page.getByTestId("processingPrompt Template"),
+).not.toBeVisible();
await expect(
page.getByTestId("agentsTool Calling Agent"),
).not.toBeVisible();


@@ -16,12 +16,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("openai");
-await page.waitForSelector('[data-testid="languagemodelsOpenAI"]', {
+await page.waitForSelector('[data-testid="openaiOpenAI"]', {
timeout: 1000,
});
await page
-.getByTestId("languagemodelsOpenAI")
+.getByTestId("openaiOpenAI")
.hover()
.then(async () => {
await page.getByTestId("add-component-button-openai").last().click();
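The selector churn in these e2e hunks follows one pattern: the sidebar `data-testid` appears to be the category name concatenated with the component's display name, so a component that moves to a different category or bundle gets a new selector. A sketch of that assumed convention (the helper function is hypothetical, not part of the codebase):

```python
def sidebar_testid(category: str, display_name: str) -> str:
    # Assumed convention inferred from the diff: category key immediately
    # followed by the display name, with no separator.
    return f"{category}{display_name}"


# OpenAI moved from the "languagemodels" category to the "openai" bundle,
# which is exactly the selector rename seen throughout these tests:
old_id = sidebar_testid("languagemodels", "OpenAI")
new_id = sidebar_testid("openai", "OpenAI")
```

The same rule explains `promptsPrompt` → `processingPrompt Template`, where both the category and the display name changed.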


@@ -58,7 +58,7 @@ test(
await page.keyboard.type("prompt");
// Verify disclosures open with new search
-await expect(page.getByTestId("promptsPrompt")).toBeVisible();
+await expect(page.getByTestId("processingPrompt Template")).toBeVisible();
await page.keyboard.press("Tab");
await page.keyboard.press("Tab");


@@ -41,10 +41,7 @@ withEventDeliveryModes(
await expect(stopButton).toBeHidden({ timeout: 120000 });
}
-const output = await page
-.getByTestId("div-chat-message")
-.last()
-.innerText();
+const output = await page.getByTestId("div-chat-message").innerText();
expect(output).toContain("Charmander");
expect(output.length).toBeGreaterThan(100);
},


@@ -152,11 +152,11 @@ test(
//---------------------------------- PROMPT
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("prompt");
-await page.waitForSelector('[data-testid="promptsPrompt"]', {
+await page.waitForSelector('[data-testid="processingPrompt Template"]', {
timeout: 2000,
});
await page
-.getByTestId("promptsPrompt")
+.getByTestId("processingPrompt Template")
.dragTo(page.locator('//*[@id="react-flow-id"]'), {
targetPosition: { x: 350, y: 300 },
});
@@ -164,14 +164,11 @@ test(
//---------------------------------- OPENAI
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("openai");
-await page.waitForSelector(
-'[data-testid="languagemodels_openai_draggable"]',
-{
-timeout: 2000,
-},
-);
+await page.waitForSelector('[data-testid="openai_openai_draggable"]', {
+timeout: 2000,
+});
await page
-.getByTestId("languagemodelsOpenAI")
+.getByTestId("openaiOpenAI")
.dragTo(page.locator('//*[@id="react-flow-id"]'), {
targetPosition: { x: 500, y: 300 },
});
@@ -261,7 +258,7 @@ test(
.click();
// breaking here
await page
-.getByTestId("handle-prompt-shownode-true_examples-left")
+.getByTestId("handle-prompt template-shownode-true_examples-left")
.nth(0)
.click();
await page
@@ -269,7 +266,7 @@ test(
.nth(1)
.click();
await page
-.getByTestId("handle-prompt-shownode-false_examples-left")
+.getByTestId("handle-prompt template-shownode-false_examples-left")
.nth(0)
.click();
await page
@@ -277,11 +274,11 @@ test(
.nth(2)
.click();
await page
-.getByTestId("handle-prompt-shownode-user_message-left")
+.getByTestId("handle-prompt template-shownode-user_message-left")
.nth(0)
.click();
await page
-.getByTestId("handle-prompt-shownode-prompt-right")
+.getByTestId("handle-prompt template-shownode-prompt-right")
.first()
.click();
await page


@@ -34,7 +34,7 @@ test.skip(
await page.getByTestId("sidebar-search-input").fill("openai");
await page.waitForTimeout(1000);
await page
-.getByTestId("languagemodelsOpenAI")
+.getByTestId("openaiOpenAI")
.dragTo(page.locator('//*[@id="react-flow-id"]'));
await page.mouse.up();
await page.mouse.down();


@@ -14,12 +14,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("nvidia");
-await page.waitForSelector('[data-testid="languagemodelsNVIDIA"]', {
+await page.waitForSelector('[data-testid="nvidiaNVIDIA"]', {
timeout: 30000,
});
await page
-.getByTestId("languagemodelsNVIDIA")
+.getByTestId("nvidiaNVIDIA")
.hover()
.then(async () => {
// Wait for the API request to complete after clicking the add button


@@ -12,12 +12,12 @@ test("IntComponent", { tag: ["@release", "@workspace"] }, async ({ page }) => {
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("openai");
-await page.waitForSelector('[data-testid="languagemodelsOpenAI"]', {
+await page.waitForSelector('[data-testid="openaiOpenAI"]', {
timeout: 3000,
});
await page
-.getByTestId("languagemodelsOpenAI")
+.getByTestId("openaiOpenAI")
.first()
.dragTo(page.locator('//*[@id="react-flow-id"]'));


@@ -60,12 +60,12 @@ test(
await page.getByTestId("blank-flow").click();
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("prompt");
-await page.waitForSelector('[data-testid="promptsPrompt"]', {
+await page.waitForSelector('[data-testid="processingPrompt Template"]', {
timeout: 3000,
});
await page
-.locator('//*[@id="promptsPrompt"]')
+.locator('//*[@id="processingPrompt Template"]')
.dragTo(page.locator('//*[@id="react-flow-id"]'));
await page.mouse.up();
await page.mouse.down();
@@ -145,12 +145,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("prompt");
-await page.waitForSelector('[data-testid="promptsPrompt"]', {
+await page.waitForSelector('[data-testid="processingPrompt Template"]', {
timeout: 3000,
});
await page
-.locator('//*[@id="promptsPrompt"]')
+.locator('//*[@id="processingPrompt Template"]')
.dragTo(page.locator('//*[@id="react-flow-id"]'));
await page.mouse.up();
await page.mouse.down();


@@ -17,12 +17,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("openai");
-await page.waitForSelector('[data-testid="languagemodelsOpenAI"]', {
+await page.waitForSelector('[data-testid="openaiOpenAI"]', {
timeout: 3000,
});
await page
-.getByTestId("languagemodelsOpenAI")
+.getByTestId("openaiOpenAI")
.dragTo(page.locator('//*[@id="react-flow-id"]'));
await page.mouse.up();
await page.mouse.down();


@@ -17,12 +17,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("ollama");
-await page.waitForSelector('[data-testid="languagemodelsOllama"]', {
+await page.waitForSelector('[data-testid="ollamaOllama"]', {
timeout: 3000,
});
await page
-.getByTestId("languagemodelsOllama")
+.getByTestId("ollamaOllama")
.dragTo(page.locator('//*[@id="react-flow-id"]'));
await page.mouse.up();
await page.mouse.down();


@@ -16,12 +16,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("prompt");
-await page.waitForSelector('[data-testid="promptsPrompt"]', {
+await page.waitForSelector('[data-testid="processingPrompt Template"]', {
timeout: 30000,
});
await page
-.locator('//*[@id="promptsPrompt"]')
+.locator('//*[@id="processingPrompt Template"]')
.dragTo(page.locator('//*[@id="react-flow-id"]'));
await page.mouse.up();
await page.mouse.down();


@@ -31,12 +31,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("NVIDIA");
-await page.waitForSelector('[data-testid="languagemodelsNVIDIA"]', {
+await page.waitForSelector('[data-testid="nvidiaNVIDIA"]', {
timeout: 3000,
});
await page
-.getByTestId("languagemodelsNVIDIA")
+.getByTestId("nvidiaNVIDIA")
.dragTo(page.locator('//*[@id="react-flow-id"]'));
await page.mouse.up();
await page.mouse.down();
@@ -85,7 +85,7 @@ test(
await page.keyboard.press("Escape");
await page.locator('//*[@id="react-flow-id"]').click();
-const lastNvidiaModel = page.getByTestId("languagemodelsNVIDIA").last();
+const lastNvidiaModel = page.getByTestId("nvidiaNVIDIA").last();
await lastNvidiaModel.scrollIntoViewIfNeeded();
try {
@@ -128,12 +128,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("NVIDIA");
-await page.waitForSelector('[data-testid="languagemodelsNVIDIA"]', {
+await page.waitForSelector('[data-testid="nvidiaNVIDIA"]', {
timeout: 3000,
});
await page
-.getByTestId("languagemodelsNVIDIA")
+.getByTestId("nvidiaNVIDIA")
.dragTo(page.locator('//*[@id="react-flow-id"]'));
await page.mouse.up();
await page.mouse.down();


@@ -56,7 +56,7 @@ test(
});
const disclosureTestIds = [
-"disclosure-i/o",
+"disclosure-input / output",
"disclosure-data",
"disclosure-models",
"disclosure-helpers",
@@ -101,23 +101,15 @@ test(
await page.getByTestId("sidebar-search-input").click();
const visibleModelSpecsTestIds = [
-"languagemodelsAIML",
-"languagemodelsAnthropic",
-"languagemodelsAzure OpenAI",
"languagemodelsCohere",
"languagemodelsGoogle Generative AI",
"languagemodelsGroq",
-"languagemodelsHuggingFace",
"languagemodelsLM Studio",
"languagemodelsMaritalk",
"languagemodelsMistralAI",
-"languagemodelsNVIDIA",
-"languagemodelsOllama",
-"languagemodelsOpenAI",
"languagemodelsPerplexity",
"languagemodelsQianfan",
"languagemodelsSambaNova",
-"languagemodelsVertex AI",
-"languagemodelsxAI",
];
await Promise.all(


@@ -16,12 +16,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("ollama");
-await page.waitForSelector('[data-testid="languagemodelsOllama"]', {
+await page.waitForSelector('[data-testid="ollamaOllama"]', {
timeout: 3000,
});
await page
-.getByTestId("languagemodelsOllama")
+.getByTestId("ollamaOllama")
.dragTo(page.locator('//*[@id="react-flow-id"]'));
await page.mouse.up();
await page.mouse.down();


@@ -200,14 +200,14 @@ test(
// Create a new flow with MCP component
await page.getByTestId("blank-flow").click();
await page.getByTestId("sidebar-search-input").click();
-await page.getByTestId("sidebar-search-input").fill("mcp connection");
+await page.getByTestId("sidebar-search-input").fill("mcp");
-await page.waitForSelector('[data-testid="dataMCP Connection"]', {
+await page.waitForSelector('[data-testid="agentsMCP Tools"]', {
timeout: 30000,
});
await page
-.getByTestId("dataMCP Connection")
+.getByTestId("agentsMCP Tools")
.dragTo(page.locator('//*[@id="react-flow-id"]'), {
targetPosition: { x: 0, y: 0 },
});


@@ -3,7 +3,7 @@ import { awaitBootstrapTest } from "../../utils/await-bootstrap-test";
import { zoomOut } from "../../utils/zoom-out";
test(
-"user must be able to change mode of MCP connection without any issues",
+"user must be able to change mode of MCP tools without any issues",
{ tag: ["@release", "@workspace", "@components"] },
async ({ page }) => {
await awaitBootstrapTest(page);
@@ -13,14 +13,14 @@ test(
});
await page.getByTestId("blank-flow").click();
await page.getByTestId("sidebar-search-input").click();
-await page.getByTestId("sidebar-search-input").fill("mcp connection");
+await page.getByTestId("sidebar-search-input").fill("mcp tools");
-await page.waitForSelector('[data-testid="dataMCP Connection"]', {
+await page.waitForSelector('[data-testid="agentsMCP Tools"]', {
timeout: 30000,
});
await page
-.getByTestId("dataMCP Connection")
+.getByTestId("agentsMCP Tools")
.dragTo(page.locator('//*[@id="react-flow-id"]'), {
targetPosition: { x: 0, y: 0 },
});


@@ -17,19 +17,19 @@ test(
await page.getByTestId("sidebar-search-input").fill("prompt");
await page
-.getByTestId("promptsPrompt")
+.getByTestId("processingPrompt Template")
.hover()
.then(async () => {
-await page.getByTestId("add-component-button-prompt").click();
+await page.getByTestId("add-component-button-prompt-template").click();
});
-await page.waitForSelector('[data-testid="title-Prompt"]', {
+await page.waitForSelector('[data-testid="title-Prompt Template"]', {
timeout: 3000,
});
expect(await page.getByText("Toolset", { exact: true }).count()).toBe(0);
-await page.getByTestId("title-Prompt").click();
+await page.getByTestId("title-Prompt Template").click();
await page.keyboard.press("ControlOrMeta+Shift+m");
await page.waitForSelector('text="Toolset"', {
@@ -39,7 +39,7 @@ test(
await page.getByText("Toolset", { exact: true }).count(),
).toBeGreaterThan(0);
-await page.getByTestId("title-Prompt").click();
+await page.getByTestId("title-Prompt Template").click();
await page.waitForSelector('[data-testid="code-button-modal"]', {
timeout: 3000,
@@ -61,11 +61,11 @@ test(
// check if the response is 200
expect(customComponentResponse?.status()).toBe(200);
-await page.waitForSelector('[data-testid="title-Prompt"]', {
+await page.waitForSelector('[data-testid="title-Prompt Template"]', {
timeout: 3000,
});
-await page.getByTestId("title-Prompt").click();
+await page.getByTestId("title-Prompt Template").click();
await page.keyboard.press("ControlOrMeta+Shift+m");
expect(await page.getByText("Toolset", { exact: true }).count()).toBe(0);


@@ -14,12 +14,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("openai");
-await page.waitForSelector('[data-testid="languagemodelsOpenAI"]', {
+await page.waitForSelector('[data-testid="openaiOpenAI"]', {
timeout: 1000,
});
await page
-.getByTestId("languagemodelsOpenAI")
+.getByTestId("openaiOpenAI")
.hover()
.then(async () => {
await page.getByTestId("add-component-button-openai").last().click();


@@ -1,7 +1,6 @@
import { expect, test } from "@playwright/test";
import * as dotenv from "dotenv";
import path from "path";
-import { adjustScreenView } from "../../utils/adjust-screen-view";
import { awaitBootstrapTest } from "../../utils/await-bootstrap-test";
import { initialGPTsetup } from "../../utils/initialGPTsetup";
@@ -55,7 +54,7 @@ test(
await page.getByTestId("sidebar-search-input").fill("openai");
await page
-.getByTestId("languagemodelsOpenAI")
+.getByTestId("openaiOpenAI")
.dragTo(page.locator('//*[@id="react-flow-id"]'), {
targetPosition: { x: 100, y: 200 },
});


@@ -26,12 +26,12 @@ test(
await page.getByTestId("sidebar-search-input").click();
await page.getByTestId("sidebar-search-input").fill("ollama");
-await page.waitForSelector('[data-testid="embeddingsOllama Embeddings"]', {
+await page.waitForSelector('[data-testid="ollamaOllama Embeddings"]', {
timeout: 3000,
});
await page
-.getByTestId("embeddingsOllama Embeddings")
+.getByTestId("ollamaOllama Embeddings")
.hover()
.then(async () => {
await page