* Update model kwargs and temperature values
* Update keyboard shortcuts for advanced editing
* make Message field have no handles
* Update OpenAI API Key handling in OpenAIEmbeddingsComponent
* Remove unnecessary field_type key from CustomComponent class
* Update required field behavior in CustomComponent class
* Refactor AzureOpenAIModel.py: Removed unnecessary "required" attribute from input parameters
* Update BaiduQianfanChatModel and OpenAIModel configurations
* Fix range_spec step type validation
* Update RangeSpec step_type default value to "float"
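The RangeSpec change above (restricting `step_type` and defaulting it to `"float"`) can be sketched with a plain dataclass. This is a hypothetical shape based only on the commit wording; the field names and bounds are assumptions, not the actual Langflow schema.

```python
from dataclasses import dataclass


@dataclass
class RangeSpec:
    # Hypothetical sketch: step_type is validated against a closed set
    # and defaults to "float", as the commits above describe.
    min: float = 0.0
    max: float = 1.0
    step: float = 0.1
    step_type: str = "float"

    def __post_init__(self):
        if self.step_type not in ("int", "float"):
            raise ValueError(f"step_type must be 'int' or 'float', got {self.step_type!r}")


assert RangeSpec().step_type == "float"
```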
* Fix Save debounce
* Update parameterUtils to use debounce instead of throttle
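Debounce and throttle differ in a way that matters for saving: a throttled save fires periodically while edits keep arriving, whereas a debounced save waits until edits stop for the full interval, so only the final state is persisted. The real change is in TypeScript `parameterUtils`; this is just a minimal Python sketch of the debounce semantics, with an injectable clock so the behavior is observable without real timers.

```python
class Debouncer:
    """Run `fn` only after `wait` seconds have passed with no further calls."""

    def __init__(self, fn, wait, clock):
        self.fn = fn
        self.wait = wait
        self.clock = clock  # callable returning the current time
        self.last_call = None
        self.pending_args = None

    def __call__(self, *args):
        # Each call resets the quiet-period timer and replaces the pending args.
        self.last_call = self.clock()
        self.pending_args = args

    def flush_if_due(self):
        # Fire the saved call once the quiet period has elapsed.
        if self.pending_args is not None and self.clock() - self.last_call >= self.wait:
            args, self.pending_args = self.pending_args, None
            self.fn(*args)


# Simulated clock: a mutable cell we advance manually.
now = [0.0]
saved = []
debounced_save = Debouncer(saved.append, wait=1.0, clock=lambda: now[0])

debounced_save("v1")
now[0] = 0.5
debounced_save("v2")  # resets the timer; "v1" is never saved
now[0] = 1.6
debounced_save.flush_if_due()
assert saved == ["v2"]  # only the final value is persisted
```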
* Update input type options in schemas and graph base classes
* Refactor run_flow_with_caching endpoint to include simplified and experimental versions
* Add PythonFunctionComponent and test case for it
* Add nest_asyncio to fix event loop issue
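The event-loop issue that `nest_asyncio` addresses is that `asyncio.run()` refuses to start a loop while one is already running, which bites when a synchronous code path is reached from inside an async handler. The stdlib failure mode can be demonstrated directly; `nest_asyncio.apply()` patches the running loop to permit this kind of re-entry.

```python
import asyncio


def sync_helper():
    # A synchronous helper that naively spins up its own event loop.
    return asyncio.run(asyncio.sleep(0, result="done"))


async def handler():
    # Calling the helper from inside a running loop raises RuntimeError;
    # this is the failure nest_asyncio.apply() works around.
    try:
        sync_helper()
        return "no error"
    except RuntimeError as exc:
        return f"RuntimeError: {exc}"


outcome = asyncio.run(handler())
assert outcome.startswith("RuntimeError")
```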
* Refactor test_initial_setup.py to use RunOutputs instead of ResultData
* Remove unused code in test_endpoints.py
* Add asyncio loop to uvicorn command
* Refactor load_session method to handle coroutine result
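Handling a result that may be either a plain value or a coroutine (as the `load_session` refactor above describes) typically means probing with `asyncio.iscoroutine` and awaiting only when necessary. A generic sketch of the pattern, not the actual Langflow code:

```python
import asyncio


async def resolve(result):
    # Await the result only if the backend handed us a coroutine.
    if asyncio.iscoroutine(result):
        return await result
    return result


async def main():
    async def async_load():
        return {"session": "restored"}

    # Works for both a coroutine and an already-materialized value.
    a = await resolve(async_load())
    b = await resolve({"session": "restored"})
    return a, b


a, b = asyncio.run(main())
assert a == b == {"session": "restored"}
```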
* Fixed saving
* Fixed debouncing
* Add InputType and OutputType literals to schema.py
* Update input type in Graph class
* Add new schema for simplified API request
* Add delete_messages function and update test_successful_run assertions
* Add STREAM_INFO_TEXT constant to model components
* Add session_id to simplified_run_flow_with_caching endpoint
* Add field_typing import to OpenAIModel.py
* update starter projects
* Add constants for Langflow base module
* Update setup.py to include latest component versions
* Update Starter Examples
* sets starter_project fixture to Basic Prompting
* Refactor test_endpoints.py: Update test names and add new tests for different output types
* Update HuggingFace Spaces link and add image for dark mode
* Remove filepath reference
* Update Vertex params in base.py
* Add tests for different input types
* Add type annotations and improve test coverage
* Add duplicate space link to README
* Update HuggingFace Spaces badge in README
* Add Python 3.10 installation requirement to README
* Refactor flow running endpoints
* Refactor SimplifiedAPIRequest and add documentation for Tweaks
* Refactor input_request parameter in simplified_run_flow function
* Add support for retrieving specific component output
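Retrieving a specific component's output from a run result amounts to filtering the outputs by component id. The response shape below is hypothetical (the actual schema is not shown in this changelog); it only illustrates the lookup.

```python
def get_component_output(run_outputs, component_id):
    """Return the first output produced by `component_id`, or None."""
    for output in run_outputs:
        if output.get("component_id") == component_id:
            return output.get("results")
    return None


# Hypothetical run result containing outputs from two components.
run_outputs = [
    {"component_id": "ChatOutput-abc12", "results": {"message": "Hi!"}},
    {"component_id": "TextOutput-def34", "results": {"text": "raw"}},
]
assert get_component_output(run_outputs, "TextOutput-def34") == {"text": "raw"}
assert get_component_output(run_outputs, "Missing-xyz") is None
```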
* Add custom Uvicorn worker for Langflow application
* Add asyncio loop to LangflowApplication initialization
* Update Makefile with new variables and start command
* Fix indentation in Makefile
* Refactor run_graph function to add support for running a JSON flow
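Running a flow from JSON boils down to parsing the `{nodes, edges}` payload and visiting vertices in dependency order, the same ordering the `sort_vertices` tests later in this PR exercise. A self-contained sketch using Kahn's algorithm; the helper name and payload shape are illustrative, not Langflow's API.

```python
import json
from collections import deque


def topological_order(flow_json):
    """Order node ids so every edge source precedes its target (Kahn's algorithm)."""
    data = json.loads(flow_json)["data"]
    nodes = [n["id"] for n in data["nodes"]]
    indegree = {n: 0 for n in nodes}
    children = {n: [] for n in nodes}
    for edge in data["edges"]:
        children[edge["source"]].append(edge["target"])
        indegree[edge["target"]] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return order


flow = json.dumps({"data": {
    "nodes": [{"id": "Prompt-1"}, {"id": "OpenAI-1"}, {"id": "LLMChain-1"}],
    "edges": [{"source": "Prompt-1", "target": "LLMChain-1"},
              {"source": "OpenAI-1", "target": "LLMChain-1"}],
}})
order = topological_order(flow)
assert order.index("Prompt-1") < order.index("LLMChain-1")
assert order.index("OpenAI-1") < order.index("LLMChain-1")
```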
* Refactor getChatInputField function and update API code
* Update HuggingFace Spaces documentation with duplication process
* Add asyncio event loop to uvicorn command
* Add installation of backend in start target
* update some starter projects
* Fix formatting in hugging-face-spaces.mdx
* Update installation instructions for Langflow
* set examples order
* Update start command in Makefile
* Add installation and usage instructions for Langflow
* Update Langflow installation and usage instructions
* Fix langflow command in README.md
* Fix broken link to HuggingFace Spaces guide
* Add new SVG assets for blog post, chat bot, and cloud docs
* Refactor example rendering in NewFlowModal
* Add new SVG file for short bio section
* Remove unused import and add new component
* Update title in usage.mdx
* Update HuggingFace Spaces heading in usage.mdx
* Update usage instructions in getting-started/usage.mdx
* Update cache option in usage documentation
* Remove 'advanced' flag from 'n_messages' parameter in MemoryComponent.py
* Refactor code to improve performance and readability
* Update project names and flow examples
* fix document qa example
* Remove commented out code in sidebars.js
* Delete unused documentation files
* Fix bug in login functionality
* Remove global variables from components
* Fix bug in login functionality
* fix modal returning to input
* Update max-width of chat message sender name
* Update styling for chat message component
* Refactor OpenAIEmbeddingsComponent signature
* Update usage.mdx file
* Update path in Makefile
* Add new migration and what's new documentation files
* Add new chapters and migration guides
* Update version to 0.0.13 in pyproject.toml
* new locks
* Update dependencies in pyproject.toml
* general fixes
* Update dependencies in pyproject.toml and poetry.lock files
* add padding to modal
* ✨ (undrawCards/index.tsx): update the SVG used for BasicPrompt component to undraw_short_bio_re_fmx0.svg to match the desired design
♻️ (undrawCards/index.tsx): adjust the width and height of the BasicPrompt SVG to 65% to improve the visual appearance
* Commented out components/data in sidebars.js
* Refactor component names in outputs.mdx
* Update embedded chat script URL
* Add data component and fix formatting in outputs component
* Update dependencies in poetry.lock and pyproject.toml
* Update dependencies in poetry.lock and pyproject.toml
* Refactor code to improve performance and readability
* Update dependencies in poetry.lock and pyproject.toml
* Fixed IO Modal updates
* Remove dead code at API Modal
* Fixed overflow at CodeTabsComponent tweaks page
* ✨ (NewFlowModal/index.tsx): update the name of the example from "Blog Writter" to "Blog Writer" for better consistency and clarity
* Update dependencies versions
* Update langflow-base to version 0.0.15 and fix setup_env script
* Update dependencies in pyproject.toml
* Lock dependencies in parallel
* Add logging statement to setup_app function
* Fix Ace not having type="module" and breaking build
* Update authentication settings for access token cookie
* Update package versions in package-lock.json
* Add scripts directory to Dockerfile
* Add setup_env command to build_and_run target
* Remove unnecessary make command in setup_env
* Remove unnecessary installation step in build_and_run
* Add debug configuration for CLI
* 🔧 chore(Makefile): refactor build_langflow target to use a separate script for updating dependencies and building
✨ feat(update_dependencies.py): add script to update pyproject.toml dependency version based on langflow-base version in src/backend/base/pyproject.toml
* Add number_of_results parameter to AstraDBSearchComponent
* Update HuggingFace Spaces links
* Remove duplicate imports in hugging-face-spaces.mdx
* Add number_of_results parameter to vector search components
* Fixed supabase not committed
* Revert "Fixed supabase not committed"
This reverts commit afb10a6262.
* Update duplicate-space.png image
* Delete unused files and components
* Add/update script to update dependencies
* Add .bak files to .gitignore
* Update version numbers and remove unnecessary dependencies
* Update langflow-base dependency path
* Add Text import to VertexAiModel.py
* Update langflow-base version to 0.0.16 and update dependencies
* Delete start projects and commit session in delete_start_projects function
* Refactor backend startup script to handle autologin option
* Update poetry installation script to include pipx update check
* Update pipx installation script for different operating systems
* Update Makefile to improve setup process
* Add error handling on streaming and fix streaming bug on error
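Error handling on a streaming response usually means catching exceptions inside the generator and emitting a final error event, so the client receives a well-formed terminator instead of a truncated body. A minimal sketch of the pattern, not the actual endpoint code:

```python
def stream_with_error_handling(chunks):
    """Yield chunks; on failure, yield one error event and stop cleanly."""
    try:
        for chunk in chunks:
            yield {"event": "token", "data": chunk}
    except Exception as exc:
        # Surface the failure to the client instead of truncating the stream.
        yield {"event": "error", "data": str(exc)}


def flaky_source():
    yield "Hello"
    yield " world"
    raise RuntimeError("model backend disconnected")


events = list(stream_with_error_handling(flaky_source()))
assert events[-1] == {"event": "error", "data": "model backend disconnected"}
assert [e["data"] for e in events[:-1]] == ["Hello", " world"]
```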
* Added description to Blog Writer
* Sort base classes alphabetically
* Update duplicate-space.png image
* update position on langflow prompt chaining
* Add Langflow CLI and first steps documentation
* Add exception handling for missing 'content' field in search_with_vector_store method
* Remove unused import and update type hinting
* fix bug on edges after creating group component
* Refactor APIRequest class and update model imports
* Remove unused imports and fix formatting issues
* Refactor reactflowUtils and styleUtils
* Add CLI documentation to getting-started/cli.mdx
* Add CLI usage instructions
* Add ZoomableImage component to first-steps.mdx
* Update CLI and first steps documentation
* Remove duplicate import and add new imports for ThemedImage and useBaseUrl
* Update Langflow CLI documentation link
* Remove first-steps.mdx and update index.mdx and sidebars.js
* Update Docusaurus dependencies
* Add AstraDB RAG Flow guide
* Remove unused imports
* Remove unnecessary import statement
* Refactor guide for better readability
* Add data component documentation
* Update component headings and add prompt template
* Fix logging level and version display
* Add datetime import and buffer for alembic log
* Update flow names in NewFlowModal and documentation
* Add starter projects to sidebars.js
* Fix error handling in DirectoryReader class
* Handle exception when loading components in setup.py
* Update version numbers in pyproject.toml files
* Update build_langflow_base and build_langflow_backup in Makefile
* Added docs
* Update dependencies and build process
* Add Admonition component for API Key documentation
* Update API endpoint in async-api.mdx
* Remove async-api guidelines
* Fix UnicodeDecodeError in DirectoryReader
* Update dependency version and fix encoding issues
* Add conditional build and publish for base and main projects
* Update version to 1.0.0a2 in pyproject.toml
* Remove duplicate imports and unnecessary code in custom-component.mdx
* Fix poetry lock command in Makefile
* Update package versions in pyproject.toml
* Remove unused components and update imports
* 📦 chore(pre-release-base.yml): add pre-release workflow for base project
📦 chore(pre-release-langflow.yml): add pre-release workflow for langflow project
* Add ChatLiteLLMModelComponent to models package
* Add frontend installation and build steps
* Add Dockerfile for building and pushing base image
* Add emoji package and nest-asyncio dependency
* 📝 (components.mdx): update margin style of ZoomableImage to improve spacing
📝 (features.mdx): update margin style of ZoomableImage to improve spacing
📝 (login.mdx): update margin style of ZoomableImage to improve spacing
* Fix module import error in validate.py
* Fix error message in directory_reader.py
* Update version import and handle ImportError
* Add cryptography and langchain-openai dependencies
* Update poetry installation and remove poetry-monorepo-dependency-plugin
* Update workflow and Dockerfile for Langflow base pre-release
* Update display names and descriptions for AstraDB components
* Update installation instructions for Langflow
* Update Astra DB links and remove unnecessary imports
* Rename AstraDB
* Add new components and images
* Update HuggingFace Spaces URLs
* Update Langflow documentation and add new starter projects
* Update flow name to "Basic Prompting (Hello, world!)" in relevant files
* Update Basic Prompting flow name to "Ahoy World!"
* Remove HuggingFace Spaces documentation
* Add new files and update sidebars.js
* Remove async-tasks.mdx and update sidebars.js
* Update starter project URLs
* Enable migration of global variables
* Update OpenAIEmbeddings deployment and model
* 📝 (inputs.mdx): add margin to image style to improve spacing and center alignment
📝 (rag-with-astradb.mdx): add margin to image styles to improve spacing and readability
* Update welcome message in index.mdx
* Add global variable feature to Langflow documentation
* Reorganized sidebar categories
* Update migration documentation
* Refactor SplitTextComponent class to accept inputs of type Record and Text
* Adjust embeddings docs
* ✨ (cardComponent/index.tsx): add a minimum height to the card component to ensure consistent layout and prevent content from overlapping when the card is empty or has minimal content
* Update flow name from "Ahoy World!" to "Hello, world!"
* Update documentation for embeddings, models, and vector stores
* Update CreateRecordComponent and parameterUtils.ts
* Add documentation for Text and Record types
* Remove commented lines in sidebars.js
* Add run_flow_from_json function to load.py
* Update Langflow package to run flow from JSON file
* Fix type annotations and import errors
* Refactor tests and fix test data
---------
Co-authored-by: Rodrigo Nader <rodrigosilvanader@gmail.com>
Co-authored-by: anovazzi1 <otavio2204@gmail.com>
Co-authored-by: Lucas Oliveira <lucas.edu.oli@hotmail.com>
Co-authored-by: carlosrcoelho <carlosrodrigo.coelho@gmail.com>
Co-authored-by: cristhianzl <cristhian.lousa@gmail.com>
Co-authored-by: Matheus <jacquesmats@gmail.com>
444 lines · 16 KiB · Python
import copy
import json
import pickle
from typing import Type, Union

import pytest

from langflow.graph import Graph
from langflow.graph.edge.base import Edge
from langflow.graph.graph.utils import (
    find_last_node,
    process_flow,
    set_new_target_handle,
    ungroup_node,
    update_source_handle,
    update_target_handle,
    update_template,
)
from langflow.graph.vertex.base import Vertex
from langflow.initial_setup.setup import load_starter_projects
from langflow.utils.payload import get_root_vertex

# Test cases for the graph module

# There are three types of graph fixtures:
# BASIC_EXAMPLE_PATH, COMPLEX_EXAMPLE_PATH, OPENAPI_EXAMPLE_PATH


@pytest.fixture
def sample_template():
    return {
        "field1": {"proxy": {"field": "some_field", "id": "node1"}},
        "field2": {"proxy": {"field": "other_field", "id": "node2"}},
    }


@pytest.fixture
def sample_nodes():
    return [
        {
            "id": "node1",
            "data": {"node": {"template": {"some_field": {"show": True, "advanced": False, "name": "Name1"}}}},
        },
        {
            "id": "node2",
            "data": {
                "node": {
                    "template": {
                        "other_field": {
                            "show": False,
                            "advanced": True,
                            "display_name": "DisplayName2",
                        }
                    }
                }
            },
        },
        {
            "id": "node3",
            "data": {"node": {"template": {"unrelated_field": {"show": True, "advanced": True}}}},
        },
    ]


def get_node_by_type(graph, node_type: Type[Vertex]) -> Union[Vertex, None]:
    """Get a node by type"""
    return next((node for node in graph.vertices if isinstance(node, node_type)), None)


def test_graph_structure(basic_graph):
    assert isinstance(basic_graph, Graph)
    assert len(basic_graph.vertices) > 0
    assert len(basic_graph.edges) > 0
    for node in basic_graph.vertices:
        assert isinstance(node, Vertex)
    for edge in basic_graph.edges:
        assert isinstance(edge, Edge)
        source_vertex = basic_graph.get_vertex(edge.source_id)
        target_vertex = basic_graph.get_vertex(edge.target_id)
        assert source_vertex in basic_graph.vertices
        assert target_vertex in basic_graph.vertices


def test_circular_dependencies(basic_graph):
    assert isinstance(basic_graph, Graph)

    def check_circular(node, visited):
        visited.add(node)
        neighbors = basic_graph.get_vertices_with_target(node)
        for neighbor in neighbors:
            if neighbor in visited:
                return True
            if check_circular(neighbor, visited.copy()):
                return True
        return False

    for node in basic_graph.vertices:
        assert not check_circular(node, set())


def test_invalid_node_types():
    graph_data = {
        "nodes": [
            {
                "id": "1",
                "data": {
                    "node": {
                        "base_classes": ["BaseClass"],
                        "template": {
                            "_type": "InvalidNodeType",
                        },
                    },
                },
            },
        ],
        "edges": [],
    }
    with pytest.raises(Exception):
        Graph(graph_data["nodes"], graph_data["edges"])


def test_get_vertices_with_target(basic_graph):
    """Test getting connected nodes"""
    assert isinstance(basic_graph, Graph)
    # Get root node
    root = get_root_vertex(basic_graph)
    assert root is not None
    connected_nodes = basic_graph.get_vertices_with_target(root.id)
    assert connected_nodes is not None


def test_get_node_neighbors_basic(basic_graph):
    """Test getting node neighbors"""
    assert isinstance(basic_graph, Graph)
    # Get root node
    root = get_root_vertex(basic_graph)
    assert root is not None
    neighbors = basic_graph.get_vertex_neighbors(root)
    assert neighbors is not None
    assert isinstance(neighbors, dict)
    # Root Node is an Agent, it requires an LLMChain and tools
    # We need to check if there is a Chain in one of the neighbors'
    # data attribute in the type key
    assert any("ConversationBufferMemory" in neighbor.data["type"] for neighbor, val in neighbors.items() if val)
    assert any("OpenAI" in neighbor.data["type"] for neighbor, val in neighbors.items() if val)


def test_get_node(basic_graph):
    """Test getting a single node"""
    node_id = basic_graph.vertices[0].id
    node = basic_graph.get_vertex(node_id)
    assert isinstance(node, Vertex)
    assert node.id == node_id


def test_build_nodes(basic_graph):
    """Test building nodes"""
    assert len(basic_graph.vertices) == len(basic_graph._vertices)
    for node in basic_graph.vertices:
        assert isinstance(node, Vertex)


def test_build_edges(basic_graph):
    """Test building edges"""
    assert len(basic_graph.edges) == len(basic_graph._edges)
    for edge in basic_graph.edges:
        assert isinstance(edge, Edge)
        assert isinstance(edge.source_id, str)
        assert isinstance(edge.target_id, str)


def test_get_root_vertex(client, basic_graph, complex_graph):
    """Test getting root node"""
    assert isinstance(basic_graph, Graph)
    root = get_root_vertex(basic_graph)
    assert root is not None
    assert isinstance(root, Vertex)
    assert root.data["type"] == "TimeTravelGuideChain"
    # For the complex example, the root node is a ZeroShotAgent
    assert isinstance(complex_graph, Graph)
    root = get_root_vertex(complex_graph)
    assert root is not None
    assert isinstance(root, Vertex)
    assert root.data["type"] == "ZeroShotAgent"


def test_validate_edges(basic_graph):
    """Test validating edges"""
    assert isinstance(basic_graph, Graph)
    # all edges should be valid
    assert all(edge.valid for edge in basic_graph.edges)


def test_matched_type(basic_graph):
    """Test matched type attribute in Edge"""
    assert isinstance(basic_graph, Graph)
    # all edges should be valid
    assert all(edge.valid for edge in basic_graph.edges)
    # all edges should have a matched_type attribute
    assert all(hasattr(edge, "matched_type") for edge in basic_graph.edges)
    # The matched_type attribute should be in the source_types attr
    assert all(edge.matched_type in edge.source_types for edge in basic_graph.edges)


def test_build_params(basic_graph):
    """Test building params"""
    assert isinstance(basic_graph, Graph)
    # all edges should be valid
    assert all(edge.valid for edge in basic_graph.edges)
    # all edges should have a matched_type attribute
    assert all(hasattr(edge, "matched_type") for edge in basic_graph.edges)
    # The matched_type attribute should be in the source_types attr
    assert all(edge.matched_type in edge.source_types for edge in basic_graph.edges)
    # Get the root node
    root = get_root_vertex(basic_graph)
    # Root node is a TimeTravelGuideChain,
    # which requires an llm and memory
    assert root is not None
    assert isinstance(root.params, dict)
    assert "llm" in root.params
    assert "memory" in root.params


# def test_wrapper_node_build(openapi_graph):
#     wrapper_node = get_node_by_type(openapi_graph, WrapperVertex)
#     assert wrapper_node is not None
#     built_object = wrapper_node.build()
#     assert built_object is not None


def test_find_last_node(grouped_chat_json_flow):
    grouped_chat_data = json.loads(grouped_chat_json_flow).get("data")
    nodes, edges = grouped_chat_data["nodes"], grouped_chat_data["edges"]
    last_node = find_last_node(nodes, edges)
    assert last_node is not None
    assert last_node["id"] == "LLMChain-pimAb"


def test_ungroup_node(grouped_chat_json_flow):
    grouped_chat_data = json.loads(grouped_chat_json_flow).get("data")
    group_node = grouped_chat_data["nodes"][2]  # The third node is a group node
    base_flow = copy.deepcopy(grouped_chat_data)
    ungroup_node(group_node["data"], base_flow)
    # after ungroup_node is called, base_flow and grouped_chat_data should differ
    assert base_flow != grouped_chat_data
    # assert node 2 is not a group node anymore
    assert base_flow["nodes"][2]["data"]["node"].get("flow") is None
    # assert the edges are updated
    assert len(base_flow["edges"]) > len(grouped_chat_data["edges"])
    assert base_flow["edges"][0]["source"] == "ConversationBufferMemory-kUMif"
    assert base_flow["edges"][0]["target"] == "LLMChain-2P369"
    assert base_flow["edges"][1]["source"] == "PromptTemplate-Wjk4g"
    assert base_flow["edges"][1]["target"] == "LLMChain-2P369"
    assert base_flow["edges"][2]["source"] == "ChatOpenAI-rUJ1b"
    assert base_flow["edges"][2]["target"] == "LLMChain-2P369"


def test_process_flow(grouped_chat_json_flow):
    grouped_chat_data = json.loads(grouped_chat_json_flow).get("data")

    processed_flow = process_flow(grouped_chat_data)
    assert processed_flow is not None
    assert isinstance(processed_flow, dict)
    assert "nodes" in processed_flow
    assert "edges" in processed_flow


def test_process_flow_one_group(one_grouped_chat_json_flow):
    grouped_chat_data = json.loads(one_grouped_chat_json_flow).get("data")
    # There should be only one node
    assert len(grouped_chat_data["nodes"]) == 1
    # Get the node, it should be a group node
    group_node = grouped_chat_data["nodes"][0]
    node_data = group_node["data"]["node"]
    assert node_data.get("flow") is not None
    template_data = node_data["template"]
    assert any("openai_api_key" in key for key in template_data.keys())
    # Get the openai_api_key dict
    openai_api_key = next(
        (template_data[key] for key in template_data.keys() if "openai_api_key" in key),
        None,
    )
    assert openai_api_key is not None
    assert openai_api_key["value"] == "test"

    processed_flow = process_flow(grouped_chat_data)
    assert processed_flow is not None
    assert isinstance(processed_flow, dict)
    assert "nodes" in processed_flow
    assert "edges" in processed_flow

    # Now get the node that has ChatOpenAI in its id
    chat_openai_node = next((node for node in processed_flow["nodes"] if "ChatOpenAI" in node["id"]), None)
    assert chat_openai_node is not None
    assert chat_openai_node["data"]["node"]["template"]["openai_api_key"]["value"] == "test"


def test_process_flow_vector_store_grouped(vector_store_grouped_json_flow):
    grouped_chat_data = json.loads(vector_store_grouped_json_flow).get("data")
    nodes = grouped_chat_data["nodes"]
    assert len(nodes) == 4
    # There are two group nodes in this flow.
    # One of them is inside the other, totalling 7 nodes:
    # 4 nodes grouped, one of these turns into 1 normal node and 1 group node.
    # This group node has 2 nodes inside it.

    processed_flow = process_flow(grouped_chat_data)
    assert processed_flow is not None
    processed_nodes = processed_flow["nodes"]
    assert len(processed_nodes) == 7
    assert isinstance(processed_flow, dict)
    assert "nodes" in processed_flow
    assert "edges" in processed_flow
    edges = processed_flow["edges"]
    # Expected keywords in source and target fields
    expected_keywords = [
        {"source": "VectorStoreInfo", "target": "VectorStoreAgent"},
        {"source": "ChatOpenAI", "target": "VectorStoreAgent"},
        {"source": "OpenAIEmbeddings", "target": "Chroma"},
        {"source": "Chroma", "target": "VectorStoreInfo"},
        {"source": "WebBaseLoader", "target": "RecursiveCharacterTextSplitter"},
        {"source": "RecursiveCharacterTextSplitter", "target": "Chroma"},
    ]

    for idx, expected_keyword in enumerate(expected_keywords):
        for key, value in expected_keyword.items():
            assert (
                value in edges[idx][key].split("-")[0]
            ), f"Edge {idx}, key {key} expected to contain {value} but got {edges[idx][key]}"


def test_update_template(sample_template, sample_nodes):
    # Making a deep copy to keep the original sample_nodes unchanged
    nodes_copy = copy.deepcopy(sample_nodes)
    update_template(sample_template, nodes_copy)

    # Now, validate the updates.
    node1_updated = next((n for n in nodes_copy if n["id"] == "node1"), None)
    node2_updated = next((n for n in nodes_copy if n["id"] == "node2"), None)
    node3_updated = next((n for n in nodes_copy if n["id"] == "node3"), None)

    assert node1_updated["data"]["node"]["template"]["some_field"]["show"] is True
    assert node1_updated["data"]["node"]["template"]["some_field"]["advanced"] is False
    assert node1_updated["data"]["node"]["template"]["some_field"]["display_name"] == "Name1"

    assert node2_updated["data"]["node"]["template"]["other_field"]["show"] is False
    assert node2_updated["data"]["node"]["template"]["other_field"]["advanced"] is True
    assert node2_updated["data"]["node"]["template"]["other_field"]["display_name"] == "DisplayName2"

    # Ensure node3 remains unchanged
    assert node3_updated == sample_nodes[2]


# Test `update_target_handle`
def test_update_target_handle_proxy():
    new_edge = {
        "data": {
            "targetHandle": {
                "type": "some_type",
                "proxy": {"id": "some_id", "field": ""},
            }
        }
    }
    g_nodes = [{"id": "some_id", "data": {"node": {"flow": None}}}]
    group_node_id = "group_id"
    updated_edge = update_target_handle(new_edge, g_nodes, group_node_id)
    assert updated_edge["data"]["targetHandle"] == new_edge["data"]["targetHandle"]


# Test `set_new_target_handle`
def test_set_new_target_handle():
    proxy_id = "proxy_id"
    new_edge = {"target": None, "data": {"targetHandle": {}}}
    target_handle = {"type": "type_1", "proxy": {"field": "field_1"}}
    node = {
        "data": {
            "node": {
                "flow": True,
                "template": {"field_1": {"proxy": {"field": "new_field", "id": "new_id"}}},
            }
        }
    }
    set_new_target_handle(proxy_id, new_edge, target_handle, node)
    assert new_edge["target"] == "proxy_id"
    assert new_edge["data"]["targetHandle"]["fieldName"] == "field_1"
    assert new_edge["data"]["targetHandle"]["proxy"] == {
        "field": "new_field",
        "id": "new_id",
    }


# Test `update_source_handle`
def test_update_source_handle():
    new_edge = {"source": None, "data": {"sourceHandle": {"id": None}}}
    flow_data = {
        "nodes": [{"id": "some_node"}, {"id": "last_node"}],
        "edges": [{"source": "some_node"}],
    }
    updated_edge = update_source_handle(new_edge, flow_data["nodes"], flow_data["edges"])
    assert updated_edge["source"] == "last_node"
    assert updated_edge["data"]["sourceHandle"]["id"] == "last_node"


@pytest.mark.asyncio
async def test_pickle_graph(json_vector_store):
    starter_projects = load_starter_projects()
    data = starter_projects[0]["data"]
    graph = Graph.from_payload(data)
    assert isinstance(graph, Graph)
    pickled = pickle.dumps(graph)
    assert pickled is not None
    unpickled = pickle.loads(pickled)
    assert unpickled is not None


@pytest.mark.asyncio
async def test_pickle_each_vertex(json_vector_store):
    starter_projects = load_starter_projects()
    data = starter_projects[0]["data"]
    graph = Graph.from_payload(data)
    assert isinstance(graph, Graph)
    for vertex in graph.vertices:
        await vertex.build()
        pickled = pickle.dumps(vertex)
        assert pickled is not None
        unpickled = pickle.loads(pickled)
        assert unpickled is not None


@pytest.mark.asyncio
async def test_build_ordering(complex_graph_with_groups):
    sorted_vertices = complex_graph_with_groups.sort_vertices(stop_component_id="ChatInput-Ay8QQ")
    assert sorted_vertices == [
        "ChatInput-Ay8QQ",
        "RecordsAsText-vkx2A",
        "FileLoader-Vo1Cq",
    ]

    sorted_vertices = complex_graph_with_groups.sort_vertices()