🐛 fix(flows.py): remove unused import statement to improve code cleanliness and maintainability

🐛 fix(flows.py): change Flow.from_orm() to Flow.model_validate() to ensure data integrity and validation
🐛 fix(users.py): remove unused import statements to improve code cleanliness and maintainability
🐛 fix(users.py): change User.from_orm() to User.model_validate() to ensure data integrity and validation
🐛 fix(LLMChain.py): remove unused import statements to improve code cleanliness and maintainability
🐛 fix(LLMChain.py): remove unnecessary line breaks to improve code readability
🐛 fix(base.py): remove unused import statements to improve code cleanliness and maintainability
🐛 fix(base.py): remove unnecessary line breaks to improve code readability
🐛 fix(base.py): fix condition to append vertex_id to top_level_vertices to avoid appending non-string values
🐛 fix(vertex/base.py): add parent_node_id attribute to Vertex class to support hierarchical graph structures
🐛 fix(base.py): remove unused import statements to improve code cleanliness and maintainability
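The `from_orm()` → `model_validate()` commits above follow the Pydantic v1 → v2 migration, in which `from_orm` was deprecated. A minimal sketch of the pattern — the `ORMFlow`/`FlowRead` names are illustrative stand-ins, not Langflow's actual models:

```python
from pydantic import BaseModel, ConfigDict

class ORMFlow:
    """Stands in for a SQLModel/ORM row object."""
    def __init__(self, name: str, user_id: int):
        self.name = name
        self.user_id = user_id

class FlowRead(BaseModel):
    # from_attributes=True replaces Pydantic v1's orm_mode
    model_config = ConfigDict(from_attributes=True)
    name: str
    user_id: int

# Pydantic v1 spelling was FlowRead.from_orm(row); deprecated in v2
row = ORMFlow(name="demo", user_id=1)
flow = FlowRead.model_validate(row)
print(flow.name)
```

Besides the rename, `model_validate` also runs full validation, which is what the commit messages mean by "ensure data integrity and validation".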

🚀 feat(GroupTest): add a new node for a simple chat with a custom prompt template and conversational memory buffer

ℹ️ This commit adds a new node to the GroupTest project. The node is a genericNode with the following properties:
- Width: 384
- Height: 621
- ID: ChatOpenAI-rUJ1b
- Type: genericNode
- Position: x: 170.87326389541306, y: 465.8628482073749
- Data:
  - Type: ChatOpenAI
  - Node:
    - Template:
      - Callbacks:
        - Required: false
        - Placeholder: ""
        - Show: false
        - Multiline: false
        - Password: false
        - Name: callbacks
        - Advanced: false
        - Dynamic: false
        - Info: ""
        - Type: langchain.callbacks.base.BaseCallbackHandler
        - List: true
      - Cache:
        - Required: false
        - Placeholder: ""
        - Show: false
        - Multiline: false
        - Password: false
        - Name: cache
        - Advanced: false
        - Dynamic: false
        - Info: ""
        - Type: bool
        - List: false
      - Client:
        - Required: false
        - Placeholder: ""
        - Show: false
        - Multiline: false
        - Password: false
        - Name: client
        - Advanced: false
        - Dynamic: false
        - Info: ""
        - Type: Any
        - List: false
      - Max Retries:
        - Required: false
        - Placeholder: ""
        - Show: false
        - Multiline: false
        - Value: 6
        - Password: false
        - Name: max_retries
        - Advanced: false
        - Dynamic: false
        - Info: ""
        - Type: int
        - List: false
      - Max Tokens:
        - Required: false
        - Placeholder: ""
        - Show: true
        - Multiline: false
        - Password: true
        - Name: max_tokens
        - Advanced: false
        - Dynamic: false
        - Info: ""
        - Type: int
        - List: false

🔧 chore: fix formatting issue in code
📝 docs: update documentation link for `OpenAI` Chat large language models API

🔧 chore: update prompt template configuration in LLMChain node
📝 docs: add documentation link for PromptTemplate in the description

📝 chore(grouped_chat.json): add grouped_chat.json test data file

This commit adds the `grouped_chat.json` file to the `tests/data` directory. The file contains a JSON object representing grouped chat data and is used by the test suite.

📝 chore(one_group_chat.json): add one_group_chat.json test data file

This commit adds the one_group_chat.json file, which contains a simple chat with a custom prompt template and conversational memory buffer. This file is used for testing purposes.

🔧 chore: update node configuration for ConversationBufferMemory, ChatOpenAI, and LLMChain
📝 docs: update documentation links for ConversationBufferMemory and LLMChain

🔧 fix: update prompt template in LLMChain to include conversation history and text input variables
🔧 fix: update ConversationBufferMemory node to include description and documentation link
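The updated LLMChain prompt takes both the conversation history and the current text input. A rough stdlib sketch of the template shape — the variable names mirror the commit, the template text itself is illustrative, and plain `str.format` stands in for LangChain's `PromptTemplate`:

```python
# Illustrative template with the two input variables from the commit:
# the conversation history and the current text input.
PROMPT = (
    "The following is a conversation between a human and an AI.\n"
    "{history}\n"
    "Human: {text}\n"
    "AI:"
)

def render(history: str, text: str) -> str:
    # str.format plays the role of PromptTemplate.format here
    return PROMPT.format(history=history, text=text)

print(render("Human: hi\nAI: hello", "how are you?"))
```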

🎨 style: format and organize code for better readability and maintainability

🆕 feat(Vector Store): add Vector Store agent and Vector Store Info node

The Vector Store agent makes it possible to query a Vector Store and can be constructed directly from one. The Vector Store Info node supplies the information about a Vector Store that the agent needs.

🔧 chore: update configuration options in the OpenAI API client

The configuration options in the OpenAI API client have been updated. This commit includes changes to the following options:

- `max_tokens`: Removed the `required` flag and set `show` to `true`
- `metadata`: Set `show` to `false`
- `model_kwargs`: Set `show` to `true` and `advanced` to `true`
- `model_name`: Added options `gpt-3.5-turbo-0613`, `gpt-3.5-turbo`, `gpt-3.5-turbo-16k-0613`, `gpt-3.5-turbo-16k`, `gpt-4-0613`, `gpt-4-32k-0613`, `gpt-4`, `gpt-4-32k`
- `n`: Removed the `show` flag
- `openai_api_base`: Added `display_name` as "OpenAI API Base" and updated `info` with additional details
- `openai_api_key`: Removed the `required` flag and set `show` to `true`
- `openai_organization`: Removed the `show` flag
- `openai_proxy`: Removed the `show` flag
- `request_timeout`: Removed the `show` flag
- `streaming`: Removed the `show` flag
- `tags`: Removed the `show` flag
- `temperature`: Removed the `show` flag
- `tiktoken_model_name`: Removed the `show` flag
- `verbose`: Removed the `show` flag
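Each bullet above amounts to flipping flags on a template-field dict like the ones shown earlier in this log. A hedged sketch of two of the changes, with a hypothetical before-state:

```python
# Hypothetical before-state of two fields; the dict structure mirrors
# the template dicts shown earlier in this log, values are illustrative.
template = {
    "max_tokens": {"required": True, "show": False, "advanced": False},
    "model_kwargs": {"show": False, "advanced": False},
}

# max_tokens: remove the required flag and set show to true
template["max_tokens"].pop("required", None)
template["max_tokens"]["show"] = True

# model_kwargs: set show to true and advanced to true
template["model_kwargs"]["show"] = True
template["model_kwargs"]["advanced"] = True

print(template["max_tokens"])
```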

🔧 chore: update configuration for ChatOpenAI and Chroma nodes

The configuration for the ChatOpenAI and Chroma nodes has been updated. This includes changes to the allowed_special, disallowed_special, chunk_size, client, deployment, embedding_ctx_length, and max_retries properties. These changes were made to improve the functionality and performance of the nodes.

🔧 chore(config): update OpenAIEmbeddings-YwSvx configuration options

The OpenAIEmbeddings-YwSvx configuration options have been updated to include new fields and values.

🔧 chore(config): update configuration options for OpenAIEmbeddings and Chroma

🔧 chore(config): update configuration options for OpenAIEmbeddings and Chroma to improve flexibility and customization

🔧 chore: update configuration options for RecursiveCharacterTextSplitter and WebBaseLoader in flow

The configuration options for RecursiveCharacterTextSplitter and WebBaseLoader in the flow have been updated. The changes include:

- Chroma: the persist directory and search kwargs options have been modified.
- RecursiveCharacterTextSplitter: the chunk overlap, chunk size, separator type, and separator options have been modified.
- WebBaseLoader: the metadata and web page options have been modified.

🔧 chore(OpenAIEmbeddings): update OpenAIEmbeddings configuration options

The OpenAIEmbeddings node configuration options have been updated to include the following changes:
- `allowed_special` and `disallowed_special` now accept a list of values instead of a single value
- `chunk_size` now accepts an integer value
- `deployment` now accepts a string value
- `embedding_ctx_length` now accepts an integer value
- `headers` now supports multiline values
- `max_retries` now accepts an integer value
- `model` now accepts a string value
- `model_kwargs` now accepts code input
- `openai_api_base` now accepts a password input
- `openai_api_key` now accepts a password input
- `openai_api_type` now accepts a password input
- `openai_api_version` now accepts a password input
- `openai_organization` has been removed from the configuration options

🔧 chore: update OpenAIEmbeddings configuration options in the UI

The OpenAIEmbeddings configuration options in the UI have been updated to include the following changes:
- Added the `openai_organization` option to specify the OpenAI organization.
- Added the `openai_proxy` option to configure the OpenAI proxy.
- Added the `request_timeout` option to set the request timeout.
- Added the `show_progress_bar` option to control the visibility of the progress bar.
- Changed the `tiktoken_model_name` option to be a password field.
- Updated the documentation link for OpenAIEmbeddings.

This commit updates the configuration options to improve the usability and functionality of the OpenAIEmbeddings module in the UI.

🔧 chore: clean up unused code and remove unnecessary fields in the configuration file
📝 docs: update documentation link for the Chroma vectorstore module

🔧 chore: update configuration options for RecursiveCharacterTextSplitter in flow

The configuration options for the RecursiveCharacterTextSplitter node in the flow have been updated. The following changes were made:

- `chunk_size` option: The default value has been changed to 1000.
- `separator_type` option: The available options have been updated to include "Text", "cpp", "go", "html", "java", "js", "latex", "markdown", "php", "proto", "python", "rst", "ruby", "rust", "scala", "sol", and "swift".
- `separators` option: The default value has been changed to ".".

These changes were made to improve the usability and flexibility of the RecursiveCharacterTextSplitter node in the flow.
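For reference, a greatly simplified stdlib sketch of what recursive character splitting with `chunk_size=1000` does — LangChain's actual `RecursiveCharacterTextSplitter` additionally merges small pieces back together and handles `chunk_overlap` and the language-specific separator sets listed above:

```python
def recursive_split(text, separators, chunk_size=1000):
    """Simplified sketch: try each separator in turn, recursing into
    pieces that are still larger than chunk_size."""
    if len(text) <= chunk_size or not separators:
        return [text]
    sep, rest = separators[0], separators[1:]
    # an empty separator means "split into individual characters"
    pieces = text.split(sep) if sep else list(text)
    out = []
    for piece in pieces:
        if len(piece) <= chunk_size:
            out.append(piece)
        else:
            out.extend(recursive_split(piece, rest, chunk_size))
    return out

chunks = recursive_split("a" * 2500 + ". " + "b" * 300,
                         ["\n\n", "\n", ". ", ""], chunk_size=1000)
print(max(len(c) for c in chunks))
```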

📝 chore(vector_store_grouped.json): add vector_store_grouped.json test data file


🔨 refactor(test_graph.py): reformat import statements and improve code readability
🔨 refactor(test_prompts_template.py): change dynamic attribute to True for input variables, output parser, partial variables, template, and validate template
🔨 refactor(test_template.py): reformat import statements and remove duplicate import of BaseModel
🔨 refactor(test_template.py): update value for options in format_dict test
This commit is contained in:
Gabriel Luiz Freitas Almeida 2023-12-12 16:46:41 -03:00
commit 18b4e33062
17 changed files with 1382 additions and 56 deletions


@@ -5,13 +5,14 @@ from uuid import UUID
import orjson
from fastapi import APIRouter, Depends, File, HTTPException, UploadFile
from fastapi.encoders import jsonable_encoder
+from sqlmodel import Session, select
from langflow.api.utils import remove_api_keys, validate_is_component
from langflow.api.v1.schemas import FlowListCreate, FlowListRead
from langflow.services.auth.utils import get_current_active_user
from langflow.services.database.models.flow import Flow, FlowCreate, FlowRead, FlowUpdate
from langflow.services.database.models.user.model import User
from langflow.services.deps import get_session, get_settings_service
-from sqlmodel import Session, select
# build router
router = APIRouter(prefix="/flows", tags=["Flows"])
@@ -122,7 +123,7 @@ def create_flows(
    db_flows = []
    for flow in flow_list.flows:
        flow.user_id = current_user.id
-        db_flow = Flow.from_orm(flow)
+        db_flow = Flow.model_validate(flow)
        session.add(db_flow)
        db_flows.append(db_flow)
    session.commit()


@@ -1,6 +1,11 @@
from uuid import UUID
from fastapi import APIRouter, Depends, HTTPException
+from sqlalchemy import func
+from sqlalchemy.exc import IntegrityError
+from sqlmodel import Session, select
+from sqlmodel.sql.expression import SelectOfScalar
from langflow.api.v1.schemas import UsersResponse
from langflow.services.auth.utils import (
    get_current_active_superuser,
@@ -11,10 +16,6 @@ from langflow.services.auth.utils import (
from langflow.services.database.models.user import User, UserCreate, UserRead, UserUpdate
from langflow.services.database.models.user.crud import get_user_by_id, update_user
from langflow.services.deps import get_session, get_settings_service
-from sqlalchemy import func
-from sqlalchemy.exc import IntegrityError
-from sqlmodel import Session, select
-from sqlmodel.sql.expression import SelectOfScalar
router = APIRouter(tags=["Users"], prefix="/users")
@@ -28,7 +29,7 @@ def add_user(
    """
    Add a new user to the database.
    """
-    new_user = User.from_orm(user)
+    new_user = User.model_validate(user)
    try:
        new_user.password = get_password_hash(user.password)
        new_user.is_active = settings_service.auth_settings.NEW_USER_IS_ACTIVE


@@ -3,12 +3,7 @@ from typing import Callable, Optional, Union
from langchain.chains import LLMChain
from langflow import CustomComponent
-from langflow.field_typing import (
-    BaseLanguageModel,
-    BaseMemory,
-    BasePromptTemplate,
-    Chain,
-)
+from langflow.field_typing import BaseLanguageModel, BaseMemory, BasePromptTemplate, Chain
class LLMChainComponent(CustomComponent):


@@ -60,8 +60,8 @@ class Edge:
        self.source_id = state["source_id"]
        self.target_id = state["target_id"]
        self.target_param = state["target_param"]
-        self.source_handle = state["source_handle"]
-        self.target_handle = state["target_handle"]
+        self.source_handle = state.get("source_handle")
+        self.target_handle = state.get("target_handle")
    def validate_edge(self, source, target) -> None:
        # Validate that the outputs of the source node are valid inputs
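The switch to `state.get(...)` makes `__setstate__` tolerant of states serialized before the handle fields existed. The pattern in isolation, with a stripped-down `Edge`:

```python
class Edge:
    def __setstate__(self, state):
        self.source_id = state["source_id"]
        # .get() returns None for states pickled before these fields
        # existed, instead of raising KeyError
        self.source_handle = state.get("source_handle")
        self.target_handle = state.get("target_handle")

e = Edge()
e.__setstate__({"source_id": "a"})  # old-style state, no handle keys
print(e.source_handle)
```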


@@ -7,8 +7,7 @@ from langflow.graph.edge.base import Edge
from langflow.graph.graph.constants import lazy_load_vertex_dict
from langflow.graph.graph.utils import process_flow
from langflow.graph.vertex.base import Vertex
-from langflow.graph.vertex.types import (FileToolVertex, LLMVertex,
-                                         ToolkitVertex)
+from langflow.graph.vertex.types import FileToolVertex, LLMVertex, ToolkitVertex
from langflow.interface.tools.constants import FILE_TOOLS
from langflow.utils import payload
@@ -28,7 +27,8 @@ class Graph:
        self.top_level_vertices = []
        for vertex in self._vertices:
            if vertex_id := vertex.get("id"):
-                self.top_level_vertices.append(vertex_id)
+                if isinstance(vertex_id, str):
+                    self.top_level_vertices.append(vertex_id)
        self._graph_data = process_flow(self.raw_graph_data)
        self._vertices = self._graph_data["nodes"]
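The added `isinstance` check ensures only string ids reach `top_level_vertices`. Reduced to its essence:

```python
def collect_top_level_ids(vertices):
    """Collect vertex ids, skipping entries whose id is missing or
    not a string (e.g. numeric ids from malformed flow data)."""
    top_level = []
    for vertex in vertices:
        if vertex_id := vertex.get("id"):
            if isinstance(vertex_id, str):
                top_level.append(vertex_id)
    return top_level

print(collect_top_level_ids([{"id": "A"}, {"id": 3}, {}]))
```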


@@ -35,6 +35,7 @@ class Vertex:
        self.artifacts: Dict[str, Any] = {}
        self.task_id: Optional[str] = None
        self.is_task = is_task
+        self.parent_node_id: Optional[str] = self._data.get("parent_node_id")
        self.params = params or {}
    @property


@@ -1,28 +1,18 @@
from typing import Dict, List, Optional
-from langchain.agents.load_tools import (
-    _EXTRA_LLM_TOOLS,
-    _EXTRA_OPTIONAL_TOOLS,
-    _LLM_TOOLS,
-)
-from langchain.tools.python.tool import PythonInputs
+from langchain.agents.load_tools import _EXTRA_LLM_TOOLS, _EXTRA_OPTIONAL_TOOLS, _LLM_TOOLS
+from langchain_experimental.tools.python.tool import PythonInputs
from langflow.custom import customs
from langflow.interface.base import LangChainTypeCreator
-from langflow.interface.tools.constants import (
-    ALL_TOOLS_NAMES,
-    CUSTOM_TOOLS,
-    FILE_TOOLS,
-    OTHER_TOOLS,
-)
+from langflow.interface.tools.constants import ALL_TOOLS_NAMES, CUSTOM_TOOLS, FILE_TOOLS, OTHER_TOOLS
from langflow.interface.tools.util import get_tool_params
from langflow.services.deps import get_settings_service
from langflow.template.field.base import TemplateField
from langflow.template.template.base import Template
from langflow.utils import util
-from langflow.utils.util import build_template_from_class
from langflow.utils.logger import logger
+from langflow.utils.util import build_template_from_class
TOOL_INPUTS = {
    "str": TemplateField(
@@ -167,10 +157,7 @@ class ToolCreator(LangChainTypeCreator):
            tool_params = {**tool_params, **self.type_to_loader_dict[name]["params"]}
        template_dict = template.to_dict()
-        if (
-            "args_schema" in template_dict
-            and template_dict.get("args_schema").get("value") == PythonInputs
-        ):
+        if "args_schema" in template_dict and template_dict.get("args_schema").get("value") == PythonInputs:
            template_dict["args_schema"]["value"] = ""
        return {
            "template": util.format_dict(template_dict),


@@ -29,7 +29,7 @@ def create_api_key(session: Session, api_key_create: ApiKeyCreate, user_id: UUID
    session.add(api_key)
    session.commit()
    session.refresh(api_key)
-    unmasked = UnmaskedApiKeyRead.from_orm(api_key)
+    unmasked = UnmaskedApiKeyRead.model_validate(api_key)
    unmasked.api_key = generated_api_key
    return unmasked


@@ -1,8 +1,11 @@
+import traceback
from typing import Any, Callable, Optional, Tuple
import anyio
-from langflow.services.task.backends.base import TaskBackend
from loguru import logger
+from langflow.services.task.backends.base import TaskBackend
class AnyIOTaskResult:
    def __init__(self, scope):
@@ -10,6 +13,7 @@ class AnyIOTaskResult:
        self._status = "PENDING"
        self._result = None
        self._exception = None
+        self._traceback = None
    @property
    def status(self) -> str:
@@ -17,6 +21,10 @@ class AnyIOTaskResult:
            return "FAILURE" if self._exception is not None else "SUCCESS"
        return self._status
+    @property
+    def traceback(self) -> Optional[str]:
+        return self._traceback
    @property
    def result(self) -> Any:
        return self._result
@@ -29,6 +37,7 @@ class AnyIOTaskResult:
            self._result = await func(*args, **kwargs)
        except Exception as e:
            self._exception = e
+            self._traceback = traceback.format_exc()
        finally:
            self._status = "DONE"
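The new `_traceback` field captures the formatted traceback at the moment the exception is caught — `traceback.format_exc()` is only meaningful while the exception is being handled. A standalone sketch of the same idea using plain `asyncio` in place of anyio:

```python
import asyncio
import traceback

class TaskResult:
    def __init__(self):
        self._exception = None
        self._traceback = None

    async def run(self, func, *args):
        try:
            self._result = await func(*args)
        except Exception as e:
            self._exception = e
            # must be called inside the except block, while the
            # exception is still being handled
            self._traceback = traceback.format_exc()

async def boom():
    raise ValueError("nope")

result = TaskResult()
asyncio.run(result.run(boom))
print(type(result._exception).__name__)  # prints: ValueError
```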


@@ -1,14 +1,14 @@
-import re
-import inspect
import importlib
+import inspect
+import re
from functools import wraps
-from typing import List, Optional, Dict, Any, Union
+from typing import Any, Dict, List, Optional, Union
from docstring_parser import parse
+from langchain.schema import Document
from langflow.template.frontend_node.constants import FORCE_SHOW_FIELDS
from langflow.utils import constants
-from langchain.schema import Document
def remove_ansi_escape_codes(text):
@@ -171,7 +171,9 @@ def get_base_classes(cls):
    """Get the base classes of a class.
    These are used to determine the output of the nodes.
    """
-    if bases := cls.__bases__:
+    if hasattr(cls, "__bases__") and cls.__bases__:
+        bases = cls.__bases__
    result = []
    for base in bases:
        if any(type in base.__module__ for type in ["pydantic", "abc"]):
@@ -252,6 +254,7 @@ def format_dict(dictionary: Dict[str, Any], class_name: Optional[str] = None) ->
        _type = remove_optional_wrapper(_type)
        _type = check_list_type(_type, value)
        _type = replace_mapping_with_dict(_type)
+        _type = get_type_from_union_literal(_type)
        value["type"] = get_formatted_type(key, _type)
        value["show"] = should_show_field(value, key)
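The `get_base_classes` change in the hunk above swaps the walrus assignment for a `hasattr` guard, so objects without `__bases__` (e.g. instances rather than classes) no longer raise `AttributeError`. The guard in isolation:

```python
def safe_bases(cls):
    """Return base classes, or an empty tuple when the object has no
    __bases__ attribute (i.e. it is not a class)."""
    if hasattr(cls, "__bases__") and cls.__bases__:
        return cls.__bases__
    return ()

class A: ...
class B(A): ...

print(safe_bases(B))       # the one-element tuple containing A
print(safe_bases("text"))  # a str instance has no __bases__
```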


@@ -1,4 +1,5 @@
+import json
# we need to import tmpdir
import tempfile
from contextlib import contextmanager, suppress
@@ -150,6 +151,24 @@ def json_flow():
        return f.read()
+@pytest.fixture
+def grouped_chat_json_flow():
+    with open(pytest.GROUPED_CHAT_EXAMPLE_PATH, "r") as f:
+        return f.read()
+@pytest.fixture
+def one_grouped_chat_json_flow():
+    with open(pytest.ONE_GROUPED_CHAT_EXAMPLE_PATH, "r") as f:
+        return f.read()
+@pytest.fixture
+def vector_store_grouped_json_flow():
+    with open(pytest.VECTOR_STORE_GROUPED_EXAMPLE_PATH, "r") as f:
+        return f.read()
@pytest.fixture
def json_flow_with_prompt_and_history():
    with open(pytest.BASIC_CHAT_WITH_PROMPT_AND_HISTORY, "r") as f:

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -1,3 +1,4 @@
+import copy
import json
import os
import pickle
@@ -11,14 +12,18 @@ from langchain.llms.fake import FakeListLLM
from langflow.graph import Graph
from langflow.graph.edge.base import Edge
-from langflow.graph.graph.utils import (find_last_node, process_flow,
-                                        set_new_target_handle, ungroup_node,
-                                        update_source_handle,
-                                        update_target_handle, update_template)
+from langflow.graph.graph.utils import (
+    find_last_node,
+    process_flow,
+    set_new_target_handle,
+    ungroup_node,
+    update_source_handle,
+    update_target_handle,
+    update_template,
+)
from langflow.graph.utils import UnbuiltObject
from langflow.graph.vertex.base import Vertex
-from langflow.graph.vertex.types import (FileToolVertex, LLMVertex,
-                                         ToolkitVertex)
+from langflow.graph.vertex.types import FileToolVertex, LLMVertex, ToolkitVertex
from langflow.processing.process import get_result_and_thought
from langflow.utils.payload import get_root_vertex


@@ -22,7 +22,7 @@ def test_prompt_template(client: TestClient, logged_in_headers):
    template = prompt["template"]
    assert template["input_variables"] == {
        "required": True,
-        "dynamic": False,
+        "dynamic": True,
        "placeholder": "",
        "show": False,
        "multiline": False,
@@ -37,7 +37,7 @@ def test_prompt_template(client: TestClient, logged_in_headers):
    assert template["output_parser"] == {
        "required": False,
-        "dynamic": False,
+        "dynamic": True,
        "placeholder": "",
        "show": False,
        "multiline": False,
@@ -52,7 +52,7 @@ def test_prompt_template(client: TestClient, logged_in_headers):
    assert template["partial_variables"] == {
        "required": False,
-        "dynamic": False,
+        "dynamic": True,
        "placeholder": "",
        "show": False,
        "multiline": False,
@@ -67,7 +67,7 @@ def test_prompt_template(client: TestClient, logged_in_headers):
    assert template["template"] == {
        "required": True,
-        "dynamic": False,
+        "dynamic": True,
        "placeholder": "",
        "show": True,
        "multiline": True,
@@ -98,7 +98,7 @@ def test_prompt_template(client: TestClient, logged_in_headers):
    assert template["validate_template"] == {
        "required": False,
-        "dynamic": False,
+        "dynamic": True,
        "placeholder": "",
        "show": False,
        "multiline": False,


@@ -2,6 +2,8 @@ import importlib
from typing import Dict, List, Optional
import pytest
+from pydantic import BaseModel
from langflow.utils.constants import CHAT_OPENAI_MODELS, OPENAI_MODELS
from langflow.utils.util import (
    build_template_from_class,
@@ -10,7 +12,6 @@ from langflow.utils.util import (
    get_base_classes,
    get_default_factory,
)
-from pydantic import BaseModel
# Dummy classes for testing purposes
@@ -235,7 +236,7 @@ def test_format_dict():
            "password": False,
            "multiline": False,
            "options": CHAT_OPENAI_MODELS,
-            "value": "gpt-3.5-turbo-0613",
+            "value": "gpt-4-1106-preview",
        },
    }
    assert format_dict(input_dict, "OpenAI") == expected_output_openai