feat: add message_output and refactor LCAgentComponent (#2755)

* feat(agent.py): add support for handling message responses in LCAgentComponent to improve agent functionality and interaction with messages

* feat: add ToolEnabledLanguageModel type alias to constants.py

This commit adds a new type alias `ToolEnabledLanguageModel` to `constants.py` in the `field_typing` module. The alias (a constrained `TypeVar`) identifies language models that have tooling enabled, and is constrained to the `BaseLanguageModel`, `BaseLLM`, and `BaseChatModel` types.
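For illustration, a minimal sketch of the idea — the three class definitions below are stand-ins for the real LangChain base classes, and `configure` is a hypothetical helper. A constrained `TypeVar` means every use of the alias resolves to exactly one of the listed types and preserves it:

```python
from typing import TypeVar

# Stand-in classes: in langflow these are the real LangChain base classes
# BaseLanguageModel, BaseLLM, and BaseChatModel.
class BaseLanguageModel: ...
class BaseLLM(BaseLanguageModel): ...
class BaseChatModel(BaseLanguageModel): ...

# A constrained TypeVar: a ToolEnabledLanguageModel must be exactly one of
# the three listed base classes.
ToolEnabledLanguageModel = TypeVar(
    "ToolEnabledLanguageModel", BaseLanguageModel, BaseLLM, BaseChatModel
)

def configure(llm: ToolEnabledLanguageModel) -> ToolEnabledLanguageModel:
    # Hypothetical helper: the annotation guarantees the caller gets back
    # the same concrete model type it passed in.
    return llm
```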

* feat: update agent.py to include support for ToolEnabledLanguageModel

This commit modifies `agent.py` so that the `LCToolsAgentComponent` class accepts `ToolEnabledLanguageModel` as an input type in its `HandleInput` section, letting the agent work with language models that have tooling enabled.

* feat: add support for ToolEnabledLanguageModel in LCAgentComponent

This commit modifies `agent.py` again, adding `ToolEnabledLanguageModel` as an accepted input type in the `HandleInput` section of the `LCAgentComponent` class as well.

* feat: add AgentAsyncHandler for handling callbacks from Agents

This commit adds the `AgentAsyncHandler` class to `callback.py`. It is an async callback handler for LangChain events — tool start, tool end, agent action, and agent finish — and logs each event when a log function is provided.
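A stdlib-only sketch of the guard-and-forward pattern the handler uses — the class name here is made up and it does not subclass LangChain's `AsyncCallbackHandler`, but the shape matches the description above: every hook is a no-op unless a log function was supplied.

```python
import asyncio
from typing import Any, Callable, Optional

class ToolStartLogger:
    # Hypothetical stand-in for AgentAsyncHandler, showing only on_tool_start.
    def __init__(self, log_function: Optional[Callable[..., None]] = None):
        self.log_function = log_function

    async def on_tool_start(self, serialized: dict, input_str: str, **kwargs: Any) -> None:
        if self.log_function is None:
            return  # nothing to do when no logger is wired in
        self.log_function(
            {"type": "tool_start", "serialized": serialized, "input_str": input_str, **kwargs},
            name="Tool Start",
        )

events: list = []
handler = ToolStartLogger(log_function=lambda payload, name: events.append((name, payload)))
asyncio.run(handler.on_tool_start({"name": "calculator"}, "2 + 2"))
```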

* chore: add field_serializer decorator to Log class for message serialization

This commit adds a `field_serializer` decorator to the `Log` class in `schema.py`. The serializer walks the `message` attribute recursively so that nested objects (dicts, lists, Pydantic models, and LangChain serializables) are properly serialized.
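The recursive idea can be sketched as a plain function — a simplified stand-in for the `field_serializer` method that uses duck typing instead of Pydantic's real base classes, with `FakeModel` as a made-up leaf object:

```python
def serialize_nested(value):
    # Walk dicts and lists, serializing every model-like leaf along the way.
    if isinstance(value, dict):
        return {key: serialize_nested(val) for key, val in value.items()}
    if isinstance(value, list):
        return [serialize_nested(item) for item in value]
    if hasattr(value, "model_dump"):  # Pydantic v2-style models
        return value.model_dump()
    if hasattr(value, "to_json"):  # LangChain Serializable-style objects
        return value.to_json()
    return value

class FakeModel:
    # Stand-in for a Pydantic v2 model.
    def model_dump(self):
        return {"text": "hi"}
```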

* feat: Fix issue with logs in LangSmithTracer

This commit fixes an issue in the `LangSmithTracer` class where logs were not being properly serialized. The `add_metadata` method now converts each log to a dictionary via `model_dump` unless it is already a dictionary, so all logs serialize correctly.
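The normalization step amounts to a one-line list comprehension, sketched here with a made-up `FakeLog` stand-in (`model_dump` is Pydantic v2's dict-serialization method):

```python
class FakeLog:
    # Stand-in for a Log model exposing model_dump().
    def __init__(self, name: str, text: str):
        self.name, self.text = name, text

    def model_dump(self):
        return {"name": self.name, "text": self.text}

logs = [{"name": "a", "text": "raw dict"}, FakeLog("b", "model instance")]
# Convert any non-dict entries to dicts before building the metadata payload.
logs_dicts = [log if isinstance(log, dict) else log.model_dump() for log in logs]
metadata = {"logs": {log.get("name"): log for log in logs_dicts}}
```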

* feat: Add support for Pydantic V1 models in Log serialization

* fix: Update LCAgentComponent to handle list results in result variable

This commit modifies the `LCAgentComponent` class in `agent.py` to handle list results: when `result` is a list, the `text` values of its entries are joined into a single newline-separated string, since the `Message` class expects a string.
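The joining logic reduces to the following (a minimal sketch with made-up result dictionaries):

```python
result = [{"text": "First step."}, {"text": "Second step."}]
if isinstance(result, list):
    # Each entry is a dict with a "text" key; join them into one string.
    result = "\n".join([result_dict["text"] for result_dict in result])
```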

* feat: Add "name" parameter to AgentAsyncHandler methods

This commit adds a "name" parameter to the `on_tool_start`, `on_tool_end`, `on_agent_action`, and `on_agent_finish` methods of the `AgentAsyncHandler` class in the `callback.py` file. The "name" parameter allows for specifying a name for the event, which can be useful for logging and tracking purposes. This change enhances the functionality of the codebase by providing more flexibility in handling callbacks and improves the clarity of the code.

* feat: Update AgentAsyncHandler constructor to support logging multiple loggable types

This commit updates the `AgentAsyncHandler` constructor in `callback.py`: `log_function` now accepts a callable that takes either a single `LoggableType` or a list of `LoggableType` objects, along with a string naming the event.

* refactor(callback.py): update type hints in AgentAsyncHandler constructor and methods for better readability and accuracy

* refactor(callback.py): simplify on_tool_end method by using **kwargs for flexibility and consistency with other methods
Gabriel Luiz Freitas Almeida 2024-07-17 18:38:40 -03:00 committed by GitHub
commit 2c3e93bd87
5 changed files with 187 additions and 9 deletions


@@ -1,20 +1,26 @@
from abc import abstractmethod
from typing import List
from typing import List, Optional, Union, cast

from langchain.agents import AgentExecutor, BaseMultiActionAgent, BaseSingleActionAgent
from langchain.agents.agent import RunnableAgent
from langchain.agents import AgentExecutor
from langchain_core.messages import BaseMessage
from langchain_core.runnables import Runnable

from langflow.base.agents.callback import AgentAsyncHandler
from langflow.base.agents.utils import data_to_messages
from langflow.custom import Component
from langflow.inputs import BoolInput, IntInput, HandleInput
from langflow.inputs.inputs import InputTypes
from langflow.field_typing import Text, Tool
from langflow.inputs.inputs import DataInput, InputTypes
from langflow.io import BoolInput, HandleInput, IntInput, MessageTextInput
from langflow.schema import Data
from langflow.schema.message import Message
from langflow.template import Output


class LCAgentComponent(Component):
    trace_type = "agent"

    _base_inputs: List[InputTypes] = [
        MessageTextInput(name="input_value", display_name="Input"),
        BoolInput(
            name="handle_parsing_errors",
            display_name="Handle Parse Errors",
@@ -37,8 +43,24 @@ class LCAgentComponent(Component):
    outputs = [
        Output(display_name="Agent", name="agent", method="build_agent"),
        Output(display_name="Response", name="response", method="message_response"),
    ]

    async def message_response(self) -> Message:
        agent = self.build_agent()
        result = await self.run_agent(
            agent=agent,
            inputs=self.input_value,
            tools=self.tools,
            message_history=self.chat_history,
            handle_parsing_errors=self.handle_parsing_errors,
        )
        if isinstance(result, list):
            result = "\n".join([result_dict["text"] for result_dict in result])
        message = Message(text=result, sender="Machine")
        self.status = message
        return message

    def _validate_outputs(self):
        required_output_methods = ["build_agent"]
        output_names = [output.name for output in self.outputs]
@@ -65,6 +87,33 @@ class LCAgentComponent(Component):
        }
        return {**base, "agent_executor_kwargs": agent_kwargs}

    async def run_agent(
        self,
        agent: Union[Runnable, BaseSingleActionAgent, BaseMultiActionAgent, AgentExecutor],
        inputs: str,
        tools: List[Tool],
        message_history: Optional[List[Data]] = None,
        handle_parsing_errors: bool = True,
    ) -> Text:
        if isinstance(agent, AgentExecutor):
            runnable = agent
        else:
            runnable = AgentExecutor.from_agent_and_tools(
                agent=agent,  # type: ignore
                tools=tools,
                verbose=True,
                handle_parsing_errors=handle_parsing_errors,
            )
        input_dict: dict[str, str | list[BaseMessage]] = {"input": inputs}
        if message_history:
            input_dict["chat_history"] = data_to_messages(message_history)
        result = await runnable.ainvoke(input_dict, config={"callbacks": [AgentAsyncHandler(self.log)]})
        self.status = result
        if "output" not in result:
            raise ValueError("Output key not found in result. Tried 'output'.")
        return cast(str, result.get("output"))


class LCToolsAgentComponent(LCAgentComponent):
    _base_inputs = LCAgentComponent._base_inputs + [
@@ -74,7 +123,13 @@ class LCToolsAgentComponent(LCAgentComponent):
            input_types=["Tool", "BaseTool"],
            is_list=True,
        ),
        HandleInput(name="llm", display_name="Language Model", input_types=["LanguageModel"], required=True),
        HandleInput(
            name="llm",
            display_name="Language Model",
            input_types=["LanguageModel", "ToolEnabledLanguageModel"],
            required=True,
        ),
        DataInput(name="chat_history", display_name="Chat History", is_list=True),
    ]

    def build_agent(self) -> AgentExecutor:


@@ -0,0 +1,103 @@
from typing import Any, Callable, Concatenate, Dict, List
from uuid import UUID

from langchain.callbacks.base import AsyncCallbackHandler
from langchain_core.agents import AgentAction, AgentFinish

from langflow.schema.log import LoggableType


class AgentAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    def __init__(self, log_function: Callable[Concatenate[LoggableType | list[LoggableType], ...], None] | None = None):
        self.log_function = log_function

    async def on_tool_start(
        self,
        serialized: Dict[str, Any],
        input_str: str,
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: List[str] | None = None,
        metadata: Dict[str, Any] | None = None,
        inputs: Dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> None:
        if self.log_function is None:
            return
        self.log_function(
            {
                "type": "tool_start",
                "serialized": serialized,
                "input_str": input_str,
                "run_id": run_id,
                "parent_run_id": parent_run_id,
                "tags": tags,
                "metadata": metadata,
                "inputs": inputs,
                **kwargs,
            },
            name="Tool Start",
        )

    async def on_tool_end(self, output: Any, *, run_id: UUID, parent_run_id: UUID | None = None, **kwargs: Any) -> None:
        if self.log_function is None:
            return
        self.log_function(
            {
                "type": "tool_end",
                "output": output,
                "run_id": run_id,
                "parent_run_id": parent_run_id,
                **kwargs,
            },
            name="Tool End",
        )

    async def on_agent_action(
        self,
        action: AgentAction,
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: List[str] | None = None,
        **kwargs: Any,
    ) -> None:
        if self.log_function is None:
            return
        self.log_function(
            {
                "type": "agent_action",
                "action": action,
                "run_id": run_id,
                "parent_run_id": parent_run_id,
                "tags": tags,
                **kwargs,
            },
            name="Agent Action",
        )

    async def on_agent_finish(
        self,
        finish: AgentFinish,
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        tags: List[str] | None = None,
        **kwargs: Any,
    ) -> None:
        if self.log_function is None:
            return
        self.log_function(
            {
                "type": "agent_finish",
                "finish": finish,
                "run_id": run_id,
                "parent_run_id": parent_run_id,
                "tags": tags,
                **kwargs,
            },
            name="Agent Finish",
        )


@@ -21,6 +21,7 @@ from langflow.schema.message import Message
NestedDict: TypeAlias = Dict[str, Union[str, Dict]]

LanguageModel = TypeVar("LanguageModel", BaseLanguageModel, BaseLLM, BaseChatModel)
ToolEnabledLanguageModel = TypeVar("ToolEnabledLanguageModel", BaseLanguageModel, BaseLLM, BaseChatModel)

Retriever = TypeVar(
    "Retriever",
    BaseRetriever,


@@ -120,7 +120,8 @@ class LangSmithTracer(BaseTracer):
        raw_outputs = outputs
        processed_outputs = self._convert_to_langchain_types(outputs)
        if logs:
            child.add_metadata(self._convert_to_langchain_types({"logs": {log.get("name"): log for log in logs}}))
            logs_dicts = [log if isinstance(log, dict) else log.model_dump() for log in logs]
            child.add_metadata(self._convert_to_langchain_types({"logs": {log.get("name"): log for log in logs_dicts}}))
        child.add_metadata(self._convert_to_langchain_types({"outputs": raw_outputs}))
        child.end(outputs=processed_outputs, error=self._error_to_string(error))
        if error:


@@ -1,9 +1,27 @@
from typing_extensions import TypedDict
from pydantic import BaseModel, field_serializer
from pydantic.v1 import BaseModel as V1BaseModel

from langflow.schema.log import LoggableType


class Log(TypedDict):
class Log(BaseModel):
    name: str
    message: LoggableType
    type: str

    @field_serializer("message")
    def serialize_message(self, value):
        # We need to make sure everything inside the message has been serialized
        if isinstance(value, dict):
            return {key: self.serialize_message(value[key]) for key in value}
        if isinstance(value, list):
            return [self.serialize_message(item) for item in value]
        if hasattr(value, "dict") and isinstance(value, V1BaseModel):
            # This is for Pydantic V1 models
            return value.dict()
        if hasattr(value, "to_json"):
            # to_json is for LangChain Serializable objects
            return value.to_json()
        if isinstance(value, BaseModel):
            return value.model_dump(exclude_none=True)
        return value