The abstractmethod decorator is redundant in the CustomChain and CustomAgentExecutor classes as the methods they decorate are already defined as abstract in the parent classes. Removing these decorators improves code readability and reduces clutter.
This commit makes the `function_name` and `initialize` methods abstract in the `CustomChain` and `CustomAgentExecutor` classes, so that subclasses are forced to implement them. Making this contract explicit improves code quality and readability.
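The pattern described above can be sketched with Python's `abc` module. The class and method names mirror the commit, but the bodies are illustrative assumptions, not the actual langflow code:

```python
from abc import ABC, abstractmethod


class CustomChain(ABC):
    """Base class whose subclasses must supply their own logic."""

    @abstractmethod
    def function_name(self) -> str:
        """Return the name of the underlying chain function."""

    @abstractmethod
    def initialize(self, *args, **kwargs):
        """Build and return the underlying chain instance."""


class MyChain(CustomChain):
    def function_name(self) -> str:
        return "my_chain"

    def initialize(self, *args, **kwargs):
        return {"chain": self.function_name()}


# Instantiating CustomChain directly raises TypeError because of the
# abstract methods; MyChain implements both, so it can be created.
chain = MyChain()
```

A subclass that omits either method would also fail at instantiation time, which is exactly the enforcement the commit is after.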
🐛 fix(interface/loading.py): fix custom_node instantiation to handle classes without initialize method
✨ feat(template/frontend_node/prompts.py): change type_name to match class name
✨ feat(template/frontend_node/tools.py): change type_name to match class name
🔥 chore(test_agents_template.py): remove test_agents_settings and update initialize_agent test
The vertex_type assignment in the Vertex class was not handling uppercase template types correctly; it now handles both uppercase and lowercase types. The custom_node instantiation in the instantiate_class function failed for classes without an initialize method; such classes are now instantiated directly. The type_name in the ZeroShotPromptNode and PythonFunctionToolNode classes has been changed to match the class name. The test_agents_settings test has been removed as it is no longer necessary, and the initialize_agent test has been updated to match the new AgentInitializer class name.
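The instantiation fallback described above can be sketched as follows; the function name and the `initialize` signature are illustrative assumptions, not the real langflow code:

```python
def instantiate_custom_node(node_class, params):
    """Use the class's initialize hook if present, else construct directly."""
    if hasattr(node_class, "initialize"):
        return node_class.initialize(**params)
    # No initialize method: instantiate the class directly.
    return node_class(**params)


class WithInitialize:
    @classmethod
    def initialize(cls, **params):
        instance = cls()
        instance.params = params
        return instance


class Plain:
    def __init__(self, **params):
        self.params = params
```

Both shapes of class now go through the same code path without raising `AttributeError`.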
🔨 refactor(base.py): add CustomAgentExecutor class and move CustomChain to base.py
🔨 refactor(custom_lists.py): update reference to CustomChain to CustomAgentExecutor
The CustomChain class has been renamed to CustomAgentExecutor to better reflect its purpose. The class has been moved to base.py, a new CustomAgentExecutor class has been added to custom.py, and the reference to CustomChain in custom_lists.py has been updated to CustomAgentExecutor. These changes clarify the intent of the classes and make the code easier to understand.
✨ feat(langflow): add new chains to config.yaml and custom chains to interface/chains/custom.py
The following chains were added to the config.yaml file: RetrievalQA, RetrievalQAWithSourcesChain, QAWithSourcesChain, ConversationalRetrievalChain, and CombineDocsChain. These chains were added to improve the functionality of the application and provide more options for users.
In addition, custom chains were added to the interface/chains/custom.py file. The CombineDocsChain was added to allow users to combine multiple documents into a single document for use in the question answering chains. The QA_CHAIN_TYPES constant was also added to the frontend_node/constants.py file to provide a list of available question answering chain types.
🔀 refactor(custom_lists.py): merge CUSTOM_AGENTS and CUSTOM_CHAINS into CUSTOM_NODES
🔀 refactor(loading.py): add instantiate_chains function to instantiate chains
🔀 refactor(base.py): add CustomChain class
These changes add support for a new chain, load_qa_chain, in CUSTOM_NODES, and merge CUSTOM_AGENTS and CUSTOM_CHAINS into the single CUSTOM_NODES registry. A new instantiate_chains function is added to instantiate chains, and a new CustomChain class in base.py serves as the base for defining custom chains.
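A minimal sketch of the merged-registry dispatch described above; the registry entries and `instantiate_chains` signature are assumptions for illustration:

```python
# Single registry replacing the separate CUSTOM_AGENTS / CUSTOM_CHAINS maps.
CUSTOM_NODES = {
    "CustomChain": lambda **kw: {"type": "chain", **kw},
    "CustomAgentExecutor": lambda **kw: {"type": "agent", **kw},
}


def instantiate_chains(node_type, params):
    """Instantiate the chain registered under node_type."""
    try:
        factory = CUSTOM_NODES[node_type]
    except KeyError:
        raise ValueError(f"Unknown node type: {node_type}")
    return factory(**params)
```

With one registry, agents and chains share a single lookup path, which is the point of the merge.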
✨ feat(frontend_node): add extra fields to MemoryFrontendNode and ChainFrontendNode
The unused code in custom.py has been removed. The MemoryFrontendNode and ChainFrontendNode classes have been updated to include additional fields that are required for their respective templates. The MemoryFrontendNode now has a return_messages field, and the ChainFrontendNode now has a memory field. These fields are optional and can be toggled on or off as required.
This commit only updates the version number of the package in the pyproject.toml file. The version number is updated to 0.0.84. This is a chore commit as it does not add any new features or fix any bugs, but it is necessary to keep track of the package version.
The conditional statement in line 292 was not properly checking for undefined and null values, which could lead to unexpected behavior. The fix ensures that the statement checks for all falsy values, including undefined and null.
The PythonFunction tool has been added to the list of available tools in the config.yaml file. This allows the backend to use Python functions as part of the language processing pipeline.
The Makefile has been updated to include the `install_backend` command as a dependency of the `backend` target. This ensures that the backend dependencies are installed before running the backend server.
The code was updated to add a null check for the name variable before checking if it contains the string "azure". This prevents a potential runtime error if the name variable is null.
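The guard described above amounts to the following (shown in Python for illustration; the commit may refer to frontend code, and the function name is hypothetical):

```python
def mentions_azure(name):
    """Safely check a possibly-None name for the substring 'azure'."""
    # Short-circuit on None before calling .lower(), avoiding a runtime error.
    return name is not None and "azure" in name.lower()
```

Without the `name is not None` check, a `None` value would raise before the substring test ever runs.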
🚀 feat(loading.py): add support for PythonFunction node type
🚀 feat(constants.py): add PythonFunction to CUSTOM_TOOLS
🚀 feat(custom.py): add PythonFunction class
🚀 feat(frontend_node/tools.py): add PythonFunctionNode class
🧪 test(test_custom_types.py): add test for PythonFunction class
🧪 test(test_llms_template.py): comment out tests for AzureOpenAI and AzureChatOpenAI
The changes add support for a new node type, PythonFunction, which allows users to define a Python function to be executed. The node type is added to CUSTOM_NODES in customs.py, and support for the node type is added to loading.py. The node type is also added to CUSTOM_TOOLS in constants.py, and the PythonFunction class is added to custom.py. The PythonFunctionNode class is added to frontend_node/tools.py. Tests for the new PythonFunction class are added to test_custom_types.py. Tests for AzureOpenAI and AzureChatOpenAI are commented out in test_llms_template.py.
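One way a PythonFunction node might turn a user-supplied code string into a callable is sketched below; the helper name and validation rules are assumptions, not the actual langflow implementation:

```python
def eval_function(code: str):
    """Exec a user-supplied code string and return the function it defines."""
    namespace: dict = {}
    # Note: exec runs arbitrary code; a real system would sandbox this.
    exec(code, namespace)
    functions = [
        value
        for name, value in namespace.items()
        if callable(value) and not name.startswith("__")
    ]
    if len(functions) != 1:
        raise ValueError("Expected the code to define exactly one function")
    return functions[0]


source = "def add(a, b):\n    return a + b"
add = eval_function(source)
```

Requiring exactly one function keeps the node's behavior unambiguous when the string defines helpers or nothing at all.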
The API endpoint URLs have been updated to include the version number to improve the API's versioning and maintainability. The changes were made to the server.ts file and the tests that use the API endpoints.
🐛 fix(tests): update API endpoint paths in test files
The API endpoint paths in the test files were outdated and have been updated to reflect the current API version, ensuring that the tests run against the correct endpoints.
🐛 fix(frontend): add missing api/v1 prefix to WebSocket URL
🐛 fix(frontend): add missing api/v1 prefix to Vite proxy target
The API routes, WebSocket URL, and Vite proxy target were missing the "api/v1" prefix, causing the frontend to not be able to communicate with the backend. This commit adds the missing prefix to all three locations to fix the issue.
🔨 refactor(custom.py, loading.py, prompts/custom.py, run.py): update import statements to use extract_input_variables_from_prompt from interface.utils module
🔨 refactor(run.py): remove unused imports and functions
🔨 refactor(utils.py): add type hinting to extract_input_variables_from_prompt function and remove unused imports
The extract_input_variables_from_prompt function has been moved to the interface.utils module to improve code organization. The import statements in the affected modules have been updated to reflect this change. Unused imports and functions have been removed from the run.py module. Type hinting has been added to the extract_input_variables_from_prompt function in the interface.utils module.
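An illustrative reimplementation of the moved helper, assuming prompts use the usual `{variable}` / `{{escaped}}` f-string-style syntax (the real function in `interface.utils` may differ):

```python
import re
from typing import List


def extract_input_variables_from_prompt(prompt: str) -> List[str]:
    """Return the variable names found in single-brace placeholders."""
    # {{...}} is an escaped literal brace pair, so strip those first.
    cleaned = re.sub(r"\{\{.*?\}\}", "", prompt)
    return re.findall(r"\{([^{}]+)\}", cleaned)
```

Centralizing this in one module is what lets the other call sites drop their local copies.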
🚀 feat(processing): add processing module with get_result_and_steps and fix_memory_inputs functions
The processing module was added to the project with two functions: get_result_and_steps and fix_memory_inputs. The get_result_and_steps function extracts the result and thought from a LangChain object and returns them. The fix_memory_inputs function checks if a LangChain object has a memory attribute and if that memory key exists in the object's input variables. If not, it gets a possible new memory key using the get_memory_key function and updates the memory keys using the update_memory_keys function.
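The memory-key repair can be sketched like this; the object shapes and the fallback logic mirror the commit text but are assumptions, not the real LangChain/langflow API:

```python
class FakeMemory:
    def __init__(self, memory_key):
        self.memory_key = memory_key


class FakeChain:
    def __init__(self, input_variables, memory=None):
        self.input_variables = input_variables
        self.memory = memory


def fix_memory_inputs(langchain_object):
    """Ensure the memory key matches one of the object's input variables."""
    memory = getattr(langchain_object, "memory", None)
    if memory is None:
        return
    if memory.memory_key in langchain_object.input_variables:
        return
    # Stand-in for get_memory_key: fall back to the first input variable.
    memory.memory_key = langchain_object.input_variables[0]
```

Objects without memory, or whose key already matches, pass through unchanged.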
🚀 feat(utils.py): import extract_input_variables_from_prompt from langflow.interface.utils
The `from_payload` class method is added to the `Graph` class to create a graph from a payload: it takes a dictionary as input and returns a `Graph` object. The `extract_input_variables_from_prompt` function, used elsewhere in the codebase to extract input variables from prompts, is now imported from `langflow.interface.utils`.
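A minimal sketch of the `from_payload` pattern; the payload shape (and the optional `data` nesting) is an assumption for illustration:

```python
class Graph:
    """Minimal graph holding the nodes and edges from a payload."""

    def __init__(self, nodes, edges):
        self.nodes = nodes
        self.edges = edges

    @classmethod
    def from_payload(cls, payload: dict) -> "Graph":
        # The frontend may nest the graph under a "data" key (an assumption).
        data = payload.get("data", payload)
        return cls(data.get("nodes", []), data.get("edges", []))
```

Using a classmethod keeps payload parsing out of `__init__`, so graphs can still be built directly from node and edge lists.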
✨ feat(utils.py): add process_graph function to process graph data and generate result and thought
The ChatManager class manages active connections and chat history. The ChatHistory class manages the chat history for a client. The process_graph function processes graph data and generates a result and thought. This function is used in the ChatManager class to generate a response back to the frontend.
This commit adds new API endpoints for chat, validation, and version. The chat endpoint is a websocket endpoint for chat. The validation endpoint has three sub-endpoints for validating code, prompt, and node. The version endpoint returns the version of LangFlow.
The base.py file contains the following classes and functions:
- CacheResponse: a pydantic BaseModel that represents a response containing a dictionary of data
- Code: a pydantic BaseModel that represents a code string
- Prompt: a pydantic BaseModel that represents a prompt template string
- CodeValidationResponse: a pydantic BaseModel that represents a response containing the validation results of code
- PromptValidationResponse: a pydantic BaseModel that represents a response containing the validation results of a prompt
- validate_prompt: a function that validates a prompt template string and returns a PromptValidationResponse object
- check_input_variables: a function that checks if input variables contain invalid characters and returns a list of fixed input variables
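The validation flow listed above might look roughly like this plain-Python sketch (the real base.py wraps the results in pydantic models, and the regex rules here are assumptions):

```python
import re


def check_input_variables(input_variables):
    """Replace characters that are invalid in variable names with underscores."""
    return [re.sub(r"[^a-zA-Z0-9_]", "_", var) for var in input_variables]


def validate_prompt(template: str) -> dict:
    """Extract and sanitize input variables from a prompt template."""
    variables = re.findall(r"\{([^{}]+)\}", template)
    return {
        "input_variables": check_input_variables(variables),
        "valid": len(variables) > 0,
    }
```

Sanitizing the names up front means downstream chains never see variables they cannot bind.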
The callback.py file contains the following classes:
- AsyncStreamingLLMCallbackHandler: an AsyncCallbackHandler that handles streaming LLM responses asynchronously
- StreamingLLMCallbackHandler: a BaseCallbackHandler that handles streaming LLM responses
These files were added to provide support for Langflow's backend API.
The API now has versioning, with the prefix "/api/v1". The router has been restructured to include the chat, endpoints, and validate routers. This improves the organization of the code and makes it easier to add new routers in the future.
The routers for the langflow API have been moved to a single file for better organization and maintainability. The routers have been imported and included in the main.py file using the new file. A new health check endpoint has been added to the API to check the status of the application.
Added pytest configuration options to the pyproject.toml file. The minimum pytest version is set to 6.0; '-ra' is added to addopts to show a short summary for all non-passing tests; testpaths include both the 'tests' and 'integration' directories; console output style is set to 'progress'; DeprecationWarning is ignored; and log_cli is set to true to log pytest output to the console.
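Under the options described, the corresponding pyproject.toml section would look roughly like this (values inferred from the description, not copied from the actual file):

```toml
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra"
testpaths = ["tests", "integration"]
console_output_style = "progress"
filterwarnings = ["ignore::DeprecationWarning"]
log_cli = true
```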
The version number in the pyproject.toml file has been updated from 0.0.82 to 0.0.83. This is a chore commit as it does not introduce any new features or fix any bugs, but rather updates the version number to reflect the changes made in the package.
- Added `--public` to the `lcserve_push` target to make sure it is accessible to everyone (already done in the `dev` branch)
- Changed the `langchain-serve` trigger to the `main` branch, as the release is done from `main`