The FlowListRead schema is added to support reading a list of flows with their styles. The SQLModelSerializable base model is added to support serialization of SQLModel objects to JSON using orjson. This improves performance and reduces memory usage.
🐛 fix(flow.py): add optional style relationship to Flow model
The style relationship is now optional to allow for flows without styles. This is achieved by passing `uselist=False` in `sa_relationship_kwargs`, so the relationship resolves to a single object that may be `None`.
✨ feat(flow.py): add FlowReadWithStyle and FlowUpdate models
The FlowReadWithStyle model is added to support reading a flow with its style. The FlowUpdate model is added to support updating a flow.
The FlowStyle model is added to the project to represent the style of a flow. It has a color field, an emoji field, and a foreign key to the Flow model. The accompanying schema classes FlowStyleCreate, FlowStyleRead, and FlowStyleUpdate are also added to the file and are used to create, read, and update FlowStyle instances respectively.
The imports for the deleted FlowStyle model are removed from flow_styles.py. The comments for the FlowStyleCreate class are updated to reflect the fields it contains.
✨ feat(router.py): add new routers for flows and flow styles
🔧 refactor(__init__.py): add new routers to __all__ list
🔧 refactor(conftest.py): update import statement for get_session function
The unused code and endpoints related to flows have been removed from the database.py file. New routers for flows and flow styles have been added to the router.py file. The __all__ list in the __init__.py file has been updated to include the new routers. The import statement for the get_session function in the conftest.py file has been updated to reflect the new location of the function.
The orjson library is added as a dependency to speed up JSON serialization and deserialization, which is especially important in high-performance applications.
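As a rough sketch of the serialization helper such a dependency enables (the function name and the stdlib fallback are illustrative, not Langflow's actual implementation):

```python
import json
from datetime import datetime, timezone

try:
    import orjson  # fast third-party JSON library; serializes datetimes natively

    def dumps(obj) -> str:
        # orjson returns bytes, so decode to str for a drop-in replacement
        return orjson.dumps(obj).decode("utf-8")
except ImportError:
    def dumps(obj) -> str:
        # stdlib fallback: coerce unsupported types (like datetime) via str()
        return json.dumps(obj, default=str)

payload = {"name": "My Flow", "updated_at": datetime(2023, 5, 1, tzinfo=timezone.utc)}
print(dumps(payload))
```

Either branch produces a JSON string containing the datetime in ISO-like form, so callers do not need to care which implementation is active.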
The Makefile has been updated to include the `install_backend` command as a dependency of the `backend` target. This ensures that the backend dependencies are installed before running the backend server.
The API endpoint URLs have been updated to include the version number to improve the API's versioning and maintainability. The changes were made to the server.ts file and the tests that use the API endpoints.
🐛 fix(tests): update API endpoint paths in test files
The API endpoint paths in the test files were outdated and have been updated to reflect the current API version. This ensures that the tests run against the correct endpoints.
🐛 fix(frontend): add missing api/v1 prefix to WebSocket URL
🐛 fix(frontend): add missing api/v1 prefix to Vite proxy target
The API routes, WebSocket URL, and Vite proxy target were missing the "api/v1" prefix, preventing the frontend from communicating with the backend. This commit adds the missing prefix in all three locations to fix the issue.
🔨 refactor(custom.py, loading.py, prompts/custom.py, run.py): update import statements to use extract_input_variables_from_prompt from interface.utils module
🔨 refactor(run.py): remove unused imports and functions
🔨 refactor(utils.py): add type hinting to extract_input_variables_from_prompt function and remove unused imports
The extract_input_variables_from_prompt function has been moved to the interface.utils module to improve code organization. The import statements in the affected modules have been updated to reflect this change. Unused imports and functions have been removed from the run.py module. Type hinting has been added to the extract_input_variables_from_prompt function in the interface.utils module.
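The moved function might look roughly like this with type hints added; the regex-based extraction below is an illustrative sketch, not LangChain's or Langflow's exact logic:

```python
import re
from typing import List

def extract_input_variables_from_prompt(prompt: str) -> List[str]:
    """Return the variable names inside single curly braces in a prompt template.

    Double braces ({{...}}) are treated as escaped literals and skipped.
    """
    variables: List[str] = []
    for match in re.finditer(r"(?<!\{)\{([a-zA-Z_][a-zA-Z0-9_]*)\}(?!\})", prompt):
        name = match.group(1)
        if name not in variables:  # preserve order, drop duplicates
            variables.append(name)
    return variables

print(extract_input_variables_from_prompt("Answer {question} using {context}. Literal: {{json}}"))
# ['question', 'context']
```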
🚀 feat(processing): add processing module with get_result_and_steps and fix_memory_inputs functions
The processing module was added to the project with two functions: get_result_and_steps and fix_memory_inputs. The get_result_and_steps function extracts the result and thought from a LangChain object and returns them. The fix_memory_inputs function checks whether a LangChain object has a memory attribute and whether that memory key exists in the object's input variables; if not, it derives a possible new memory key using the get_memory_key function and updates the memory keys using the update_memory_keys function.
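The memory-fixing logic described above can be illustrated with minimal stand-in classes (FakeChain and FakeMemory are hypothetical stand-ins for real LangChain objects, and the get_memory_key heuristic here is an assumption):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FakeMemory:
    memory_key: str

@dataclass
class FakeChain:
    input_variables: List[str]
    memory: Optional[FakeMemory] = None

def get_memory_key(chain: FakeChain) -> Optional[str]:
    # Assumed heuristic: any input variable containing "history" is a memory key
    for name in chain.input_variables:
        if "history" in name:
            return name
    return None

def fix_memory_inputs(chain: FakeChain) -> None:
    # Only act when the chain has memory whose key is missing from its inputs
    if chain.memory is None or chain.memory.memory_key in chain.input_variables:
        return
    new_key = get_memory_key(chain)
    if new_key is not None:
        chain.memory.memory_key = new_key  # stand-in for update_memory_keys

chain = FakeChain(input_variables=["question", "chat_history"], memory=FakeMemory("history"))
fix_memory_inputs(chain)
print(chain.memory.memory_key)  # chat_history
```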
🚀 feat(utils.py): import extract_input_variables_from_prompt from langflow.interface.utils
The `from_payload` class method is added to the `Graph` class to create a graph from a payload: it takes a dictionary as input and returns a `Graph` object. The `extract_input_variables_from_prompt` function is imported from `langflow.interface.utils` and is used to extract input variables from prompts elsewhere in the codebase.
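A minimal sketch of such a `from_payload` class method, using a simplified Graph that holds only nodes and edges (the payload shape assumed here is illustrative):

```python
from typing import Any, Dict, List

class Graph:
    def __init__(self, nodes: List[Dict[str, Any]], edges: List[Dict[str, Any]]) -> None:
        self.nodes = nodes
        self.edges = edges

    @classmethod
    def from_payload(cls, payload: Dict[str, Any]) -> "Graph":
        # Accept both a bare graph dict and one nested under a "data" key
        data = payload.get("data", payload)
        return cls(nodes=data.get("nodes", []), edges=data.get("edges", []))

payload = {"data": {"nodes": [{"id": "prompt-1"}], "edges": []}}
graph = Graph.from_payload(payload)
print(len(graph.nodes))  # 1
```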
✨ feat(utils.py): add process_graph function to process graph data and generate result and thought
The ChatManager class manages active connections and chat history. The ChatHistory class manages the chat history for a client. The process_graph function processes graph data and generates a result and thought. This function is used in the ChatManager class to generate a response back to the frontend.
This commit adds new API endpoints for chat, validation, and version. The chat endpoint is a websocket endpoint for chat. The validation endpoint has three sub-endpoints for validating code, prompt, and node. The version endpoint returns the version of LangFlow.
The base.py file contains the following classes and functions:
- CacheResponse: a pydantic BaseModel that represents a response containing a dictionary of data
- Code: a pydantic BaseModel that represents a code string
- Prompt: a pydantic BaseModel that represents a prompt template string
- CodeValidationResponse: a pydantic BaseModel that represents a response containing the validation results of code
- PromptValidationResponse: a pydantic BaseModel that represents a response containing the validation results of a prompt
- validate_prompt: a function that validates a prompt template string and returns a PromptValidationResponse object
- check_input_variables: a function that checks if input variables contain invalid characters and returns a list of fixed input variables
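One plausible implementation of check_input_variables, sketched under the assumption that "invalid characters" means anything outside letters, digits, and underscores:

```python
import re
from typing import List

def check_input_variables(input_variables: List[str]) -> List[str]:
    """Replace invalid characters in input variable names.

    Anything other than letters, digits, and underscores is
    replaced with an underscore.
    """
    fixed: List[str] = []
    for variable in input_variables:
        clean = re.sub(r"[^0-9a-zA-Z_]", "_", variable)
        fixed.append(clean)
    return fixed

print(check_input_variables(["question", "user input", "chat-history"]))
# ['question', 'user_input', 'chat_history']
```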
The callback.py file contains the following classes:
- AsyncStreamingLLMCallbackHandler: an AsyncCallbackHandler that handles streaming LLM responses asynchronously
- StreamingLLMCallbackHandler: a BaseCallbackHandler that handles streaming LLM responses
These files were added to provide support for Langflow's backend API.
The API now has versioning, with the prefix "/api/v1". The router has been restructured to include the chat, endpoints, and validate routers. This improves the organization of the code and makes it easier to add new routers in the future.
The routers for the langflow API have been moved to a single file for better organization and maintainability. The routers have been imported and included in the main.py file using the new file. A new health check endpoint has been added to the API to check the status of the application.
Added pytest configuration options to the pyproject.toml file. The minimum version of pytest is set to 6.0, the '-ra' option is added to addopts to show all test results, testpaths are set to include both 'tests' and 'integration' directories, console output style is set to 'progress', and DeprecationWarning is ignored. log_cli is set to true to enable logging of pytest output to the console.
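Given the options above, the resulting pyproject.toml section might look roughly like this (a sketch; exact values may differ from the project's file):

```toml
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra"
testpaths = ["tests", "integration"]
console_output_style = "progress"
filterwarnings = ["ignore::DeprecationWarning"]
log_cli = true
```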
### Description
This pull request installs the shadTooltip library and enhances the tooltip functionality by grouping tooltips based on their associated edge classes.

### Changes Made
1. Added the shadTooltip library to the project dependencies.
2. Implemented logic to group tooltips based on their respective edge classes.
3. Updated the tooltip rendering code to display grouped tooltips on the edges.
### Description
This pull request introduces an enhancement to the existing application
by adding persistence to the dark mode feature. Currently, when the page
is refreshed, the dark mode setting reverts to the default light mode.
With this enhancement, the dark mode state will be maintained even after
refreshing the page.
### Changes Made
1. Added a new setting in the application to store the user's preference
for dark mode.
2. Implemented functionality to persist the dark mode preference in
local storage.
3. Modified the page initialization logic to retrieve the dark mode
preference from local storage and apply it on page load.
This commit refactors the FrontendNode class by extracting two methods to handle specific field values related to models and API keys. The _handle_model_specific_field_values method handles the options and is_list properties for fields related to models, while the _handle_api_key_specific_field_values method handles the display_name and required properties for fields related to API keys. This improves the readability and maintainability of the code.
✨ feat(flow.py): add validator to ensure flow field is a valid JSON object with required fields
The flow field in the FlowBase model has been changed from a string to a dictionary to allow for JSON data. A validator has been added to ensure that the flow field is a valid JSON object with the required fields. The tests have been updated to reflect these changes.
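The validator logic might look roughly like the plain function below; the required field names (nodes, edges) are assumptions for illustration:

```python
def validate_flow_data(value: dict) -> dict:
    """Ensure the flow field is a JSON object containing the required keys."""
    if not isinstance(value, dict):
        raise ValueError("flow must be a JSON object")
    required = {"nodes", "edges"}  # assumed required fields
    missing = required - value.keys()
    if missing:
        raise ValueError(f"flow is missing required fields: {sorted(missing)}")
    return value

print(validate_flow_data({"nodes": [], "edges": []}))
```

In a pydantic/SQLModel model this function body would typically sit inside a validator hook on the flow field, raising ValueError to reject invalid payloads.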
There are still some rough edges due to underlying langchain and
openai API limitations, e.g. hwchase17/langchain#3769 and
openai/openai-python#411. Notably, you can't use the Azure and
non-Azure node types in the same server, since there's global openai
configuration needed to choose between the two. So it's probably best
to still leave the Azure node types commented out in the default
config. But with this PR, if you uncomment those nodes and start the
server with OPENAI_API_TYPE=azure, you will have working Azure nodes.
✨ feat(database.py): add default argument to json.dumps to handle datetime objects
🚨 test(database.py): add tests for batch flow creation, file upload, and file download
The fix in database.py handles the case where the data dictionary does not contain the "flows" key. This is important because the code assumes that the "flows" key is present and will raise an exception if it is not. The fix adds a check to see if the "flows" key is present and if not, it creates a new FlowListCreate object with the data as a list of FlowCreate objects.
The feature in database.py adds a default argument to the json.dumps function to handle datetime objects. This is important because the default json encoder does not handle datetime objects and will raise an exception if it encounters one.
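A small self-contained demonstration of the `default=` argument:

```python
import json
from datetime import datetime

record = {"name": "my flow", "updated_at": datetime(2023, 5, 1, 12, 0)}

try:
    json.dumps(record)  # raises TypeError: datetime is not JSON serializable
except TypeError as exc:
    print(f"default encoder failed: {exc}")

# default=str converts otherwise-unsupported objects (like datetime) via str()
serialized = json.dumps(record, default=str)
print(serialized)
```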
The tests in test_database.py cover the batch creation of flows, uploading a file containing flows, and downloading a file containing flows. These tests ensure that the endpoints are working as expected and that the data is being handled correctly.
🚀 feat(flowstyle.py): add FlowStyle model
🚀 feat(flowstyle.py): add FlowStyleCreate and FlowStyleRead models
🐛 fix(settings.py): correct typo in database_url variable name
The Flow model now has a relationship to the FlowStyle model, which allows for the creation of a FlowStyle object that is associated with a Flow object. The FlowStyle model is a new model that contains the color and emoji fields, which are used to style the Flow object. The FlowStyleCreate and FlowStyleRead models are used to create and read FlowStyle objects respectively. The typo in the database_url variable name has been corrected to ensure that the application connects to the correct database.
The order of the class definitions in the file has been changed to match the order of their usage in the code. This improves the readability of the code and makes it easier to understand the relationships between the classes. No functionality has been changed.
This commit imports the flatten_list function from the graph.utils module for use in the AgentVertex class, reusing the shared helper instead of duplicating the logic and keeping the code shorter and more readable.
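A typical flatten_list helper of the kind described might look like this (an illustrative sketch, not necessarily Langflow's implementation):

```python
from typing import Any, List

def flatten_list(nested: List[Any]) -> List[Any]:
    """Recursively flatten arbitrarily nested lists into a single flat list."""
    flat: List[Any] = []
    for item in nested:
        if isinstance(item, list):
            flat.extend(flatten_list(item))  # descend into sublists
        else:
            flat.append(item)
    return flat

print(flatten_list([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```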
The return statement in TextSplitterVertex was reformatted with a line break before the Documents field, making the function's output easier to read and understand.