This release marks a significant milestone for our project with the
integration of the Langflow Store, enabling users to share and use a
wide array of community components. We've also introduced the ability to
group multiple components into a single entity, streamlining the
organization and management of complex workflows.
Key Features:
- Langflow Store Integration: Easily share and use community components,
fostering a collaborative environment.
- Grouped Components: Combine multiple components into one for better
organization and usability.
- Enhanced UI/UX: Numerous improvements to the user interface and user
experience, including loading animations, modal management, and UI
consistency.
- Performance and Security: Dependency updates and refactoring for
improved performance, security, and maintainability.
- Error Handling: Robust error handling mechanisms for API key issues,
component retrieval, and more.
- Backend and Frontend Refinements: Backend services and frontend
components have been optimized for better performance and user
experience.
We've also addressed various bugs and made improvements across the
board, including memory leak fixes, UI enhancements, and codebase
optimizations. New features include the GPT-4 Vision Preview option and
the first steps toward better credential handling; some of these
features will be fully implemented in the coming weeks.
A heartfelt thank you to all our contributors, especially @lucaseduoli,
@igorrCarvalho, @anovazzi1, @Cristhianzl, @merrygoround-of-life,
@kandakji, @onesolpark, @ysekiy, @rm--, @brylie, @Lanznx, @mrab72, and
@gladson for their significant commits and dedication to the project.
Your contributions have been invaluable in bringing this release to
fruition.
🐛 fix(flows.py): change Flow.from_orm() to Flow.model_validate() to ensure data integrity and validation
🐛 fix(users.py): remove unused import statements to improve code cleanliness and maintainability
🐛 fix(users.py): change User.from_orm() to User.model_validate() to ensure data integrity and validation
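The `from_orm()` → `model_validate()` change follows the Pydantic v2 API. A minimal sketch with a hypothetical `Flow` model (the field names are illustrative, not the project's actual schema):

```python
from pydantic import BaseModel

# Hypothetical stand-in for the Flow/User models in flows.py and users.py.
class Flow(BaseModel):
    name: str
    data: dict = {}

# Pydantic v1 style, deprecated in v2:
#   flow = Flow.from_orm(orm_obj)
# Pydantic v2 replacement, which validates field types on construction:
flow = Flow.model_validate({"name": "demo", "data": {"nodes": []}})
print(flow.name)
```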
🐛 fix(LLMChain.py): remove unused import statements to improve code cleanliness and maintainability
🐛 fix(LLMChain.py): remove unnecessary line breaks to improve code readability
🐛 fix(base.py): remove unused import statements to improve code cleanliness and maintainability
🐛 fix(base.py): remove unnecessary line breaks to improve code readability
🐛 fix(base.py): fix condition to append vertex_id to top_level_vertices to avoid appending non-string values
🐛 fix(vertex/base.py): add parent_node_id attribute to Vertex class to support hierarchical graph structures
🐛 fix(base.py): remove unused import statements to improve code cleanliness and maintainability
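The `parent_node_id` and `top_level_vertices` fixes can be sketched together; this is a simplified illustration, not the project's actual `Vertex` class:

```python
from typing import List, Optional

class Vertex:
    """Minimal sketch of a graph vertex; attribute names mirror the commits above."""
    def __init__(self, vertex_id: str, parent_node_id: Optional[str] = None):
        self.id = vertex_id
        # Links a grouped child vertex back to its enclosing group node,
        # enabling hierarchical graph structures.
        self.parent_node_id = parent_node_id

def collect_top_level(vertices: List[Vertex]) -> List[str]:
    top_level_vertices = []
    for v in vertices:
        # Guard mirrors the base.py fix: only append string ids of
        # ungrouped vertices, so non-string values are never appended.
        if isinstance(v.id, str) and v.parent_node_id is None:
            top_level_vertices.append(v.id)
    return top_level_vertices
```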
🚀 feat(GroupTest): add a new node for a simple chat with a custom prompt template and conversational memory buffer
ℹ️ This commit adds a new node to the GroupTest project. The node is a genericNode with the following properties:
- Width: 384
- Height: 621
- ID: ChatOpenAI-rUJ1b
- Type: genericNode
- Position: x: 170.87326389541306, y: 465.8628482073749
- Data:
- Type: ChatOpenAI
- Node:
- Template:
- Callbacks:
- Required: false
- Placeholder: ""
- Show: false
- Multiline: false
- Password: false
- Name: callbacks
- Advanced: false
- Dynamic: false
- Info: ""
- Type: langchain.callbacks.base.BaseCallbackHandler
- List: true
- Cache:
- Required: false
- Placeholder: ""
- Show: false
- Multiline: false
- Password: false
- Name: cache
- Advanced: false
- Dynamic: false
- Info: ""
- Type: bool
- List: false
- Client:
- Required: false
- Placeholder: ""
- Show: false
- Multiline: false
- Password: false
- Name: client
- Advanced: false
- Dynamic: false
- Info: ""
- Type: Any
- List: false
- Max Retries:
- Required: false
- Placeholder: ""
- Show: false
- Multiline: false
- Value: 6
- Password: false
- Name: max_retries
- Advanced: false
- Dynamic: false
- Info: ""
- Type: int
- List: false
- Max Tokens:
- Required: false
- Placeholder: ""
- Show: true
- Multiline: false
- Password: true
- Name: max_tokens
- Advanced: false
- Dynamic: false
- Info: ""
- Type: int
- List: false
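In the flow JSON, the flattened listing above corresponds to a nested node object. A trimmed reconstruction as a Python dict, showing only the `max_tokens` template field (the overall shape is assumed from the listing, not copied from the actual file):

```python
# Hypothetical, trimmed reconstruction of the genericNode entry above.
chat_node = {
    "id": "ChatOpenAI-rUJ1b",
    "type": "genericNode",
    "position": {"x": 170.87326389541306, "y": 465.8628482073749},
    "data": {
        "type": "ChatOpenAI",
        "node": {
            "template": {
                "max_tokens": {
                    "required": False,
                    "placeholder": "",
                    "show": True,
                    "multiline": False,
                    "password": True,
                    "name": "max_tokens",
                    "advanced": False,
                    "dynamic": False,
                    "info": "",
                    "type": "int",
                    "list": False,
                },
            },
        },
    },
}
print(chat_node["data"]["node"]["template"]["max_tokens"]["type"])
```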
🔧 chore: fix formatting issue in code
📝 docs: update documentation link for `OpenAI` Chat large language models API
🔧 chore: update prompt template configuration in LLMChain node
📝 docs: add documentation link for PromptTemplate in the description
📝 chore(grouped_chat.json): add grouped_chat.json test data file
This commit adds the `grouped_chat.json` file to the `tests/data` directory. The file contains a JSON object representing grouped chat data. This file is necessary for testing and will be used in the test suite.
📝 chore(one_group_chat.json): add one_group_chat.json test data file
This commit adds the one_group_chat.json file, which contains a simple chat with a custom prompt template and conversational memory buffer. This file is used for testing purposes.
🔧 chore: update node configuration for ConversationBufferMemory, ChatOpenAI, and LLMChain
📝 docs: update documentation links for ConversationBufferMemory and LLMChain
🐛 fix: update prompt template in LLMChain to include conversation history and text input variables
🐛 fix: update ConversationBufferMemory node to include description and documentation link
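The updated prompt can be illustrated without LangChain itself; a minimal sketch, assuming the chain exposes `chat_history` and `text` as input variables (the exact wording in the node may differ):

```python
# Hypothetical template mirroring the updated LLMChain prompt.
template = (
    "The following is a conversation between a human and an AI.\n"
    "{chat_history}\n"
    "Human: {text}\n"
    "AI:"
)
# ConversationBufferMemory would supply chat_history from prior turns.
rendered = template.format(
    chat_history="Human: hi\nAI: hello",
    text="how are you?",
)
print(rendered)
```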
🎨 style: format and organize code for better readability and maintainability
🆕 feat(Vector Store): add Vector Store agent and Vector Store Info node
The Vector Store agent allows querying a Vector Store. It can be used to construct an agent from a Vector Store. The Vector Store Info node provides information about a Vector Store.
The Vector Store agent and Vector Store Info node are added to support the functionality of querying a Vector Store.
🔧 chore: update configuration options in the OpenAI API client
The configuration options in the OpenAI API client have been updated. This commit includes changes to the following options:
- `max_tokens`: Removed the `required` flag and set `show` to `true`
- `metadata`: Set `show` to `false`
- `model_kwargs`: Set `show` to `true` and `advanced` to `true`
- `model_name`: Added options `gpt-3.5-turbo-0613`, `gpt-3.5-turbo`, `gpt-3.5-turbo-16k-0613`, `gpt-3.5-turbo-16k`, `gpt-4-0613`, `gpt-4-32k-0613`, `gpt-4`, `gpt-4-32k`
- `n`: Removed the `show` flag
- `openai_api_base`: Added `display_name` as "OpenAI API Base" and updated `info` with additional details
- `openai_api_key`: Removed the `required` flag and set `show` to `true`
- `openai_organization`: Removed the `show` flag
- `openai_proxy`: Removed the `show` flag
- `request_timeout`: Removed the `show` flag
- `streaming`: Removed the `show` flag
- `tags`: Removed the `show` flag
- `temperature`: Removed the `show` flag
- `tiktoken_model_name`: Removed the `show` flag
- `verbose`: Removed the `show` flag
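The pattern behind the changes above (removing some flags, setting others) can be sketched as a small helper; the field names below are taken from the list, but the before-state values are assumed for illustration:

```python
def apply_option_updates(template: dict, updates: dict) -> dict:
    """Apply per-field flag changes; a value of None removes the flag."""
    for field, flags in updates.items():
        entry = template.setdefault(field, {})
        for flag, value in flags.items():
            if value is None:
                entry.pop(flag, None)   # e.g. 'Removed the `required` flag'
            else:
                entry[flag] = value     # e.g. 'set `show` to `true`'
    return template

# Hypothetical before-state; only a few of the fields above are shown.
template = {
    "max_tokens": {"required": True, "show": False},
    "openai_api_key": {"required": True, "show": False},
    "n": {"show": False},
}
updated = apply_option_updates(template, {
    "max_tokens": {"required": None, "show": True},
    "openai_api_key": {"required": None, "show": True},
    "n": {"show": None},
})
```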
🔧 chore: update configuration for ChatOpenAI and Chroma nodes
The configuration for the ChatOpenAI and Chroma nodes has been updated. This includes changes to the allowed_special, disallowed_special, chunk_size, client, deployment, embedding_ctx_length, and max_retries properties. These changes were made to improve the functionality and performance of the nodes.
🔧 chore(config): update OpenAIEmbeddings-YwSvx configuration options
The OpenAIEmbeddings-YwSvx configuration options have been updated to include new fields and values. This commit updates the configuration file to reflect these changes.
🔧 chore(config): update configuration options for OpenAIEmbeddings and Chroma
🔧 chore(config): update configuration options for OpenAIEmbeddings and Chroma to improve flexibility and customization
🔧 chore: update configuration options for RecursiveCharacterTextSplitter and WebBaseLoader in flow
The configuration options for RecursiveCharacterTextSplitter and WebBaseLoader in the flow have been updated. The changes include:
- Persist Directory - Chroma: The persist directory option for Chroma has been modified.
- Search Kwargs - Chroma: The search kwargs option for Chroma has been modified.
- Chunk Overlap - RecursiveCharacterTextSplitter: The chunk overlap option for RecursiveCharacterTextSplitter has been modified.
- Chunk Size - RecursiveCharacterTextSplitter: The chunk size option for RecursiveCharacterTextSplitter has been modified.
- Separator Type - RecursiveCharacterTextSplitter: The separator type option for RecursiveCharacterTextSplitter has been modified.
- Separator - RecursiveCharacterTextSplitter: The separator option for RecursiveCharacterTextSplitter has been modified.
- Metadata - WebBaseLoader: The metadata option for WebBaseLoader has been modified.
- Web Page - WebBaseLoader: The web page option for WebBaseLoader has been modified.
🔧 chore(OpenAIEmbeddings): update OpenAIEmbeddings configuration options
The OpenAIEmbeddings node configuration options have been updated to include the following changes:
- `allowed_special` and `disallowed_special` now accept a list of values instead of a single value
- `chunk_size` now accepts an integer value
- `deployment` now accepts a string value
- `embedding_ctx_length` now accepts an integer value
- `headers` now supports multiline values
- `max_retries` now accepts an integer value
- `model` now accepts a string value
- `model_kwargs` now accepts code input
- `openai_api_base` now accepts a password input
- `openai_api_key` now accepts a password input
- `openai_api_type` now accepts a password input
- `openai_api_version` now accepts a password input
- `openai_organization` has been removed from the configuration options
🔧 chore: update OpenAIEmbeddings configuration options in the UI
The OpenAIEmbeddings configuration options in the UI have been updated to include the following changes:
- Added the `openai_organization` option to specify the OpenAI organization.
- Added the `openai_proxy` option to configure the OpenAI proxy.
- Added the `request_timeout` option to set the request timeout.
- Added the `show_progress_bar` option to control the visibility of the progress bar.
- Changed the `tiktoken_model_name` option to be a password field.
- Updated the documentation link for OpenAIEmbeddings.
This commit updates the configuration options to improve the usability and functionality of the OpenAIEmbeddings module in the UI.
🔧 chore: clean up unused code and remove unnecessary fields in the configuration file
📝 docs: update documentation link for the Chroma vectorstore module
🔧 chore: update configuration options for RecursiveCharacterTextSplitter in flow
The configuration options for the RecursiveCharacterTextSplitter node in the flow have been updated. The following changes were made:
- `chunk_size` option: The default value has been changed to 1000.
- `separator_type` option: The available options have been updated to include "Text", "cpp", "go", "html", "java", "js", "latex", "markdown", "php", "proto", "python", "rst", "ruby", "rust", "scala", "sol", and "swift".
- `separators` option: The default value has been changed to ".".
These changes were made to improve the usability and flexibility of the RecursiveCharacterTextSplitter node in the flow.
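As a rough illustration of how `chunk_size` and `separators` interact, here is a naive pure-Python sketch; it is not LangChain's actual RecursiveCharacterTextSplitter, which recursively falls back through multiple separators:

```python
def split_text(text: str, chunk_size: int = 1000, separator: str = ".") -> list:
    """Naive separator-based splitter with a size cap (sketch only)."""
    pieces = [p for p in text.split(separator) if p]
    chunks, current = [], ""
    for piece in pieces:
        # Greedily merge pieces while the result fits in chunk_size.
        candidate = (current + separator + piece) if current else piece
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = piece  # an oversized single piece is kept whole
    if current:
        chunks.append(current)
    return chunks
```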
📝 chore(vector_store_grouped.json): add vector_store_grouped.json test data file
🔨 refactor(test_graph.py): reformat import statements and improve code readability
🔨 refactor(test_prompts_template.py): change dynamic attribute to True for input variables, output parser, partial variables, template, and validate template
🔨 refactor(test_template.py): reformat import statements and remove duplicate import of BaseModel
🔨 refactor(test_template.py): update value for options in format_dict test