This pull request adds a new feature to the flow editor that lets
users import example flows from the
[logspace-ai/langflow_examples](https://github.com/logspace-ai/langflow_examples)
repository on GitHub. The feature is accessible via the "Import
Examples" button.
Clicking on the "Import Examples" button opens a dialog box that
displays a list of available example flows from the GitHub repository.
Users can select an example to import, and the flow editor will
automatically add the selected flow to the user's current project.
This feature saves users time and effort by providing a convenient way
to explore and utilize pre-built flows.
Additionally, this feature promotes collaboration and community
involvement by encouraging users to contribute their own flows to the
repository for others to use and benefit from.
This change adds error handling to catch a specific exception that can occur when indexing documents with the ChromaDB library. If there are too few documents for indexing, the raised error message suggests reducing the chunk size in TextSplitter.
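The pattern can be sketched roughly as follows. This is a minimal illustration, not the exact code in this change; the wrapper function name and error message wording are assumptions:

```python
def add_documents_to_index(index_fn, documents):
    """Run an indexing call and translate ChromaDB's low-level failure
    into an actionable error for the user (sketch, names assumed)."""
    try:
        return index_fn(documents)
    except IndexError as exc:  # raised when there are too few documents
        raise ValueError(
            "Not enough documents to index; "
            "try reducing the chunk size in TextSplitter."
        ) from exc
```

Re-raising as `ValueError` with a `from exc` chain keeps the original traceback available while surfacing a message the user can act on.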
Add a new file `callback.py` containing a `StreamingLLMCallbackHandler` class that inherits from `AsyncCallbackHandler` and handles streaming LLM responses. Its constructor takes a `websocket` parameter and stores it as an instance variable, and its `on_llm_new_token` method takes a `token` parameter and sends a `ChatResponse` object over the stored websocket.
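A sketch of the handler described above, with minimal stand-ins for `AsyncCallbackHandler` and `ChatResponse` (in the real code these come from langchain and langflow's schema module, and the exact `ChatResponse` fields are assumptions here):

```python
import asyncio
from dataclasses import dataclass, asdict

class AsyncCallbackHandler:
    """Stand-in for langchain's AsyncCallbackHandler base class."""

@dataclass
class ChatResponse:
    """Stand-in for langflow's chat response model (fields assumed)."""
    message: str
    type: str = "stream"

class StreamingLLMCallbackHandler(AsyncCallbackHandler):
    """Forwards each new LLM token to a websocket as a ChatResponse."""

    def __init__(self, websocket):
        self.websocket = websocket

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        resp = ChatResponse(message=token)
        await self.websocket.send_json(asdict(resp))
```

Because `on_llm_new_token` is async, the handler can await the websocket send without blocking the LLM's token generation loop.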
Update `chat_manager.py` to import the new `StreamingLLMCallbackHandler` class. Add a new function `try_setting_streaming_options` that takes `langchain_object` and `websocket` parameters. This function checks whether the `llm` attribute of the `langchain_object` is an instance of `OpenAI`, `ChatOpenAI`, `AzureOpenAI`, or `AzureChatOpenAI`. If it is, it sets the
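The description above is cut off, so the exact options being set are not stated. A hedged sketch of the part that is described, plus an assumed action after the type check (the real code uses `isinstance` against the imported OpenAI wrapper classes; class-name matching is used here only to keep the sketch dependency-free):

```python
# Names of LLM wrappers that support streaming (per the description above).
STREAMING_LLM_TYPES = ("OpenAI", "ChatOpenAI", "AzureOpenAI", "AzureChatOpenAI")

def try_setting_streaming_options(langchain_object, websocket):
    """Sketch: enable streaming on supported LLM types.

    The original description is truncated, so what gets set after the
    type check is assumed; presumably the function also attaches a
    StreamingLLMCallbackHandler built from `websocket`.
    """
    llm = getattr(langchain_object, "llm", None)
    if llm is not None and type(llm).__name__ in STREAMING_LLM_TYPES:
        llm.streaming = True  # assumed behaviour
    return langchain_object
```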