feat: voice mode (#4642)

* WIP

* works

* stereo

* ui v0

* unnecessary import

* update steps in voice ws

* [autofix.ci] apply automated fixes

* unused

* merge

* [autofix.ci] apply automated fixes

* cleanly handle missing OPENAI key

* ruff

* [autofix.ci] apply automated fixes

* fix genericIconComponent path

* update for recent async fixes

* accidentally committed HTML file

* better prompt and threading

* client barge-in detection
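
  The barge-in detection above is only named, not shown. A common client-side approach is an RMS-energy threshold over incoming mic frames; this is a minimal illustrative sketch of that idea, not the project's actual detector:

  ```python
  def rms(frame: list[float]) -> float:
      """Root-mean-square energy of one audio frame (samples in [-1, 1])."""
      return (sum(s * s for s in frame) / len(frame)) ** 0.5

  def detect_barge_in(frame: list[float], threshold: float = 0.02) -> bool:
      """True when mic energy crosses the threshold while playback is active,
      i.e. the user has started speaking over the assistant."""
      return rms(frame) > threshold
  ```

  On a barge-in the client would interrupt playback and clear the queued audio (as the later `use-interrupt-playback.ts` commits do).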

* fmt

* error handling

* fixed VAD with 24 kHz to 16 kHz resampling
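
  The resampling fix above (presumably downsampling 24 kHz model audio to the 16 kHz the VAD expects) can be sketched with plain linear interpolation. Function name and approach are illustrative assumptions, not the project's code:

  ```python
  def resample_24k_to_16k(samples: list[float]) -> list[float]:
      """Downsample PCM audio from 24 kHz to 16 kHz (3:2 ratio) by
      linear interpolation between neighboring input samples."""
      ratio = 24000 / 16000  # 1.5 input samples per output sample
      n_out = int(len(samples) / ratio)
      out = []
      for i in range(n_out):
          pos = i * ratio          # fractional position in the input
          lo = int(pos)
          hi = min(lo + 1, len(samples) - 1)
          frac = pos - lo
          out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
      return out
  ```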

* comment out debug file

* better_vad

* lock

* router

* [autofix.ci] apply automated fixes

* mcp fixes

* timeout fix

* global variable exception handling

* don't close the websocket

* fix double send bug

* fix double send bug

* response.output_item event type typo

* voice_mode logging

* 📝 (constants.py): Add "copy_field" attribute to FIELD_FORMAT_ATTRIBUTES list
📝 (webhook.py): Add "copy_field" attribute to MultilineInput component
📝 (input_mixin.py): Add "copy_field" attribute to BaseInputMixin class
📝 (inputs.py): Add "copy_field" attribute to StrInput class
📝 (template/field/base.py): Add "copy_field" attribute to Input class
🚀 (NodeDescription/index.tsx): Remove default placeholder text for emptyPlaceholder prop
 (copyFieldAreaComponent/index.tsx): Add new component for handling copy field functionality
♻️ (strRenderComponent/index.tsx): Refactor component to include CopyFieldAreaComponent when copy_field attribute is present in template data

*  (NodeDescription/index.tsx): refactor renderedDescription useMemo to improve readability and maintainability
♻️ (GenericNode/index.tsx): refactor code to improve readability and maintainability, and optimize rendering logic

* 📝 (webhook.py): Add cURL field to WebhookComponent for better integration with external systems
📝 (graph/base.py): Add logging of vertex build information in Graph class for debugging purposes
📝 (NodeInputField/index.tsx): Add nodeInformationMetadata to NodeInputField for better tracking of node information
📝 (copyFieldAreaComponent/index.tsx): Refactor CopyFieldAreaComponent to handle different types of values, including webhooks
📝 (strRenderComponent/index.tsx): Add WebhookFieldComponent to handle webhook type in StrRenderComponent
📝 (tableNodeCellRender/index.tsx): Add nodeInformationMetadata to TableNodeCellRender for better tracking of node information
📝 (textAreaComponent/index.tsx): Add support for webhook format in TextAreaComponent for better integration with webhooks
📝 (webhookFieldComponent/index.tsx): Add WebhookFieldComponent to handle webhook type in ParameterRenderComponent
📝 (custom-parameter.tsx): Add nodeInformationMetadata to CustomParameterComponent for better tracking of node information
📝 (get-curl-code.tsx): Add support for different formats in getCurlWebhookCode for generating cURL commands
📝 (textAreaModal/index.tsx): Add onCloseModal callback to ComponentTextModal for better handling of modal closing
📝 (index.ts): Add type field to APIClassType for better typing of API classes

*  (index.tsx): Add a button to generate a token in the WebhookFieldComponent for improved user experience and functionality. Update the structure of the component to include the new button and styling adjustments.

* [autofix.ci] apply automated fixes

*  (generate-token-dialog.tsx): add GenerateTokenDialog component to handle token generation in webhookFieldComponent
📝 (index.tsx): import and use GenerateTokenDialog component in WebhookFieldComponent for token generation functionality

*  (frontend): introduce new feature to create API keys with customizable modal properties
🔧 (frontend): add modalProps object to customize modal title, description, input label, input placeholder, button text, generated key message, and show icon flag

* add pool interval variable and tests

* 📝 (NodeOutputfield): Remove unused ScanEyeIcon component
 (validate-webhook.ts): Add function to validate webhook data before processing
♻️ (use-get-builds-pooling-mutation): Refactor to set flow pool based on current flow
🔧 (content-render.tsx): Add data-testid attribute to api key input element
🔧 (webhookComponent.spec.ts): Refactor test to use waitForRequest for monitoring build requests

* [autofix.ci] apply automated fixes

* 🔧 (backend): rename webhook_pooling_interval to webhook_polling_interval for consistency
🔧 (frontend): update references to webhook_pooling_interval to webhook_polling_interval for consistency

* vad + dummy check

* 📝 (frontend): Update import paths and remove unused imports for better code organization and maintainability
🔧 (frontend): Refactor background styles in components to use constants for consistency and easier theming
🚀 (frontend): Add custom SecretKeyModalButton component for better modularity and reusability

* 📝 (use-get-api-keys.ts): add a TODO comment to request API key from DSLF endpoint for future implementation.

* 📝 (input_mixin.py): Remove copy_field attribute from BaseInputMixin as it is no longer needed
♻️ (inputs.py): Remove copy_field attribute from StrInput class as it is no longer needed
♻️ (inputs.py): Set copy_field attribute to False in MultilineInput class to ensure consistency
♻️ (template/field/base.py): Remove copy_field attribute from Input class as it is no longer needed
📝 (textAreaComponent/index.tsx): Replace hardcoded value "CURL_WEBHOOK" with constant WEBHOOK_VALUE for better readability and maintainability

* 🐛 (base.py): fix issue where flow_id could be None by defaulting to an empty string if flow_id is None

* 🔧 (secret-key-modal.tsx): Remove unused SecretKeyModalButton component
🔧 (get-modal-props.tsx): Remove unused getModalPropsApiKey function and related imports and constants

* 📝 (langflow): add noqa comments to suppress linting rule A005 for specific files in the io, logging, and socket modules

*  (frontend): Add voice assistant feature to chat input component
🔧 (frontend): Refactor import path for VoiceAssistant component
🔧 (frontend): Refactor class name for button in upload-file-button component
🔧 (frontend): Refactor class name for button in voice-button component
🔧 (frontend): Refactor class name for button in applies.css
🔧 (frontend): Refactor class name for button in styleUtils.ts

*  (api-key-popup.tsx): Add a new component ApiKeyPopup to enable users to enter their OpenAI API key for voice transcription
♻️ (index.tsx): Refactor VoiceAssistant component to check for the presence of OpenAI API key before starting voice transcription and show ApiKeyPopup component if key is missing
🔧 (apiKeyModal/index.tsx): Remove the obsolete APIKeyModal component as it is no longer needed after implementing ApiKeyPopup in the voice assistant feature

* merge fix

*  (Voice Assistant): Introduce voice transcription feature powered by OpenAI. Add components and hooks for handling audio recording, processing, and WebSocket communication. Implement functionality to start, stop recording, play audio chunks, handle WebSocket messages, and initialize audio context. Add support for entering API key for OpenAI.

*  (voice-assistant.tsx): Add voice assistant feature to the chat input component for recording and processing audio input
🔧 (use-post-voice.tsx): Remove unused file use-post-voice.tsx from the project
♻️ (use-handle-websocket-message.ts): Refactor useHandleWebsocketMessage function to improve readability and remove unnecessary console logs
♻️ (use-initialize-audio.ts): Refactor useInitializeAudio function to handle audio context creation and resume more efficiently
 (use-interrupt-playback.ts): Add useInterruptPlayback function to handle interrupting audio playback
 (use-start-conversation.ts): Add useStartConversation function to initiate a conversation using a WebSocket connection
📝 (chat-input.tsx): Update import path for VoiceAssistant component to match the new file structure

* ♻️ (use-interrupt-playback.ts): refactor useInterruptPlayback hook to clear audio queue, stop playback, and send stop message to audio processor if it exists

* 🔧 (gitattributes): add *.raw file extension as binary to prevent git from modifying the file contents

* lazy load components

* TRACE logging

* new voice_mode ws endpoint and elevenlabs mode

* unique voice

* text modality

* stream elevenlabs

* ws and sentence chunking
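
  Sentence chunking here is the standard trick for lowering TTS latency: flush each completed sentence to the synthesizer as soon as it arrives and keep the trailing fragment buffered. A hypothetical sketch (identifiers are illustrative):

  ```python
  import re

  def chunk_sentences(buffer: str) -> tuple[list[str], str]:
      """Split completed sentences off the front of buffer.

      Returns (finished_sentences, remainder); the remainder is the
      trailing fragment that has no terminal punctuation yet."""
      parts = re.split(r"(?<=[.!?])\s+", buffer)
      if re.search(r"[.!?]\s*$", parts[-1]):
          return parts, ""          # everything ended in terminal punctuation
      return parts[:-1], parts[-1]  # hold the unfinished tail back
  ```

  Each finished sentence can then be sent to the TTS stream while the remainder accumulates more tokens.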

* offload tts to new thread

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* add eleven labs changes on FE

* 🐛 (use-handle-websocket-message.ts): Fix logic to handle error messages and show API key modal when api_key_missing error occurs
♻️ (use-handle-websocket-message.ts): Refactor error handling logic to improve readability and maintainability
📝 (voice-assistant.tsx): Add console log for showApiKeyModal variable to debug an issue

* api_key_missing error includes key_name

*  (use-handle-websocket-message.ts): update status message when API key is missing to provide more context
🔧 (voice-assistant.tsx): add support for saving and handling API key using new API endpoints and alert messages
🔧 (voice-assistant.tsx): update logic to check for existing API key and handle saving new API key with appropriate error handling

* 🔧 (gitignore): remove unnecessary whitespace at the end of the file
🔧 (output_audio.raw): delete output_audio.raw file as it is no longer needed

* 🐛 (use-handle-websocket-message.ts): Fix logic to show API key modal only if hasOpenAIAPIKey is false
🐛 (voice-assistant.tsx): Fix missing hasOpenAIAPIKey parameter in VoiceAssistant component to prevent runtime errors

* [autofix.ci] apply automated fixes

* modalities

* send all openai events and homogenize eleven_client name

* 🔧 (voice_mode.py): Refactor get_or_create_elevenlabs_client function to accept user_id and session parameters for better flexibility and reusability
🔧 (voice-assistant.tsx): Add local storage utility functions to handle saving and retrieving audio settings for voice assistant
🔧 (voiceStore.ts): Create a new store to manage voice-related data such as available voices and providers
🔧 (audio-settings-dialog.tsx): Introduce a new component to manage audio settings for the voice assistant, including provider and voice selection
🔧 (settings-voice-button.tsx): Implement a button component to open the audio settings modal for the voice assistant

* remove unnecessary barge-in queue and fix elif

* queue_service, remove old ws endpoint, and vad tweaks

* traceback and better system prompt

* [autofix.ci] apply automated fixes

* merge

* build_flow fix

* fix flow_and_stream args

* mcp flow_and_stream

* build_flow_and_stream

* retry prompt and conversation_id

* [autofix.ci] apply automated fixes

* prompt

* switch to openai and elevenlabs & add token based auth

* 📝 (voice_mode.py): Add import statements for new modules and classes
📝 (voice_mode.py): Add WebSocket endpoint parameter for session_id
📝 (voice_mode.py): Add logic to save message to database in WebSocket endpoint
 (use-get-messages-mutation.ts): Add function to fetch messages data from API
♻️ (use-start-conversation.ts): Update WebSocket URL to include session_id
♻️ (use-start-recording.ts): Add function to handle fetching messages after recording
♻️ (streamProcessor.ts): Add error handling for already registered processor
♻️ (voice-assistant.tsx): Add function to trigger fetching messages after recording
📝 (new-modal.tsx): Set and update current session ID in modal
📝 (utilityStore.ts): Add currentSessionId state and setter function
📝 (index.ts): Add type definition for currentSessionId in utility store

* [autofix.ci] apply automated fixes

*  (voice_mode.py): add functionality to save user input messages to the database during a conversation in the voice mode feature. This allows for better tracking and analysis of user interactions.

* [autofix.ci] apply automated fixes

* continue to support ws without session_id

* intercept and replace response.done when elevenlabs. and better logging

* 🐛 (voice_mode.py): Fix issue where message was not being saved to the database in certain cases
🐛 (use-start-recording.ts): Fix bug where handleGetMessagesMutation was not being called in all scenarios

* close openai ws when client ws closes

* [autofix.ci] apply automated fixes

* rename elevenlabs event type

* move session to global, improve event error logging

* [autofix.ci] apply automated fixes

* session update merging
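
  "Session update merging" presumably means combining client-supplied session settings with Langflow's defaults before forwarding a `session.update` upstream. A generic recursive dict merge illustrates the idea; this is an assumption about the mechanism, not the actual code:

  ```python
  def deep_merge(base: dict, override: dict) -> dict:
      """Recursively merge override into base without mutating either dict."""
      merged = dict(base)
      for key, value in override.items():
          if isinstance(value, dict) and isinstance(merged.get(key), dict):
              merged[key] = deep_merge(merged[key], value)  # merge nested sections
          else:
              merged[key] = value  # scalars and lists replace wholesale
      return merged
  ```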

* key check fix

* fix session update forwarding to openai

* [autofix.ci] apply automated fixes

* 📝 (chat-view.tsx): Add chat history display to the chat view component and handle file uploads and drag-and-drop.

*  (inputGlobalComponent): Add GeneralDeleteConfirmationModal and GeneralGlobalVariableModal components for reusability and better code organization
📝 (audio-settings-dialog.tsx): Add userElevenLabsApiKey prop to SettingsVoiceModal component for ElevenLabs API key configuration
🔧 (delete-confirmation-modal.tsx): Create GeneralDeleteConfirmationModal component for reusability in delete confirmation modals
🔧 (global-variable-modal.tsx): Create GeneralGlobalVariableModal component for reusability in global variable modals

* 🐛 (voice_mode.py): Fix error saving message to database and add transcript extraction function
🔧 (constants.ts): Update calculation for LANGFLOW_ACCESS_TOKEN_EXPIRE_SECONDS to use environment variable with fallback value
♻️ (audio-settings-dialog.tsx): Refactor imports and remove unused code, improve logic for setting voice provider and voice selection
🔧 (vite.config.mts): Add dotenv package for loading environment variables, update configuration to use environment variables for BACKEND_URL, ACCESS_TOKEN_EXPIRE_SECONDS, CI, and ELEVENLABS_API_KEY

* [autofix.ci] apply automated fixes

* handle BrokenResourceError

* handle BrokenResourceError

* voice UI/UX improvements

*  (voice-assistant.spec.ts): add test for voice assistant feature to ensure it is visible and interactive on the page

*  (frontend): add feature flag for enabling voice assistant functionality
📝 (input-wrapper.tsx): conditionally render VoiceButton based on ENABLE_VOICE_ASSISTANT flag to control visibility of voice assistant feature

*  (button.tsx): Add new button variant 'outlineAmber' for better visual consistency
🔧 (feature-flags.ts): Enable voice assistant feature by setting ENABLE_VOICE_ASSISTANT to true
🔧 (voice-assistant.tsx): Update logic to initialize audio only if hasOpenAIAPIKey is true
🔧 (voice-assistant.tsx): Update styling for voice assistant container and recording indicator
🔧 (voice-assistant.tsx): Update button variant to 'outlineAmber' for settings icon
📝 (index.css): Add new CSS variables for accent-amber and red-foreground colors for better theming
📝 (tailwind.config.mjs): Add 'red-foreground' color variable to Tailwind CSS config for consistency

* 🐛 (feature-flags.ts): fix ENABLE_VOICE_ASSISTANT flag to be set to false instead of true

* 🔧 (voice_mode.py): add global dictionary to store queues for each session and track active message processing tasks
🔧 (voice_mode.py): add message processing queue to ensure ordered processing of messages in the database
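
  The per-session queue described above serializes database writes so messages are stored in arrival order even though handlers are async. A minimal asyncio sketch under that assumption (identifiers are illustrative, not voice_mode.py's actual names):

  ```python
  import asyncio

  # One FIFO queue per session, created lazily on first use.
  session_queues: dict[str, asyncio.Queue] = {}

  async def enqueue_message(session_id: str, message: dict) -> None:
      """Queue a message for its session without blocking the caller."""
      queue = session_queues.setdefault(session_id, asyncio.Queue())
      await queue.put(message)

  async def process_session(session_id: str, handler) -> None:
      """Drain one session's queue, handling messages strictly in order."""
      queue = session_queues.setdefault(session_id, asyncio.Queue())
      while True:
          message = await queue.get()
          await handler(message)  # e.g. save to the database
          queue.task_done()
  ```

  One `process_session` task per active session guarantees ordering within a session while sessions stay independent.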

*  (frontend): enable voice assistant feature flag
♻️ (frontend): refactor code to use functional update pattern in useBarControls hook
🐛 (frontend): fix logic in VoiceAssistant component to properly handle recording and initialization of audio assistant

* [autofix.ci] apply automated fixes

*  (audio-settings-dialog.tsx): add LanguageSelect component to allow users to select preferred language for speech recognition
📝 (audio-settings-dialog.tsx): add documentation for ALL_LANGUAGES constant and SettingsVoiceModalProps interface
🔧 (language-select.tsx): create LanguageSelect component for selecting preferred language for speech recognition
🔧 (use-start-recording.ts): add autoGainControl and sampleRate options for better audio recording quality
🔧 (use-start-recording.ts): set fftSize property on analyserRef for improved audio analysis
🔧 (use-start-recording.ts): pass preferredLanguage to input_audio_buffer.append event for language identification
🔧 (voice-assistant.tsx): add support for setting and saving preferred language for speech recognition

* UI adjustments

* Small fixes for client/LF session update merges

* [autofix.ci] apply automated fixes

* 🔧 (use-bar-controls.ts): Add useRef import to fix missing dependency in useBarControls hook
🔧 (use-bar-controls.ts): Add animationFrameRef and timeDataRef to manage animation and sound detection
🔧 (use-bar-controls.ts): Add support for sound detection using analyserRef and setSoundDetected
🔧 (use-bar-controls.ts): Update useEffect dependencies for better performance
🔧 (voice-assistant.tsx): Add soundDetected state to manage sound detection in VoiceAssistant component
🔧 (voice-assistant.tsx): Pass analyserRef and setSoundDetected to useBarControls hook in VoiceAssistant component
🔧 (chat-message.tsx): Update className logic to handle isAudioMessage condition in ChatMessage component

* 🔧 (use-bar-controls.ts): refactor useBarControls hook to improve readability and maintainability by introducing useRef for baseHeights, lastRandomizeTime, and minHeight, and optimizing the sound detection logic.

*  (use-get-messages-polling.ts): Add new file for handling messages polling functionality
♻️ (use-bar-controls.ts): Refactor useBarControls hook to make setSoundDetected optional
♻️ (use-start-recording.ts): Remove handleGetMessagesMutation function call
♻️ (voice-assistant.tsx): Refactor VoiceAssistant component to use useGetMessagesPollingMutation instead of useGetMessagesMutation and manage soundDetected state using useVoiceStore
📝 (chat-message.tsx): Remove unused import and type ChatMessageType
🔧 (voiceStore.ts): Add soundDetected state and setSoundDetected function to voiceStore
🔧 (voice.types.ts): Add soundDetected boolean type and setSoundDetected function to VoiceStoreType

* 🔧 (frontend): add Button component import to improve code organization
🔧 (frontend): add support for saving OpenAI API key in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponentType interface for flexibility
🔧 (frontend): add commandWidth prop to InputComponent in InputComponent index file
🔧 (frontend): add commandWidth prop to CustomInputPopover in popover index file
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component
🔧 (frontend): add commandWidth prop to InputComponent in SettingsVoiceModal component

* Update session instructions with more clear categories. Change how default LF session and client sessions are merged.
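A rough sketch of the merge behavior described above: client-supplied session settings override the Langflow defaults, except for server-controlled keys. The key names and the set of protected keys here are illustrative assumptions, not the actual Langflow implementation.

```python
# Hypothetical sketch: client keys win over defaults, except protected
# keys the server must control (names chosen for illustration only).
DEFAULT_SESSION = {"voice": "echo", "temperature": 0.8, "tools": []}


def merge_session(default: dict, client: dict, protected: tuple = ("tools",)) -> dict:
    overrides = {k: v for k, v in client.items() if k not in protected}
    return {**default, **overrides}


merged = merge_session(DEFAULT_SESSION, {"voice": "alloy", "tools": ["evil_fn"]})
# client "voice" wins; server-controlled "tools" is preserved
```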

*  (chat-view.tsx): Add support for sound detection in chat view component to enhance user experience and interaction with the application.

* [autofix.ci] apply automated fixes

*  (audio-settings-dialog.tsx): Add debounce functionality to handleOpenAIKeyChange to improve performance and reduce API calls
🔧 (audio-worklet-processor.js): Add audio worklet processor for voice activity detection to enhance voice assistant functionality
♻️ (use-start-recording.ts): Refactor useStartRecording hook to only send non-silent audio data to the server for processing
🔧 (voice-assistant.tsx): Remove unnecessary import and setShowSettingsModal call in handleClickSaveOpenAIApiKey to clean up code
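The debounce on `handleOpenAIKeyChange` lives in TypeScript, but the idea can be sketched as an asyncio analogue: only the last call within the delay window actually fires, so rapid keystrokes produce a single save instead of one API call per character. This is an illustration of the pattern, not the frontend code.

```python
import asyncio


def debounce(delay: float):
    """Illustrative asyncio analogue of the frontend debounce: cancel any
    pending invocation and reschedule, so only the final call runs."""
    def wrap(fn):
        pending: asyncio.Task | None = None

        async def inner(*args, **kwargs):
            nonlocal pending
            if pending is not None:
                pending.cancel()  # drop the superseded call

            async def later():
                await asyncio.sleep(delay)
                await fn(*args, **kwargs)

            pending = asyncio.create_task(later())

        return inner
    return wrap
```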

* 🐛 (audio-worklet-processor.js): Adjust noise and activation thresholds for better sensitivity and accuracy in speech detection
🐛 (audio-worklet-processor.js): Scale RMS volume to match use-bar-controls.ts scale for consistent threshold comparison
🐛 (use-bar-controls.ts): Update threshold for sound detection to align with scaled volume values for accurate detection
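The scaling fix above amounts to putting the worklet's RMS volume and the bar-controls threshold on the same numeric range. A hedged numpy sketch, where the 255.0 scale factor is an assumption rather than the actual frontend value:

```python
import numpy as np


def scaled_rms(pcm16_frame: bytes, scale: float = 255.0) -> float:
    """Compute RMS of a PCM16 frame, normalize to [0, 1], then scale so
    both sides compare against the same range. Scale factor is assumed."""
    samples = np.frombuffer(pcm16_frame, dtype=np.int16).astype(np.float32) / 32768.0
    return float(np.sqrt(np.mean(samples**2)) * scale)


silence = b"\x00\x00" * 480  # 20 ms of silence at 24 kHz
loud = np.full(480, 16384, dtype=np.int16).tobytes()  # half-scale DC "tone"
```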

* 🔧 (use-start-recording.ts): refactor useStartRecording hook to simplify audio data handling and sending process to WebSocket server

* 🐛 (use-bar-controls.ts): adjust sound detection threshold to improve accuracy and responsiveness

* [autofix.ci] apply automated fixes

* 🐛 (voice-assistant.tsx): Update condition to initialize audio only if not recording, has API key, and settings modal is not shown
🐛 (chat-view.tsx): Fix syntax error in rendering new chat section in ChatView component

* ♻️ (chat-view.tsx): remove unnecessary parentheses in ChatView component to improve code readability

* 🐛 (voice-assistant.tsx): fix useEffect dependency array to include showSettingsModal to prevent unnecessary re-renders
♻️ (voice-assistant.tsx): remove duplicate onClick handler for showSettingsModal to improve code readability and maintainability

* 🐛 (voice-assistant.tsx): fix voice assistant settings button not closing audio input when clicked

* 🔧 (audio-settings-dialog.tsx): Remove unused 'open' prop from SettingsVoiceModal component
♻️ (voice-assistant.tsx): Refactor logic to handle audio recording and settings modal visibility
🔧 (voice-assistant.tsx): Replace AudioSettingsDialog component with SettingsVoiceModal component in VoiceAssistant component

* 🐛 (audio-settings-dialog.tsx): reduce debounce timeout from 2000ms to 1000ms for faster response to user input
🐛 (audio-settings-dialog.tsx): adjust alignOffset value in DropdownMenuContent component to -54 for better alignment on the UI

*  (button.tsx): add medium size variant to button component
🔧 (audio-settings-dialog.tsx): add isEditingOpenAIKey boolean prop and setIsEditingOpenAIKey function prop to SettingsVoiceModal component
🔧 (voice-assistant.tsx): add isEditingOpenAIKey state and setIsEditingOpenAIKey function to VoiceAssistant component, update handleSaveApiKey function to handle editing OpenAI API key

* 🐛 (audio-settings-dialog.tsx): Fix issue where the "Save" button text was not updating correctly based on editing state
📝 (voice-assistant.tsx): Add missing semicolon to the code to prevent syntax error

* 🐛 (audio-settings-dialog.tsx): Fix issue with setIsEditingOpenAIKey function call
♻️ (use-bar-controls.ts): Refactor code to properly initialize and handle audio analyzer
 (voice-assistant.tsx): Introduce logic to properly initialize audio and start recording when voice assistant is activated

* 🐛 (audio-settings-dialog.tsx): fix typo in setIsEditingOpenAIKey function call to properly close the editing of OpenAI key when modal is closed

* 🔧 (audio-settings-dialog.tsx): change onClick handler function name from setOpen to onOpenChangeDropdownMenu for clarity and consistency

* lint fixes

* ruff fixes

* [autofix.ci] apply automated fixes

*  (test_voice_mode.py): import pathlib module for improved file path handling
♻️ (test_voice_mode.py): refactor code to use newer numpy random Generator for better random number generation
📝 (test_voice_mode.py): update comments for clarity and consistency in test functions
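The Generator refactor noted above can be sketched as follows: a seeded `np.random.default_rng` replaces legacy `np.random.*` calls, giving the tests reproducible synthetic audio. The 16 kHz length and amplitude below are illustrative assumptions.

```python
import numpy as np

# Seeded Generator: reproducible synthetic PCM16 noise for VAD tests.
rng = np.random.default_rng(seed=1234)
synthetic_noise = (rng.standard_normal(16_000) * 3_000).astype(np.int16)
```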

* 🔧 (components.py): refactor global variables to use a class for managing component cache
🔧 (components.py): replace os module with pathlib for file path operations
🔧 (components.py): refactor global variables to use a class for managing fully loaded components
🔧 (components.py): refactor function to write audio bytes to file using a helper function

* 🔧 Refactor voice_utils.py: Improve error handling in write_audio_to_file function

*  (voice-select.tsx): add unique key prop to SelectItem component to avoid React warning and improve performance

* [autofix.ci] apply automated fixes

*  (audio-settings-dialog.tsx): Update onClick function to setIsEditingOpenAIKey instead of onOpenChangeDropdownMenu for better clarity
📝 (microphone-select.tsx): Update text from "Input" to "Audio Input" for better user understanding
♻️ (voice-assistant.tsx): Refactor handleSaveApiKey function to handle both OpenAI and ElevenLabs API keys, improving code readability and maintainability. Remove unnecessary checks for already saved API keys.

* 📝 (voice_utils.py): add docstring to resample_24k_to_16k function for clarity and documentation purposes
📝 (voice_utils.py): add input validation to ensure frame_24k_bytes is exactly 960 bytes before resampling
📝 (test_voice_mode.py): refactor test_webrtcvad_with_real_data to generate synthetic audio data for testing instead of reading from a file
📝 (test_voice_mode.py): update comments and assertions for clarity and accuracy in speech detection testing
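The validated resampler described above can be sketched like this: one 20 ms frame of 16-bit mono audio at 24 kHz (960 bytes) becomes a 640-byte, 20 ms frame at 16 kHz that webrtcvad can consume. Using `scipy.signal.resample_poly` is an assumption; the real `voice_utils.py` implementation may differ.

```python
import numpy as np
from scipy.signal import resample_poly

BYTES_PER_24K_FRAME = 960  # 20 ms of 16-bit mono audio at 24 kHz (480 samples)


def resample_24k_to_16k(frame_24k_bytes: bytes) -> bytes:
    """Resample one validated 20 ms frame from 24 kHz to 16 kHz."""
    if len(frame_24k_bytes) != BYTES_PER_24K_FRAME:
        msg = f"Expected {BYTES_PER_24K_FRAME} bytes, got {len(frame_24k_bytes)}"
        raise ValueError(msg)
    samples_24k = np.frombuffer(frame_24k_bytes, dtype=np.int16)
    samples_16k = resample_poly(samples_24k, up=2, down=3)  # 24 kHz -> 16 kHz
    return samples_16k.astype(np.int16).tobytes()
```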

* [autofix.ci] apply automated fixes

* 🐛 (feature-flags.ts): fix duplicate declaration of ENABLE_VOICE_ASSISTANT variable

* cache config by session_id

* [autofix.ci] apply automated fixes

*  (audio-settings-dialog.tsx): Add conditional rendering for MicrophoneSelect component based on the presence of microphones array to prevent rendering when no microphones are available
🐛 (microphone-select.tsx): Add optional chaining to navigator.mediaDevices calls to prevent errors when navigator or mediaDevices are null or undefined
🐛 (use-start-recording.ts): Add optional chaining to navigator.mediaDevices calls to prevent errors when navigator or mediaDevices are null or undefined

* ♻️ (voice_mode.py): refactor openai_realtime_session attribute to explicitly define its type as a dictionary with string keys and any values for better code clarity and type safety

* 📝 (voice_mode.py): add Optional import from typing module to improve type hinting
♻️ (voice_mode.py): make ElevenLabs package optional to avoid errors if not installed and provide a fallback mechanism
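The optional-dependency pattern above, sketched with a hedge: if the `elevenlabs` package is missing, fall back to a `None` stub so voice mode still imports, and raise a clear error only when ElevenLabs TTS is actually requested. The helper name is illustrative.

```python
# Fall back to a stub when the optional package is absent.
try:
    from elevenlabs.client import ElevenLabs  # type: ignore[import-not-found]
except ImportError:
    ElevenLabs = None  # type: ignore[assignment, misc]


def make_elevenlabs_client(api_key: str):
    """Illustrative helper: fail loudly only when ElevenLabs is used."""
    if ElevenLabs is None:
        msg = "elevenlabs is not installed; ElevenLabs TTS is disabled"
        raise RuntimeError(msg)
    return ElevenLabs(api_key=api_key)
```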

* [autofix.ci] apply automated fixes

* ♻️ (voice_mode.py): refactor ElevenLabs and ApiError classes to include type ignore comments to suppress redefinition warnings

* [autofix.ci] apply automated fixes

* merge

* event logger to debug

* 🐛 (language-select.tsx): Fix potential null reference error when accessing lang object properties
🐛 (voice-select.tsx): Fix potential null reference error when accessing voice object properties
🐛 (stringManipulation.ts): Fix potential null reference error when calling toLowerCase and toUpperCase functions

* fix openai voices

* error handling

* frontend socket state

* [autofix.ci] apply automated fixes

* add session to event logging

* add elevenlabs to base package

* 🐛 (audio-settings-dialog.tsx): Fix rendering issue with MicrophoneSelect component
🐛 (microphone-select.tsx): Fix potential null pointer exception in rendering microphone label
🐛 (voice-assistant.tsx): Fix issue with toggling recording functionality to properly handle microphone stream and tracks

* 🐛 (voice-assistant.tsx): fix optional chaining for microphoneRef to prevent potential null pointer errors

* [autofix.ci] apply automated fixes

* 🐛 (use-start-conversation.ts): fix variable name from sessionId to currentSessionId for clarity and consistency
🔧 (use-start-conversation.ts): refactor WebSocket connection URL to use current host, port, and protocol for better flexibility and compatibility

* alternating messages

* mypy

* [autofix.ci] apply automated fixes

*  (userSettings.spec.ts): add additional wait times to improve stability and reliability of tests

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: cristhianzl <cristhian.lousa@gmail.com>
Co-authored-by: nfreybler <nfreybler@nvidia.com>
Sebastián Estévez 2025-03-19 20:05:55 -04:00 committed by GitHub
commit 5493ac7b0b
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
85 changed files with 5020 additions and 588 deletions

1
.gitattributes vendored

@@ -33,3 +33,4 @@ Dockerfile text
*.svg binary
*.csv binary
*.wav binary
*.raw binary

2
.gitignore vendored

@@ -276,4 +276,4 @@ src/frontend/temp
.history
.dspy_cache/
*.db
*.db


@@ -108,6 +108,8 @@ dependencies = [
"crewai==0.102.0",
"mcp>=0.9.1",
"uv>=0.5.7",
"webrtcvad>=2.0.10",
"scipy>=1.14.1",
"ag2>=0.1.0",
"scrapegraph-py>=1.12.0",
"pydantic-ai>=0.0.19",
@@ -160,6 +162,9 @@ dev = [
"hypothesis>=6.123.17",
"locust>=2.32.9",
"pytest-rerunfailures>=15.0",
"scrapegraph-py>=1.10.2",
"pydantic-ai>=0.0.19",
"elevenlabs>=1.52.0",
"faker>=37.0.0",
]


@@ -9,12 +9,14 @@ from langflow.api.v1 import (
flows_router,
folders_router,
login_router,
mcp_router,
monitor_router,
starter_projects_router,
store_router,
users_router,
validate_router,
variables_router,
voice_mode_router,
)
from langflow.api.v2 import files_router as files_router_v2
@@ -43,6 +45,8 @@ router_v1.include_router(files_router)
router_v1.include_router(monitor_router)
router_v1.include_router(folders_router)
router_v1.include_router(starter_projects_router)
router_v1.include_router(voice_mode_router)
router_v1.include_router(mcp_router)
router_v2.include_router(files_router_v2)


@@ -12,6 +12,7 @@ from langflow.api.v1.store import router as store_router
from langflow.api.v1.users import router as users_router
from langflow.api.v1.validate import router as validate_router
from langflow.api.v1.variable import router as variables_router
from langflow.api.v1.voice_mode import router as voice_mode_router
__all__ = [
"api_key_router",
@@ -28,4 +29,5 @@ __all__ = [
"users_router",
"validate_router",
"variables_router",
"voice_mode_router",
]


@@ -526,6 +526,19 @@ async def build_vertex_stream(
raise HTTPException(status_code=500, detail="Error building Component") from exc
async def build_flow_and_stream(flow_id, inputs, background_tasks, current_user):
queue_service = get_queue_service()
build_response = await build_flow(
flow_id=flow_id,
inputs=inputs,
background_tasks=background_tasks,
current_user=current_user,
queue_service=queue_service,
)
job_id = build_response["job_id"]
return await get_build_events(job_id, queue_service)
@router.post("/build_public_tmp/{flow_id}/flow")
async def build_public_tmp(
*,


@@ -10,7 +10,7 @@ from uuid import UUID, uuid4
import pydantic
from anyio import BrokenResourceError
from fastapi import APIRouter, Depends, Request
from fastapi import APIRouter, Depends, HTTPException, Request
from fastapi.responses import StreamingResponse
from mcp import types
from mcp.server import NotificationOptions, Server
@@ -18,12 +18,17 @@ from mcp.server.sse import SseServerTransport
from sqlmodel import select
from starlette.background import BackgroundTasks
from langflow.api.v1.chat import build_flow
from langflow.api.v1.chat import build_flow_and_stream
from langflow.api.v1.schemas import InputValueRequest
from langflow.helpers.flow import json_schema_from_flow
from langflow.services.auth.utils import get_current_active_user
from langflow.services.database.models import Flow, User
from langflow.services.deps import get_db_service, get_session, get_settings_service, get_storage_service
from langflow.services.deps import (
get_db_service,
get_session,
get_settings_service,
get_storage_service,
)
from langflow.services.storage.utils import build_content_type_from_extension
logger = logging.getLogger(__name__)
@@ -45,6 +50,20 @@ if False:
logger.debug("MCP module loaded - debug logging enabled")
class MCPConfig:
_instance = None
def __new__(cls):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls._instance.enable_progress_notifications = None
return cls._instance
def get_mcp_config():
return MCPConfig()
router = APIRouter(prefix="/mcp", tags=["mcp"])
server = Server("langflow-mcp-server")
@@ -177,10 +196,12 @@ async def handle_list_tools()
@server.call_tool()
async def handle_call_tool(
name: str, arguments: dict, *, enable_progress_notifications: bool = Depends(get_enable_progress_notifications)
) -> list[types.TextContent]:
async def handle_call_tool(name: str, arguments: dict) -> list[types.TextContent]:
"""Handle tool execution requests."""
mcp_config = get_mcp_config()
if mcp_config.enable_progress_notifications is None:
settings_service = get_settings_service()
mcp_config.enable_progress_notifications = settings_service.settings.mcp_server_enable_progress_notifications
try:
session = await anext(get_session())
background_tasks = BackgroundTasks()
@@ -196,7 +217,7 @@ async def handle_call_tool(
processed_inputs = dict(arguments)
# Initial progress notification
if enable_progress_notifications and (progress_token := server.request_context.meta.progressToken):
if mcp_config.enable_progress_notifications and (progress_token := server.request_context.meta.progressToken):
await server.request_context.session.send_progress_notification(
progress_token=progress_token, progress=0.0, total=1.0
)
@@ -207,7 +228,7 @@ )
)
async def send_progress_updates():
if not (enable_progress_notifications and server.request_context.meta.progressToken):
if not (mcp_config.enable_progress_notifications and server.request_context.meta.progressToken):
return
try:
@@ -220,7 +241,7 @@
await asyncio.sleep(1.0)
except asyncio.CancelledError:
# Send final 100% progress
if enable_progress_notifications:
if mcp_config.enable_progress_notifications:
await server.request_context.session.send_progress_notification(
progress_token=progress_token, progress=1.0, total=1.0
)
@@ -228,17 +249,16 @@
db_service = get_db_service()
collected_results = []
async with db_service.with_session() as async_session:
async with db_service.with_session():
try:
progress_task = asyncio.create_task(send_progress_updates())
try:
response = await build_flow(
response = await build_flow_and_stream(
flow_id=UUID(name),
inputs=input_request,
background_tasks=background_tasks,
current_user=current_user,
session=async_session,
)
async for line in response.body_iterator:
@@ -276,7 +296,7 @@ except Exception as e:
except Exception as e:
context = server.request_context
# Send error progress if there's an exception
if enable_progress_notifications and (progress_token := context.meta.progressToken):
if mcp_config.enable_progress_notifications and (progress_token := context.meta.progressToken):
await server.request_context.session.send_progress_notification(
progress_token=progress_token, progress=1.0, total=1.0
)
@@ -346,4 +366,8 @@ async def handle_sse(request: Request, current_user: Annotated[User, Depends(get
@router.post("/")
async def handle_messages(request: Request):
await sse.handle_post_message(request.scope, request.receive, request._send)
try:
await sse.handle_post_message(request.scope, request.receive, request._send)
except BrokenResourceError as e:
logger.info("MCP Server disconnected")
raise HTTPException(status_code=404, detail=f"MCP Server disconnected, error: {e}") from e


@@ -0,0 +1,947 @@
import asyncio
import base64
import json
import os
# For sync queue and thread
import queue
import threading
import traceback
import uuid
from collections import defaultdict
from datetime import datetime, timezone
from typing import Any
from uuid import UUID, uuid4
import numpy as np
import requests
import sqlalchemy.exc
import webrtcvad
import websockets
from cryptography.fernet import InvalidToken
from elevenlabs.client import ElevenLabs
from fastapi import APIRouter, BackgroundTasks, Security
from sqlalchemy import select
from starlette.websockets import WebSocket, WebSocketDisconnect
from langflow.api.utils import CurrentActiveUser, DbSession
from langflow.api.v1.chat import build_flow_and_stream
from langflow.api.v1.schemas import InputValueRequest
from langflow.logging import logger
from langflow.memory import aadd_messagetables
from langflow.schema.properties import Properties
from langflow.services.auth.utils import api_key_header, api_key_query, api_key_security, get_current_user_by_jwt
from langflow.services.database.models.flow.model import Flow
from langflow.services.database.models.message.model import MessageTable
from langflow.services.deps import get_variable_service, session_scope
from langflow.utils.voice_utils import (
BYTES_PER_24K_FRAME,
VAD_SAMPLE_RATE_16K,
resample_24k_to_16k,
)
router = APIRouter(prefix="/voice", tags=["Voice"])
SILENCE_THRESHOLD = 0.1
PREFIX_PADDING_MS = 100
SILENCE_DURATION_MS = 100
AUDIO_SAMPLE_THRESHOLD = 100
SESSION_INSTRUCTIONS = """
Your instructions will be divided into three mutually exclusive sections: "Permanent", "Default", and "Additional".
"Permanent" instructions are never to be overridden, superseded, or otherwise ignored.
"Default" instructions are provided by default. They may never override "Permanent"
or "Additional" instructions, and they may likewise be superseded by those same other rules.
"Additional" instructions may be empty. When relevant, they override "Default" instructions,
but never "Permanent" instructions.
[PERMANENT] The following instructions are to be considered "Permanent"
* When the user's query necessitates use of one of the enumerated tools, call the execute_flow
function to assist, and pass in the user's entire query as the input parameter, and use that
to craft your responses.
* No other function is allowed to be registered besides the execute_flow function
[DEFAULT] The following instructions are to be considered only "Default"
* Converse with the user to assist with their question.
* Never provide URLs in responses, but you may use URLs in tool calls or when processing those
URLs' content.
* Always (and I mean *always*) let the user know before you call a function that you will be
doing so.
* Always update the user with the required information, when the function returns.
* Unless otherwise requested, only summarize the return results. Do not repeat everything.
* Always call the function again when requested, regardless of whether execute_flow previously
succeeded or failed.
[ADDITIONAL] The following instructions are to be considered only "Additional"
"""
class VoiceConfig:
def __init__(self, session_id: str):
self.session_id = session_id
self.use_elevenlabs = False
self.elevenlabs_voice = "JBFqnCBsd6RMkjVDRZzb"
self.elevenlabs_model = "eleven_multilingual_v2"
self.elevenlabs_client = None
self.elevenlabs_key = None
self.barge_in_enabled = False
self.default_openai_realtime_session = {
"modalities": ["text", "audio"],
"instructions": SESSION_INSTRUCTIONS,
"voice": "echo",
"temperature": 0.8,
"input_audio_format": "pcm16",
"output_audio_format": "pcm16",
"turn_detection": {
"type": "server_vad",
"threshold": SILENCE_THRESHOLD,
"prefix_padding_ms": PREFIX_PADDING_MS,
"silence_duration_ms": SILENCE_DURATION_MS,
},
"input_audio_transcription": {"model": "whisper-1"},
"tools": [],
"tool_choice": "auto",
}
self.openai_realtime_session: dict[str, Any] = {}
def get_session_dict(self):
"""Return a copy of the default session dictionary with current settings."""
return dict(self.default_openai_realtime_session)
# Create a cache for voice configs
voice_config_cache: dict[str, VoiceConfig] = {}
def get_voice_config(session_id: str) -> VoiceConfig:
"""Get or create a VoiceConfig instance for the given session_id."""
if session_id is None:
msg = "session_id cannot be None"
raise ValueError(msg)
if session_id not in voice_config_cache:
voice_config_cache[session_id] = VoiceConfig(session_id)
return voice_config_cache[session_id]
# Create a global dictionary to store queues for each session
message_queues: dict[str, asyncio.Queue] = defaultdict(asyncio.Queue)
# Track active message processing tasks
message_tasks: dict[str, asyncio.Task] = {}
async def get_flow_desc_from_db(flow_id: str) -> Flow:
"""Get flow from database."""
async with session_scope() as session:
stmt = select(Flow).where(Flow.id == UUID(flow_id))
result = await session.exec(stmt)
flow = result.scalar_one_or_none()
if not flow:
error_message = f"Flow with id {flow_id} not found"
raise ValueError(error_message)
return flow.description
def pcm16_to_float_array(pcm_data):
values = np.frombuffer(pcm_data, dtype=np.int16).astype(np.float32)
return values / 32768.0 # Normalize to -1.0 to 1.0
async def text_chunker_with_timeout(chunks, timeout=0.3):
"""Async generator that takes an async iterable of text pieces,
accumulates them, and yields chunks without breaking sentences.
If no new text is received within 'timeout' seconds and there is
buffered text, it flushes that text.
"""
splitters = (".", ",", "?", "!", ";", ":", "—", "-", "(", ")", "[", "]", "}", " ")
buffer = ""
ait = chunks.__aiter__()
while True:
try:
text = await asyncio.wait_for(ait.__anext__(), timeout=timeout)
except asyncio.TimeoutError:
if buffer:
yield buffer + " "
buffer = ""
continue
except StopAsyncIteration:
break
if text is None:
if buffer:
yield buffer + " "
break
if buffer and buffer[-1] in splitters:
yield buffer + " "
buffer = text
elif text and text[0] in splitters:
yield buffer + text[0] + " "
buffer = text[1:]
else:
buffer += text
if buffer:
yield buffer + " "
async def queue_generator(queue: asyncio.Queue):
"""Async generator that yields items from a queue."""
while True:
item = await queue.get()
if item is None:
break
yield item
async def handle_function_call(
websocket: WebSocket,
openai_ws: websockets.WebSocketClientProtocol,
function_call: dict,
function_call_args: str,
flow_id: str,
background_tasks: BackgroundTasks,
current_user: CurrentActiveUser,
conversation_id: str,
):
"""Handle function calls from the OpenAI API."""
try:
args = json.loads(function_call_args) if function_call_args else {}
input_request = InputValueRequest(
input_value=args.get("input"), components=[], type="chat", session=conversation_id
)
response = await build_flow_and_stream(
flow_id=UUID(flow_id),
inputs=input_request,
background_tasks=background_tasks,
current_user=current_user,
)
result = ""
async for line in response.body_iterator:
if not line:
continue
event_data = json.loads(line)
await websocket.send_json({"type": "flow.build.progress", "data": event_data})
if event_data.get("event") == "end_vertex":
text_part = (
event_data.get("data", {})
.get("build_data", "")
.get("data", {})
.get("results", {})
.get("message", {})
.get("text", "")
)
result += text_part
function_output = {
"type": "conversation.item.create",
"item": {
"type": "function_call_output",
"call_id": function_call.get("call_id"),
"output": str(result),
},
}
await openai_ws.send(json.dumps(function_output))
await openai_ws.send(json.dumps({"type": "response.create"}))
except json.JSONDecodeError as e:
trace = traceback.format_exc()
logger.error(f"JSON decode error: {e!s}\ntrace: {trace}")
function_output = {
"type": "conversation.item.create",
"item": {
"type": "function_call_output",
"call_id": function_call.get("call_id"),
"output": f"Error parsing arguments: {e!s}",
},
}
await openai_ws.send(json.dumps(function_output))
except ValueError as e:
trace = traceback.format_exc()
logger.error(f"Value error: {e!s}\ntrace: {trace}")
function_output = {
"type": "conversation.item.create",
"item": {
"type": "function_call_output",
"call_id": function_call.get("call_id"),
"output": f"Error with input values: {e!s}",
},
}
await openai_ws.send(json.dumps(function_output))
except (ConnectionError, websockets.exceptions.WebSocketException) as e:
trace = traceback.format_exc()
logger.error(f"Connection error: {e!s}\ntrace: {trace}")
function_output = {
"type": "conversation.item.create",
"item": {
"type": "function_call_output",
"call_id": function_call.get("call_id"),
"output": f"Connection error: {e!s}",
},
}
await openai_ws.send(json.dumps(function_output))
except (KeyError, AttributeError, TypeError) as e:
logger.error(f"Error executing flow: {e}")
logger.error(traceback.format_exc())
function_output = {
"type": "conversation.item.create",
"item": {
"type": "function_call_output",
"call_id": function_call.get("call_id"),
"output": f"Error executing flow: {e}",
},
}
await openai_ws.send(json.dumps(function_output))
# --- Synchronous text chunker using a standard queue ---
def sync_text_chunker(sync_queue_obj: queue.Queue, timeout: float = 0.3):
"""Synchronous generator that reads text pieces from a sync queue,
accumulates them, and yields complete chunks.
"""
splitters = (".", ",", "?", "!", ";", ":", "—", "-", "(", ")", "[", "]", "}", " ")
buffer = ""
while True:
try:
text = sync_queue_obj.get(timeout=timeout)
except queue.Empty:
if buffer:
yield buffer + " "
buffer = ""
continue
if text is None:
if buffer:
yield buffer + " "
break
if buffer and buffer[-1] in splitters:
yield buffer + " "
buffer = text
elif text and text[0] in splitters:
yield buffer + text[0] + " "
buffer = text[1:]
else:
buffer += text
if buffer:
yield buffer + " "
@router.websocket("/ws/flow_as_tool/{flow_id}")
async def flow_as_tool_websocket_no_session(
client_websocket: WebSocket,
flow_id: str,
background_tasks: BackgroundTasks,
session: DbSession,
):
session_id = str(uuid4())
await flow_as_tool_websocket(
client_websocket=client_websocket,
flow_id=flow_id,
background_tasks=background_tasks,
session=session,
session_id=session_id,
)
@router.websocket("/ws/flow_as_tool/{flow_id}/{session_id}")
async def flow_as_tool_websocket(
client_websocket: WebSocket,
flow_id: str,
background_tasks: BackgroundTasks,
session: DbSession,
session_id: str,
):
"""WebSocket endpoint registering the flow as a tool for real-time interaction."""
try:
await client_websocket.accept()
voice_config = get_voice_config(session_id)
token = client_websocket.cookies.get("access_token_lf")
current_user = None
if token:
current_user = await get_current_user_by_jwt(token, session)
if current_user is None:
current_user = await api_key_security(Security(api_key_query), Security(api_key_header))
if current_user is None:
await client_websocket.send_json(
{
"type": "error",
"code": "langflow_auth",
"message": "You must pass a valid Langflow token or cookie.",
}
)
return
variable_service = get_variable_service()
try:
openai_key_value = await variable_service.get_variable(
user_id=current_user.id, name="OPENAI_API_KEY", field="openai_api_key", session=session
)
openai_key = openai_key_value if openai_key_value is not None else os.getenv("OPENAI_API_KEY", "")
if not openai_key or openai_key == "dummy":
await client_websocket.send_json(
{
"type": "error",
"code": "api_key_missing",
"key_name": "OPENAI_API_KEY",
"message": "OpenAI API key not found. Please set your API key as an env var or a "
"global variable.",
}
)
return
except Exception as e: # noqa: BLE001
logger.error(f"Error with API key: {e}")
logger.error(traceback.format_exc())
return
try:
flow_description = await get_flow_desc_from_db(flow_id)
flow_tool = {
"name": "execute_flow",
"type": "function",
"description": flow_description or "Execute the flow with the given input",
"parameters": {
"type": "object",
"properties": {"input": {"type": "string", "description": "The input to send to the flow"}},
"required": ["input"],
},
}
except Exception as e: # noqa: BLE001
await client_websocket.send_json({"error": f"Failed to load flow: {e!s}"})
logger.error(f"Failed to load flow: {e}")
return
url = "wss://api.openai.com/v1/realtime?model=gpt-4o-mini-realtime-preview"
headers = {
"Authorization": f"Bearer {openai_key}",
"OpenAI-Beta": "realtime=v1",
}
def init_session_dict():
session_dict = voice_config.get_session_dict()
session_dict["tools"] = [flow_tool]
return session_dict
async with websockets.connect(url, extra_headers=headers) as openai_ws:
openai_realtime_session = init_session_dict()
session_update = {"type": "session.update", "session": openai_realtime_session}
await openai_ws.send(json.dumps(session_update))
# Setup for VAD processing.
vad_queue: asyncio.Queue = asyncio.Queue()
vad_audio_buffer = bytearray()
bot_speaking_flag = [False]
vad = webrtcvad.Vad(mode=3)
async def process_vad_audio() -> None:
nonlocal vad_audio_buffer
last_speech_time = datetime.now(tz=timezone.utc)
while True:
base64_data = await vad_queue.get()
raw_chunk_24k = base64.b64decode(base64_data)
vad_audio_buffer.extend(raw_chunk_24k)
has_speech = False
while len(vad_audio_buffer) >= BYTES_PER_24K_FRAME:
frame_24k = vad_audio_buffer[:BYTES_PER_24K_FRAME]
del vad_audio_buffer[:BYTES_PER_24K_FRAME]
try:
frame_16k = resample_24k_to_16k(frame_24k)
is_speech = vad.is_speech(frame_16k, VAD_SAMPLE_RATE_16K)
if is_speech:
has_speech = True
logger.trace("!", end="")
if bot_speaking_flag[0]:
await openai_ws.send(json.dumps({"type": "response.cancel"}))
bot_speaking_flag[0] = False
except Exception as e: # noqa: BLE001
logger.error(f"VAD processing failed: {e}")
continue
if has_speech:
last_speech_time = datetime.now(tz=timezone.utc)
logger.trace(".", end="")
else:
time_since_speech = (datetime.now(tz=timezone.utc) - last_speech_time).total_seconds()
if time_since_speech >= 1.0:
logger.trace("_", end="")
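webrtcvad only accepts 10, 20, or 30 ms frames at 8/16/32/48 kHz, which is why the loop above cuts the 24 kHz stream into fixed-size frames and resamples each to 16 kHz before classification. The frame-size arithmetic, assuming 20 ms frames of 16-bit mono PCM (the module's actual `BYTES_PER_24K_FRAME` constant is defined elsewhere):

```python
# Frame-size arithmetic for the VAD loop above.
# Assumptions: 20 ms frames, 16-bit (2-byte) mono PCM.
SAMPLE_WIDTH = 2   # bytes per 16-bit sample
FRAME_MS = 20      # webrtcvad accepts 10, 20, or 30 ms frames

def frame_bytes(sample_rate_hz: int) -> int:
    """Bytes in one frame of 16-bit mono PCM at the given rate."""
    return sample_rate_hz * FRAME_MS // 1000 * SAMPLE_WIDTH

BYTES_PER_24K_FRAME = frame_bytes(24_000)  # 960 bytes in, before resampling
BYTES_PER_16K_FRAME = frame_bytes(16_000)  # 640 bytes out, fed to webrtcvad
```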
shared_state = {"last_event_type": None, "event_count": 0}
def log_event(event, _direction: str) -> None:
event_type = event["type"]
# shared_state is initialized above; only log when the event type changes.
if event_type != shared_state["last_event_type"]:
    logger.debug(f"Event (session - {session_id}): {_direction} {event_type}")
    shared_state["last_event_type"] = event_type
    shared_state["event_count"] = 0
shared_state["event_count"] += 1
def send_event(websocket, event, loop, direction) -> None:
asyncio.run_coroutine_threadsafe(
websocket.send_json(event),
loop,
).result()
log_event(event, direction)
def pass_through(from_dict, to_dict, keys):
for key in keys:
if key in from_dict:
to_dict[key] = from_dict[key]
def merge(from_dict, to_dict, keys):
for key in keys:
if key in from_dict:
if not isinstance(from_dict[key], str):
msg = f"Only string values are supported for merge. Issue with key: {key}"
raise ValueError(msg)
new_value = from_dict[key]
if key not in to_dict:
to_dict[key] = new_value
else:
if not isinstance(to_dict[key], str):
msg = f"Only string values are supported for merge. Issue with key: {key}"
raise ValueError(msg)
old_value = to_dict[key]
to_dict[key] = f"{old_value}\n{new_value}"
def warn_if_present(config_dict, keys):
for key in keys:
if key in config_dict:
logger.warning(f"Removing key {key} from session.update.")
def update_global_session(from_session):
# Create a new session dict instead of modifying global
new_session = init_session_dict()
pass_through(
from_session,
new_session,
["voice", "temperature", "turn_detection", "input_audio_transcription"],
)
merge(from_session, new_session, ["instructions"])
warn_if_present(
from_session, ["modalities", "tools", "tool_choice", "input_audio_format", "output_audio_format"]
)
return new_session
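`pass_through` copies keys verbatim while `merge` appends string values, so a client's `instructions` extend the server prompt rather than replace it. A small demonstration of the difference (sample dicts are illustrative, and the type checks are dropped for brevity):

```python
def pass_through(from_dict, to_dict, keys):
    # Copy keys verbatim when present.
    for key in keys:
        if key in from_dict:
            to_dict[key] = from_dict[key]

def merge(from_dict, to_dict, keys):
    # Append string values instead of overwriting.
    for key in keys:
        if key in from_dict:
            if key in to_dict:
                to_dict[key] = f"{to_dict[key]}\n{from_dict[key]}"
            else:
                to_dict[key] = from_dict[key]

incoming = {"voice": "alloy", "instructions": "Be brief."}
session = {"instructions": "You can run flows."}
pass_through(incoming, session, ["voice"])
merge(incoming, session, ["instructions"])
# session["instructions"] == "You can run flows.\nBe brief."
```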
# --- Spawn a text delta queue and task for TTS ---
text_delta_queue: asyncio.Queue = asyncio.Queue()
text_delta_task: asyncio.Task | None = None # Will hold our background task.
async def process_text_deltas(async_q: asyncio.Queue):
"""Transfer text deltas from the async queue to a synchronous queue,
then run the ElevenLabs TTS call (which expects a sync generator) in a separate thread.
"""
sync_q: queue.Queue = queue.Queue()
async def transfer_text_deltas():
while True:
item = await async_q.get()
sync_q.put(item)
if item is None:
break
# Schedule the transfer task in the main event loop.
transfer_task = asyncio.create_task(transfer_text_deltas())
# Create the synchronous generator from the sync queue.
sync_gen = sync_text_chunker(sync_q, timeout=0.3)
elevenlabs_client = await get_or_create_elevenlabs_client(current_user.id, session)
if elevenlabs_client is None:
transfer_task.cancel()
return
# Capture the current event loop to schedule send operations.
main_loop = asyncio.get_running_loop()
def tts_thread():
# Create a new event loop for this thread.
new_loop = asyncio.new_event_loop()
asyncio.set_event_loop(new_loop)
async def run_tts():
try:
audio_stream = elevenlabs_client.generate(
voice=voice_config.elevenlabs_voice,
output_format="pcm_24000",
text=sync_gen, # synchronous generator expected by ElevenLabs
model=voice_config.elevenlabs_model,
voice_settings=None,
stream=True,
)
for chunk in audio_stream:
base64_audio = base64.b64encode(chunk).decode("utf-8")
# Schedule sending the audio chunk in the main event loop.
event = {"type": "response.audio.delta", "delta": base64_audio}
send_event(client_websocket, event, main_loop, "")
event = {"type": "response.done"}
send_event(client_websocket, event, main_loop, "")
except Exception as e: # noqa: BLE001
logger.error(f"Error in TTS processing: {e}")
new_loop.run_until_complete(run_tts())
new_loop.close()
threading.Thread(target=tts_thread, daemon=True).start()
async def forward_to_openai() -> None:
nonlocal openai_realtime_session
try:
num_audio_samples = 0  # base64 audio characters received since the last commit
while True:
message_text = await client_websocket.receive_text()
msg = json.loads(message_text)
if msg.get("type") == "input_audio_buffer.append":
logger.trace(f"buffer_id {msg.get('buffer_id', '')}")
base64_data = msg.get("audio", "")
if not base64_data:
continue
num_audio_samples += len(base64_data)
event = {"type": "input_audio_buffer.append", "audio": base64_data}
await openai_ws.send(json.dumps(event))
log_event(event, "")
if voice_config.barge_in_enabled:
await vad_queue.put(base64_data)
elif msg.get("type") == "input_audio_buffer.commit":
if num_audio_samples > AUDIO_SAMPLE_THRESHOLD:
await openai_ws.send(message_text)
log_event(msg, "")
num_audio_samples = 0
elif msg.get("type") == "langflow.elevenlabs.config":
logger.info(f"langflow.elevenlabs.config {msg}")
voice_config.use_elevenlabs = msg["enabled"]
voice_config.elevenlabs_voice = msg.get("voice_id", voice_config.elevenlabs_voice)
# Update modalities based on TTS choice
modalities = ["text"] if voice_config.use_elevenlabs else ["audio", "text"]
openai_realtime_session["modalities"] = modalities
session_update = {"type": "session.update", "session": openai_realtime_session}
await openai_ws.send(json.dumps(session_update))
log_event(session_update, "")
elif msg.get("type") == "session.update":
openai_realtime_session = update_global_session(msg["session"])
session_update = {"type": "session.update", "session": openai_realtime_session}
await openai_ws.send(json.dumps(session_update))
log_event(session_update, "")
else:
await openai_ws.send(message_text)
log_event(msg, "")
except (WebSocketDisconnect, websockets.ConnectionClosedOK, websockets.ConnectionClosedError):
pass
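The handler above dispatches on the client message's `type` field. The JSON shapes it expects can be sketched as follows (field values are illustrative, and the voice ID is hypothetical):

```python
import base64
import json

# Illustrative client -> server messages handled by forward_to_openai above.
pcm_chunk = b"\x00\x00" * 480  # 20 ms of silence at 24 kHz, 16-bit mono

append_msg = {
    "type": "input_audio_buffer.append",
    "audio": base64.b64encode(pcm_chunk).decode("utf-8"),
}
commit_msg = {"type": "input_audio_buffer.commit"}
elevenlabs_msg = {
    "type": "langflow.elevenlabs.config",
    "enabled": True,
    "voice_id": "example-voice-id",  # hypothetical ID
}

# Messages travel as JSON text over the websocket.
wire = json.dumps(append_msg)
decoded = json.loads(wire)
```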
async def forward_to_client() -> None:
nonlocal bot_speaking_flag, text_delta_queue, text_delta_task
function_call = None
function_call_args = ""
conversation_id = str(uuid4())
# Store function call tasks to prevent garbage collection
function_call_tasks = []
try:
while True:
data = await openai_ws.recv()
event = json.loads(data)
event_type = event.get("type")
do_forward = True
do_forward = do_forward and not (event_type == "response.done" and voice_config.use_elevenlabs)
do_forward = do_forward and not event_type.startswith("flow.")
if do_forward:
await client_websocket.send_text(data)
if event_type == "response.text.delta":
if voice_config.use_elevenlabs:
delta = event.get("delta", "")
await text_delta_queue.put(delta)
if text_delta_task is None:
text_delta_task = asyncio.create_task(process_text_deltas(text_delta_queue))
elif event_type == "response.text.done":
if voice_config.use_elevenlabs:
await text_delta_queue.put(None)
if text_delta_task and not text_delta_task.done():
await text_delta_task
text_delta_task = None
try:
message_text = event.get("text", "")
await add_message_to_db(message_text, session, flow_id, session_id, "Machine", "AI")
except ValueError as e:
logger.error(f"Error saving message to database (ValueError): {e}")
logger.error(traceback.format_exc())
except (KeyError, AttributeError, TypeError) as e:
logger.error(f"Error saving message to database: {e}")
logger.error(traceback.format_exc())
elif event_type == "response.output_item.added":
bot_speaking_flag[0] = True
item = event.get("item", {})
if item.get("type") == "function_call":
function_call = item
function_call_args = ""
elif event_type == "response.output_item.done":
try:
transcript = extract_transcript(event)
if transcript and transcript.strip():
await add_message_to_db(transcript, session, flow_id, session_id, "Machine", "AI")
except ValueError as e:
logger.error(f"Error saving message to database (ValueError): {e}")
logger.error(traceback.format_exc())
except (KeyError, AttributeError, TypeError) as e:
logger.error(f"Error saving message to database: {e}")
logger.error(traceback.format_exc())
bot_speaking_flag[0] = False
elif event_type == "response.function_call_arguments.delta":
function_call_args += event.get("delta", "")
elif event_type == "response.function_call_arguments.done":
if function_call:
# Create and store reference to the task
function_call_task = asyncio.create_task(
handle_function_call(
client_websocket,
openai_ws,
function_call,
function_call_args,
flow_id,
background_tasks,
current_user,
conversation_id,
)
)
# Store the task reference to prevent garbage collection
function_call_tasks.append(function_call_task)
# Clean up completed tasks periodically
function_call_tasks = [t for t in function_call_tasks if not t.done()]
function_call = None
function_call_args = ""
elif event_type == "response.audio.delta":
# No audio deltas arrive from OpenAI when ElevenLabs is used
# (modalities is ["text"]), so there is nothing to do here.
pass
elif event_type == "conversation.item.input_audio_transcription.completed":
try:
message_text = event.get("transcript", "")
if message_text and message_text.strip():
await add_message_to_db(message_text, session, flow_id, session_id, "User", "User")
except ValueError as e:
logger.error(f"Error saving message to database (ValueError): {e}")
logger.error(traceback.format_exc())
except (KeyError, AttributeError, TypeError) as e:
logger.error(f"Error saving message to database: {e}")
logger.error(traceback.format_exc())
elif event_type == "error":
pass
else:
await client_websocket.send_text(data)
log_event(event, "")
except (WebSocketDisconnect, websockets.ConnectionClosedOK, websockets.ConnectionClosedError):
pass
# Keep a reference to the VAD task so it is not garbage collected.
vad_task = None
if voice_config.barge_in_enabled:
    vad_task = asyncio.create_task(process_vad_audio())
await asyncio.gather(
forward_to_openai(),
forward_to_client(),
)
except Exception as e: # noqa: BLE001
logger.error(f"Unexpected error in flow_as_tool websocket: {e}")
logger.error(traceback.format_exc())
finally:
# Ensure that the client websocket is closed.
try:
await client_websocket.close()
except Exception as e: # noqa: BLE001
logger.debug(f"Error closing client websocket: {e}")
logger.info("Client websocket cleanup complete.")
# Make sure to clean up the task
if vad_task and not vad_task.done():
vad_task.cancel()
@router.get("/elevenlabs/voice_ids")
async def get_elevenlabs_voice_ids(
current_user: CurrentActiveUser,
session: DbSession,
):
"""Get available voice IDs from ElevenLabs API."""
try:
# Get or create the ElevenLabs client
elevenlabs_client = await get_or_create_elevenlabs_client(current_user.id, session)
if elevenlabs_client is None:
return {"error": "ElevenLabs API key not found or invalid"}
voices_response = elevenlabs_client.voices.get_all()
voices = voices_response.voices
return [
{
"voice_id": voice.voice_id,
"name": voice.name,
}
for voice in voices
]
except ValueError as e:
logger.error(f"Error fetching ElevenLabs voices (ValueError): {e}")
return {"error": str(e)}
except requests.RequestException as e:
logger.error(f"Error fetching ElevenLabs voices (RequestException): {e}")
return {"error": str(e)}
except (KeyError, AttributeError, TypeError) as e:
logger.error(f"Error fetching ElevenLabs voices: {e}")
logger.error(traceback.format_exc())
return {"error": str(e)}
# Manager that lazily resolves the ElevenLabs API key and caches a single client.
class ElevenLabsClientManager:
_instance = None
_api_key = None
@classmethod
async def get_client(cls, user_id=None, session=None):
"""Get or create an ElevenLabs client with the API key."""
if cls._instance is None:
if cls._api_key is None and user_id and session:
variable_service = get_variable_service()
try:
cls._api_key = await variable_service.get_variable(
user_id=user_id,
name="ELEVENLABS_API_KEY",
field="elevenlabs_api_key",
session=session,
)
except (InvalidToken, ValueError) as e:
logger.error(f"Error with ElevenLabs API key: {e}")
cls._api_key = os.getenv("ELEVENLABS_API_KEY", "")
if not cls._api_key:
logger.error("ElevenLabs API key not found")
return None
except (KeyError, AttributeError, sqlalchemy.exc.SQLAlchemyError) as e:
logger.error(f"Exception getting ElevenLabs API key: {e}")
return None
if cls._api_key:
cls._instance = ElevenLabs(api_key=cls._api_key)
return cls._instance
# Backward-compatible wrapper that delegates to the manager.
async def get_or_create_elevenlabs_client(user_id=None, session=None):
"""Get or create an ElevenLabs client with the API key."""
return await ElevenLabsClientManager.get_client(user_id, session)
# Global dictionary to track the last sender for each session (identified by queue_key)
last_sender_by_session: defaultdict[str, str | None] = defaultdict(lambda: None)
async def wait_for_sender_change(queue_key, current_sender, timeout=5):
"""Wait until the last sender for this session differs from current_sender,
or until the timeout expires.
"""
waited = 0
interval = 0.05
while last_sender_by_session[queue_key] == current_sender and waited < timeout:
await asyncio.sleep(interval)
waited += interval
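The wait above is a simple poll: it sleeps in small intervals until another party updates the shared sender map or the timeout elapses. A self-contained sketch of that behavior (keys and senders are illustrative):

```python
import asyncio
from collections import defaultdict

# Stand-alone sketch of the polling wait above.
last_sender_by_session = defaultdict(lambda: None)

async def wait_for_sender_change(queue_key, current_sender, timeout=5):
    waited = 0.0
    interval = 0.05
    while last_sender_by_session[queue_key] == current_sender and waited < timeout:
        await asyncio.sleep(interval)
        waited += interval

async def demo():
    key = "flow:session"
    last_sender_by_session[key] = "AI"

    async def flip():
        # Simulate the other party updating the sender shortly afterwards.
        await asyncio.sleep(0.1)
        last_sender_by_session[key] = "User"

    task = asyncio.create_task(flip())
    await wait_for_sender_change(key, "AI", timeout=2)
    await task
    return last_sender_by_session[key]

result = asyncio.run(demo())
```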
async def add_message_to_db(message, session, flow_id, session_id, sender, sender_name):
"""Enforce alternating sequence by checking the last sender.
If two consecutive messages come from the same party (e.g. AI/AI), wait briefly.
"""
queue_key = f"{flow_id}:{session_id}"
# If the incoming sender is the same as the last recorded sender,
# wait for a change (with a timeout as a fallback).
if last_sender_by_session[queue_key] == sender:
await wait_for_sender_change(queue_key, sender, timeout=5)
last_sender_by_session[queue_key] = sender
# Now proceed to create the message
message_obj = MessageTable(
text=message,
sender=sender,
sender_name=sender_name,
session_id=session_id,
files=[],
flow_id=uuid.UUID(flow_id) if isinstance(flow_id, str) else flow_id,
properties=Properties().model_dump(),
content_blocks=[],
category="audio",
)
await message_queues[queue_key].put(message_obj)
# Start a processor task for this session's queue if one is not already running.
if queue_key not in message_tasks or message_tasks[queue_key].done():
message_tasks[queue_key] = asyncio.create_task(process_message_queue(queue_key, session))
async def process_message_queue(queue_key, session):
"""Process messages from the queue one by one."""
try:
while True:
message = await message_queues[queue_key].get()
try:
await aadd_messagetables([message], session)
logger.debug(f"Added message to DB: {message.text[:30]}...")
except ValueError as e:
logger.error(f"Error saving message to database (ValueError): {e}")
logger.error(traceback.format_exc())
except sqlalchemy.exc.SQLAlchemyError as e:
logger.error(f"Error saving message to database (SQLAlchemyError): {e}")
logger.error(traceback.format_exc())
except (KeyError, AttributeError, TypeError) as e:
logger.error(f"Error saving message to database: {e}")
logger.error(traceback.format_exc())
finally:
message_queues[queue_key].task_done()
if message_queues[queue_key].empty():
break
except Exception as e: # noqa: BLE001
logger.debug(f"Message queue processor for {queue_key} stopped: {e}")
logger.error(traceback.format_exc())
def extract_transcript(json_data):
    """Return the transcript of the first audio content item, or "" if absent."""
    try:
        content_list = json_data.get("item", {}).get("content", [])
        for content_item in content_list:
            if content_item.get("type") == "audio":
                return content_item.get("transcript", "")
    except (KeyError, TypeError, AttributeError) as e:
        logger.debug(f"Error extracting transcript: {e}")
    return ""
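The transcript lives under `item.content[*].transcript` for audio items. A quick stand-alone check of that extraction path (the sample event payload is illustrative):

```python
# Stand-alone version of the transcript extraction above, with a sample
# response.output_item.done payload (contents are illustrative).
def extract_transcript(json_data):
    try:
        for content_item in json_data.get("item", {}).get("content", []):
            if content_item.get("type") == "audio":
                return content_item.get("transcript", "")
    except (KeyError, TypeError, AttributeError):
        pass
    return ""

event = {
    "item": {
        "content": [
            {"type": "text", "text": "ignored"},
            {"type": "audio", "transcript": "Hello there."},
        ]
    }
}
# extract_transcript(event) == "Hello there."; missing keys fall back to "".
```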


@@ -1,15 +1,21 @@
import asyncio
from collections.abc import Awaitable, Callable
from typing import Any
from pydantic import Field, create_model
from langflow.helpers.base_model import BaseModel
def create_tool_coroutine(tool_name: str, arg_schema: type[BaseModel], session) -> Callable[[dict], Awaitable]:
async def tool_coroutine(*args):
if len(args) == 0:
msg = f"at least one positional argument is required {args}"
async def tool_coroutine(*args, **kwargs):
fields = arg_schema.model_fields.keys()
expected_field_count = len(fields)
if len(args) + len(kwargs) != expected_field_count:
msg = f"{expected_field_count} arguments are required. Received: {args} {kwargs}"
raise ValueError(msg)
arg_dict = dict(zip(arg_schema.model_fields.keys(), args, strict=False))
arg_dict = dict(zip(fields, args, strict=False))
arg_dict.update(kwargs)
return await session.call_tool(tool_name, arguments=arg_dict)
return tool_coroutine
@@ -24,3 +30,43 @@ def create_tool_func(tool_name: str, session) -> Callable[..., str]:
return loop.run_until_complete(session.call_tool(tool_name, arguments=kwargs))
return tool_func
def create_input_schema_from_json_schema(schema: dict[str, Any]) -> type[BaseModel]:
"""Converts a JSON schema into a Pydantic model dynamically.
:param schema: The JSON schema as a dictionary.
:return: A Pydantic model class.
"""
if schema.get("type") != "object":
msg = "JSON schema must be of type 'object' at the root level."
raise ValueError(msg)
fields = {}
properties = schema.get("properties", {})
required_fields = set(schema.get("required", []))
for field_name, field_def in properties.items():
# Extract type
field_type_str = field_def.get("type", "str") # Default to string type if not specified
field_type = {
"string": str,
"str": str,
"integer": int,
"int": int,
"number": float,
"boolean": bool,
"array": list,
"object": dict,
}.get(field_type_str, Any)
# Extract description and default if present
field_metadata = {"description": field_def.get("description", "")}
if field_name not in required_fields:
field_metadata["default"] = field_def.get("default", None)
# Create Pydantic field
fields[field_name] = (field_type, Field(**field_metadata))
# Dynamically create the model
return create_model("InputSchema", **fields)
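The conversion above hinges on mapping JSON-schema type names to Python types and on the `required` set. A stdlib-only sketch of that mapping step (the Pydantic model creation itself is omitted):

```python
from typing import Any

# The JSON-schema type-name mapping used above, extracted for illustration.
JSON_TO_PY = {
    "string": str, "str": str,
    "integer": int, "int": int,
    "number": float,
    "boolean": bool,
    "array": list,
    "object": dict,
}

def field_specs(schema: dict) -> dict:
    """Map each property to (python_type, required), as the converter above does."""
    required = set(schema.get("required", []))
    return {
        name: (JSON_TO_PY.get(spec.get("type", "str"), Any), name in required)
        for name, spec in schema.get("properties", {}).items()
    }

specs = field_specs({
    "type": "object",
    "properties": {"input": {"type": "string"}, "count": {"type": "integer"}},
    "required": ["input"],
})
# specs == {"input": (str, True), "count": (int, False)}
```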


@@ -30,6 +30,7 @@ class LCChatMemoryComponent(Component):
raise ValueError(msg)
def build_base_memory(self) -> BaseChatMemory:
"""Builds the base memory."""
return ConversationBufferMemory(chat_memory=self.build_message_history())
@abstractmethod


@@ -3,11 +3,11 @@ import asyncio
from contextlib import AsyncExitStack
import httpx
from langchain_core.tools import StructuredTool
from mcp import ClientSession, types
from mcp.client.sse import sse_client
from langflow.base.mcp.util import create_tool_coroutine, create_tool_func
from langflow.components.tools.mcp_stdio import create_input_schema_from_json_schema
from langflow.base.mcp.util import create_input_schema_from_json_schema, create_tool_coroutine, create_tool_func
from langflow.custom import Component
from langflow.field_typing import Tool
from langflow.io import MessageTextInput, Output
@@ -32,6 +32,17 @@ class MCPSseClient:
return response.headers.get("Location") # Return the redirect URL
return url # Return the original URL if no redirect
async def _connect_with_timeout(
self, url: str, headers: dict[str, str] | None, timeout_seconds: int, sse_read_timeout_seconds: int
):
"""Connect to the SSE server with timeout."""
sse_transport = await self.exit_stack.enter_async_context(
sse_client(url, headers, timeout_seconds, sse_read_timeout_seconds)
)
self.sse, self.write = sse_transport
self.session = await self.exit_stack.enter_async_context(ClientSession(self.sse, self.write))
await self.session.initialize()
async def connect_to_server(
self, url: str, headers: dict[str, str] | None, timeout_seconds: int = 500, sse_read_timeout_seconds: int = 500
):
@@ -51,18 +62,7 @@
except asyncio.TimeoutError as err:
error_message = f"Connection to {url} timed out after {timeout_seconds} seconds"
raise TimeoutError(error_message) from err
else: # Only executed if no TimeoutError
return response.tools
async def _connect_with_timeout(
self, url: str, headers: dict[str, str] | None, timeout_seconds: int, sse_read_timeout_seconds: int
):
sse_transport = await self.exit_stack.enter_async_context(
sse_client(url, headers, timeout_seconds, sse_read_timeout_seconds)
)
self.sse, self.write = sse_transport
self.session = await self.exit_stack.enter_async_context(ClientSession(self.sse, self.write))
await self.session.initialize()
return response.tools
class MCPSse(Component):
@@ -98,12 +98,12 @@ class MCPSse(Component):
for tool in self.tools:
args_schema = create_input_schema_from_json_schema(tool.inputSchema)
tool_list.append(
Tool(
StructuredTool(
name=tool.name, # maybe format this
description=tool.description,
args_schema=args_schema,
coroutine=create_tool_coroutine(tool.name, args_schema, self.client.session),
func=create_tool_func(tool.name, self.client.session),
coroutine=create_tool_coroutine(tool.name, args_schema, self.client.session),
)
)


@@ -1,13 +1,12 @@
# from langflow.field_typing import Data
import os
from contextlib import AsyncExitStack
from typing import Any
from langchain_core.tools import StructuredTool
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client
from pydantic import BaseModel, Field, create_model
from langflow.base.mcp.util import create_tool_coroutine, create_tool_func
from langflow.base.mcp.util import create_input_schema_from_json_schema, create_tool_coroutine, create_tool_func
from langflow.custom import Component
from langflow.field_typing import Tool
from langflow.io import MessageTextInput, Output
@@ -36,46 +35,6 @@ class MCPStdioClient:
return response.tools
def create_input_schema_from_json_schema(schema: dict[str, Any]) -> type[BaseModel]:
"""Converts a JSON schema into a Pydantic model dynamically.
:param schema: The JSON schema as a dictionary.
:return: A Pydantic model class.
"""
if schema.get("type") != "object":
msg = "JSON schema must be of type 'object' at the root level."
raise ValueError(msg)
fields = {}
properties = schema.get("properties", {})
required_fields = set(schema.get("required", []))
for field_name, field_def in properties.items():
# Extract type
field_type_str = field_def.get("type", "str") # Default to string type if not specified
field_type = {
"string": str,
"str": str,
"integer": int,
"int": int,
"number": float,
"boolean": bool,
"array": list,
"object": dict,
}.get(field_type_str, Any)
# Extract description and default if present
field_metadata = {"description": field_def.get("description", "")}
if field_name not in required_fields:
field_metadata["default"] = field_def.get("default", None)
# Create Pydantic field
fields[field_name] = (field_type, Field(**field_metadata))
# Dynamically create the model
return create_model("InputSchema", **fields)
class MCPStdio(Component):
client = MCPStdioClient()
tools = types.ListToolsResult
@@ -111,11 +70,12 @@ class MCPStdio(Component):
for tool in self.tools:
args_schema = create_input_schema_from_json_schema(tool.inputSchema)
tool_list.append(
Tool(
StructuredTool(
name=tool.name,
description=tool.description,
coroutine=create_tool_coroutine(tool.name, args_schema, self.client.session),
args_schema=args_schema,
func=create_tool_func(tool.name, self.client.session),
coroutine=create_tool_coroutine(tool.name, args_schema, self.client.session),
)
)
self.tool_names = [tool.name for tool in self.tools]


@@ -1,9 +1,11 @@
# mypy: ignore-errors
import ast
import asyncio
import contextlib
import inspect
import re
import traceback
from pathlib import Path
from typing import Any
from uuid import UUID
@@ -560,3 +562,134 @@ async def update_component_build_config(
if inspect.iscoroutinefunction(component.update_build_config):
return await component.update_build_config(build_config, field_value, field_name)
return await asyncio.to_thread(component.update_build_config, build_config, field_value, field_name)
async def get_all_types_dict(components_paths: list[str]):
"""Get all types dictionary with full component loading."""
# This is the async version of the existing function
return await abuild_custom_components(components_paths=components_paths)
async def get_single_component_dict(component_type: str, component_name: str, components_paths: list[str]):
"""Get a single component dictionary."""
# For example, if components are loaded by importing Python modules:
for base_path in components_paths:
module_path = Path(base_path) / component_type / f"{component_name}.py"
if module_path.exists():
# Try to import the module
module_name = f"langflow.components.{component_type}.{component_name}"
try:
# This is a simplified example - actual implementation may vary
import importlib.util
spec = importlib.util.spec_from_file_location(module_name, module_path)
if spec and spec.loader:
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
if hasattr(module, "template"):
return module.template
except ImportError as e:
logger.error(f"Import error loading component {module_path}: {e!s}")
except AttributeError as e:
logger.error(f"Attribute error loading component {module_path}: {e!s}")
except ValueError as e:
logger.error(f"Value error loading component {module_path}: {e!s}")
except (KeyError, IndexError) as e:
logger.error(f"Data structure error loading component {module_path}: {e!s}")
except RuntimeError as e:
logger.error(f"Runtime error loading component {module_path}: {e!s}")
logger.debug("Full traceback for runtime error", exc_info=True)
except OSError as e:
logger.error(f"OS error loading component {module_path}: {e!s}")
# If we get here, the component wasn't found or couldn't be loaded
return None
async def load_custom_component(component_name: str, components_paths: list[str]):
"""Load a custom component by name.
Args:
component_name: Name of the component to load
components_paths: List of paths to search for components
"""
from langflow.interface.custom_component import get_custom_component_from_name
try:
# First try to get the component from the registered components
component_class = get_custom_component_from_name(component_name)
if component_class:
# Define the function locally if it's not imported
def get_custom_component_template(component_cls):
"""Get template for a custom component class."""
# This is a simplified implementation - adjust as needed
if hasattr(component_cls, "get_template"):
return component_cls.get_template()
if hasattr(component_cls, "template"):
return component_cls.template
return None
return get_custom_component_template(component_class)
# If not found in registered components, search in the provided paths
for path in components_paths:
# Try to find the component in different category directories
base_path = Path(path)
if base_path.exists() and base_path.is_dir():
# Search for the component in all subdirectories
for category_dir in base_path.iterdir():
if category_dir.is_dir():
component_file = category_dir / f"{component_name}.py"
if component_file.exists():
# Try to import the module
module_name = f"langflow.components.{category_dir.name}.{component_name}"
try:
import importlib.util
spec = importlib.util.spec_from_file_location(module_name, component_file)
if spec and spec.loader:
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
if hasattr(module, "template"):
return module.template
if hasattr(module, "get_template"):
return module.get_template()
except ImportError as e:
logger.error(f"Import error loading component {component_file}: {e!s}")
logger.debug("Import error traceback", exc_info=True)
except AttributeError as e:
logger.error(f"Attribute error loading component {component_file}: {e!s}")
logger.debug("Attribute error traceback", exc_info=True)
except (ValueError, TypeError) as e:
logger.error(f"Value/Type error loading component {component_file}: {e!s}")
logger.debug("Value/Type error traceback", exc_info=True)
except (KeyError, IndexError) as e:
logger.error(f"Data structure error loading component {component_file}: {e!s}")
logger.debug("Data structure error traceback", exc_info=True)
except RuntimeError as e:
logger.error(f"Runtime error loading component {component_file}: {e!s}")
logger.debug("Runtime error traceback", exc_info=True)
except OSError as e:
logger.error(f"OS error loading component {component_file}: {e!s}")
logger.debug("OS error traceback", exc_info=True)
except ImportError as e:
logger.error(f"Import error loading custom component {component_name}: {e!s}")
return None
except AttributeError as e:
logger.error(f"Attribute error loading custom component {component_name}: {e!s}")
return None
except ValueError as e:
logger.error(f"Value error loading custom component {component_name}: {e!s}")
return None
except (KeyError, IndexError) as e:
logger.error(f"Data structure error loading custom component {component_name}: {e!s}")
return None
except RuntimeError as e:
logger.error(f"Runtime error loading custom component {component_name}: {e!s}")
logger.debug("Full traceback for runtime error", exc_info=True)
return None
# If we get here, the component wasn't found in any of the paths
logger.warning(f"Component {component_name} not found in any of the provided paths")
return None


@@ -702,6 +702,16 @@ class Vertex:
event_manager: EventManager | None = None,
**kwargs,
) -> Any:
# Add lazy loading check at the beginning
# Check if we need to fully load this component first
from langflow.interface.components import ensure_component_loaded
from langflow.services.deps import get_settings_service
if get_settings_service().settings.lazy_load_components:
component_name = self.id.split("-")[0]
await ensure_component_loaded(self.vertex_type, component_name, get_settings_service())
# Continue with the original implementation
async with self._lock:
if self.state == VertexStates.INACTIVE:
# If the vertex is inactive, return None

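The hunk above gates each vertex build on a lazy-load check before the usual locked build path: if `lazy_load_components` is on, the component name is derived from the vertex id prefix and loaded on demand. A minimal self-contained sketch of that gate (the cache and `ensure_component_loaded` here are stand-ins, not Langflow's actual implementation):

```python
import asyncio

# Hypothetical in-memory cache standing in for Langflow's component cache
_cache: dict[str, dict] = {"ChatInput": {"lazy_loaded": True}}

async def ensure_component_loaded(name: str) -> None:
    """Replace a lazy stub with a 'fully loaded' entry (sketch only)."""
    entry = _cache.get(name)
    if entry and entry.pop("lazy_loaded", False):
        entry["template"] = {"inputs": {}}  # pretend we loaded the real template

async def build_vertex(vertex_id: str, *, lazy_load: bool) -> dict:
    # Mirror the diff: derive the component name from the vertex id prefix
    if lazy_load:
        component_name = vertex_id.split("-")[0]
        await ensure_component_loaded(component_name)
    return _cache.get(vertex_id.split("-")[0], {})

result = asyncio.run(build_vertex("ChatInput-abc123", lazy_load=True))
```

After the call, the stub's `lazy_loaded` flag is gone and the full entry is in place, which is the invariant the real `ensure_component_loaded` maintains.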

@@ -1,24 +1,280 @@
from __future__ import annotations
import json
from typing import TYPE_CHECKING
from pathlib import Path
from typing import TYPE_CHECKING, Any
from loguru import logger
from langflow.custom.utils import abuild_custom_components, build_custom_components
from langflow.custom.utils import abuild_custom_components
if TYPE_CHECKING:
from langflow.services.settings.service import SettingsService
async def aget_all_types_dict(components_paths):
"""Get all types dictionary combining native and custom components."""
# Create a class to manage component cache instead of using globals
class ComponentCache:
def __init__(self):
self.all_types_dict: dict[str, Any] | None = None
self.fully_loaded_components: dict[str, bool] = {}
# Singleton instance
component_cache = ComponentCache()
async def get_and_cache_all_types_dict(
settings_service: SettingsService,
):
"""Get and cache the types dictionary, with partial loading support."""
if component_cache.all_types_dict is None:
logger.debug("Building langchain types dict")
if settings_service.settings.lazy_load_components:
# Partial loading mode - just load component metadata
logger.debug("Using partial component loading")
component_cache.all_types_dict = await aget_component_metadata(settings_service.settings.components_path)
else:
# Traditional full loading
component_cache.all_types_dict = await aget_all_types_dict(settings_service.settings.components_path)
# Log loading stats
component_count = sum(len(comps) for comps in component_cache.all_types_dict.get("components", {}).values())
logger.debug(f"Loaded {component_count} components")
return component_cache.all_types_dict
async def aget_all_types_dict(components_paths: list[str]):
"""Get all types dictionary with full component loading."""
return await abuild_custom_components(components_paths=components_paths)
def get_all_types_dict(components_paths):
"""Get all types dictionary combining native and custom components."""
return build_custom_components(components_paths=components_paths)
async def aget_component_metadata(components_paths: list[str]):
"""Get just the metadata for all components without loading full templates."""
# This builds a skeleton of the all_types_dict with just basic component info
components_dict: dict = {"components": {}}
# Get all component types
component_types = await discover_component_types(components_paths)
logger.debug(f"Discovered {len(component_types)} component types: {', '.join(component_types)}")
# For each component type directory
for component_type in component_types:
components_dict["components"][component_type] = {}
# Get list of components in this type
component_names = await discover_component_names(component_type, components_paths)
logger.debug(f"Found {len(component_names)} components for type {component_type}")
# Create stub entries with just basic metadata
for name in component_names:
# Get minimal metadata for component
metadata = await get_component_minimal_metadata(component_type, name, components_paths)
if metadata:
components_dict["components"][component_type][name] = metadata
# Mark as needing full loading
components_dict["components"][component_type][name]["lazy_loaded"] = True
return components_dict
async def discover_component_types(components_paths: list[str]) -> list[str]:
"""Discover available component types by scanning directories."""
component_types: set[str] = set()
for path in components_paths:
path_obj = Path(path)
if not path_obj.exists():
continue
for item in path_obj.iterdir():
# Only include directories that don't start with _ or .
if item.is_dir() and not item.name.startswith(("_", ".")):
component_types.add(item.name)
# Add known types that might not be in directories
standard_types = {
"agents",
"chains",
"embeddings",
"llms",
"memories",
"prompts",
"tools",
"retrievers",
"textsplitters",
"toolkits",
"utilities",
"vectorstores",
"custom_components",
"documentloaders",
"outputparsers",
"wrappers",
}
component_types.update(standard_types)
return sorted(component_types)
async def discover_component_names(component_type: str, components_paths: list[str]) -> list[str]:
"""Discover component names for a specific type by scanning directories."""
component_names: set[str] = set()
for path in components_paths:
type_dir = Path(path) / component_type
if type_dir.exists():
for filename in type_dir.iterdir():
# Get Python files that don't start with __
if filename.name.endswith(".py") and not filename.name.startswith("__"):
component_name = filename.name[:-3] # Remove .py extension
component_names.add(component_name)
return sorted(component_names)
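`discover_component_names` is just a filesystem glob: every `*.py` file under each type directory, minus dunder modules. The same scan, runnable against a throwaway directory tree (paths below are invented for the demo):

```python
import tempfile
from pathlib import Path

def discover_names(component_type: str, components_paths: list[str]) -> list[str]:
    """Collect module names (minus .py) for one component type, as the diff does."""
    names: set[str] = set()
    for path in components_paths:
        type_dir = Path(path) / component_type
        if type_dir.exists():
            for f in type_dir.iterdir():
                if f.name.endswith(".py") and not f.name.startswith("__"):
                    names.add(f.name[:-3])  # strip the .py extension
    return sorted(names)

root = tempfile.mkdtemp()
tools_dir = Path(root) / "tools"
tools_dir.mkdir()
for fname in ("calculator.py", "search.py", "__init__.py"):
    (tools_dir / fname).touch()

names = discover_names("tools", [root])  # __init__.py is excluded
```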
async def get_component_minimal_metadata(component_type: str, component_name: str, components_paths: list[str]):
"""Extract minimal metadata for a component without loading its full implementation."""
# Create a more complete metadata structure that the UI needs
metadata = {
"display_name": component_name.replace("_", " ").title(),
"name": component_name,
"type": component_type,
"description": f"A {component_type} component (not fully loaded)",
"template": {
"_type": component_type,
"inputs": {},
"outputs": {},
"output_types": [],
"documentation": f"A {component_type} component",
"display_name": component_name.replace("_", " ").title(),
"base_classes": [component_type],
},
}
# Try to find the file to verify it exists
component_path = None
for path in components_paths:
candidate_path = Path(path) / component_type / f"{component_name}.py"
if candidate_path.exists():
component_path = candidate_path
break
if not component_path:
return None
return metadata
async def ensure_component_loaded(component_type: str, component_name: str, settings_service: SettingsService):
"""Ensure a component is fully loaded if it was only partially loaded."""
# If already fully loaded, return immediately
component_key = f"{component_type}:{component_name}"
if component_key in component_cache.fully_loaded_components:
return
# If we don't have a cache or the component doesn't exist in the cache, nothing to do
if (
not component_cache.all_types_dict
or "components" not in component_cache.all_types_dict
or component_type not in component_cache.all_types_dict["components"]
or component_name not in component_cache.all_types_dict["components"][component_type]
):
return
# Check if component is marked for lazy loading
if component_cache.all_types_dict["components"][component_type][component_name].get("lazy_loaded", False):
logger.debug(f"Fully loading component {component_type}:{component_name}")
# Load just this specific component
full_component = await load_single_component(
component_type, component_name, settings_service.settings.components_path
)
if full_component:
# Replace the stub with the fully loaded component
component_cache.all_types_dict["components"][component_type][component_name] = full_component
# Remove lazy_loaded flag if it exists
if "lazy_loaded" in component_cache.all_types_dict["components"][component_type][component_name]:
del component_cache.all_types_dict["components"][component_type][component_name]["lazy_loaded"]
# Mark as fully loaded
component_cache.fully_loaded_components[component_key] = True
logger.debug(f"Component {component_type}:{component_name} fully loaded")
else:
logger.warning(f"Failed to fully load component {component_type}:{component_name}")
async def load_single_component(component_type: str, component_name: str, components_paths: list[str]):
"""Load a single component fully."""
from langflow.custom.utils import get_single_component_dict
try:
# Delegate to a more specific function that knows how to load
# a single component of a specific type
return await get_single_component_dict(component_type, component_name, components_paths)
except (ImportError, ModuleNotFoundError) as e:
# Handle issues with importing the component or its dependencies
logger.error(f"Import error loading component {component_type}:{component_name}: {e!s}")
return None
except (AttributeError, TypeError) as e:
# Handle issues with component structure or type errors
logger.error(f"Component structure error for {component_type}:{component_name}: {e!s}")
return None
except FileNotFoundError as e:
# Handle missing files
logger.error(f"File not found for component {component_type}:{component_name}: {e!s}")
return None
except ValueError as e:
# Handle invalid values or configurations
logger.error(f"Invalid configuration for component {component_type}:{component_name}: {e!s}")
return None
except (KeyError, IndexError) as e:
# Handle data structure access errors
logger.error(f"Data structure error for component {component_type}:{component_name}: {e!s}")
return None
except RuntimeError as e:
# Handle runtime errors
logger.error(f"Runtime error loading component {component_type}:{component_name}: {e!s}")
logger.debug("Full traceback for runtime error", exc_info=True)
return None
except OSError as e:
# Handle OS-related errors (file system, permissions, etc.)
logger.error(f"OS error loading component {component_type}:{component_name}: {e!s}")
return None
# Also add a utility function to load specific component types
async def get_type_dict(component_type: str, settings_service: SettingsService | None = None):
"""Get a specific component type dictionary, loading if needed."""
if settings_service is None:
# Import here to avoid circular imports
from langflow.services.deps import get_settings_service
settings_service = get_settings_service()
# Make sure all_types_dict is loaded (at least partially)
if component_cache.all_types_dict is None:
await get_and_cache_all_types_dict(settings_service)
# Check if component type exists in the cache
if (
component_cache.all_types_dict
and "components" in component_cache.all_types_dict
and component_type in component_cache.all_types_dict["components"]
):
# If in lazy mode, ensure all components of this type are fully loaded
if settings_service.settings.lazy_load_components:
for component_name in list(component_cache.all_types_dict["components"][component_type].keys()):
await ensure_component_loaded(component_type, component_name, settings_service)
return component_cache.all_types_dict["components"][component_type]
return {}
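Under lazy loading the cache initially holds only stubs flagged `lazy_loaded: True`, which `ensure_component_loaded` later swaps for real entries. A condensed sketch of the skeleton entry that `aget_component_metadata` builds (the display names and descriptions are illustrative):

```python
def build_stub(component_type: str, name: str) -> dict:
    """Minimal metadata entry, as in the lazy-loading path of the diff."""
    title = name.replace("_", " ").title()
    return {
        "display_name": title,
        "name": name,
        "type": component_type,
        "description": f"A {component_type} component (not fully loaded)",
        "template": {"_type": component_type, "inputs": {}, "outputs": {}},
        "lazy_loaded": True,  # marks the entry as needing a full load later
    }

components = {"components": {"tools": {n: build_stub("tools", n) for n in ("web_search",)}}}
stub = components["components"]["tools"]["web_search"]
```

The stub is enough for the UI to list the component; the full template is only materialized when the component is first built.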
# TypeError: unhashable type: 'list'
@@ -43,7 +299,10 @@ async def aget_all_components(components_paths, *, as_dict=False):
def get_all_components(components_paths, *, as_dict=False):
"""Get all components names combining native and custom components."""
all_types_dict = get_all_types_dict(components_paths)
# Import here to avoid circular imports
from langflow.custom.utils import build_custom_components
all_types_dict = build_custom_components(components_paths=components_paths)
components = [] if not as_dict else {}
for category in all_types_dict.values():
for component in category.values():
@@ -53,17 +312,3 @@ def get_all_components(components_paths, *, as_dict=False):
else:
components.append(component)
return components
all_types_dict_cache = None
async def get_and_cache_all_types_dict(
settings_service: SettingsService,
):
global all_types_dict_cache # noqa: PLW0603
if all_types_dict_cache is None:
logger.debug("Building langchain types dict")
all_types_dict_cache = await aget_all_types_dict(settings_service.settings.components_path)
return all_types_dict_cache


@@ -19,7 +19,7 @@ from typing_extensions import NotRequired, override
from langflow.settings import DEV
VALID_LOG_LEVELS = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
VALID_LOG_LEVELS = ["TRACE", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
# Human-readable
DEFAULT_LOG_FORMAT = (
"<green>{time:YYYY-MM-DD HH:mm:ss}</green> - <level>{level: <8}</level> - {module} - <level>{message}</level>"


@@ -125,19 +125,51 @@ def get_lifespan(*, fix_migration=False, version=None):
temp_dirs: list[TemporaryDirectory] = []
sync_flows_from_fs_task = None
try:
start_time = asyncio.get_event_loop().time()
rprint("[bold blue]Initializing services[/bold blue]")
await initialize_services(fix_migration=fix_migration)
rprint(f"✓ Services initialized in {asyncio.get_event_loop().time() - start_time:.2f}s")
current_time = asyncio.get_event_loop().time()
rprint("[bold blue]Setting up LLM caching[/bold blue]")
setup_llm_caching()
rprint(f"✓ LLM caching setup in {asyncio.get_event_loop().time() - current_time:.2f}s")
current_time = asyncio.get_event_loop().time()
rprint("[bold blue]Initializing super user[/bold blue]")
await initialize_super_user_if_needed()
rprint(f"✓ Super user initialized in {asyncio.get_event_loop().time() - current_time:.2f}s")
current_time = asyncio.get_event_loop().time()
rprint("[bold blue]Loading bundles[/bold blue]")
temp_dirs, bundles_components_paths = await load_bundles_with_error_handling()
get_settings_service().settings.components_path.extend(bundles_components_paths)
rprint(f"✓ Bundles loaded in {asyncio.get_event_loop().time() - current_time:.2f}s")
current_time = asyncio.get_event_loop().time()
rprint("[bold blue]Caching types[/bold blue]")
all_types_dict = await get_and_cache_all_types_dict(get_settings_service())
rprint(f"✓ Types cached in {asyncio.get_event_loop().time() - current_time:.2f}s")
current_time = asyncio.get_event_loop().time()
rprint("[bold blue]Creating/updating starter projects[/bold blue]")
await create_or_update_starter_projects(all_types_dict)
rprint(f"✓ Starter projects updated in {asyncio.get_event_loop().time() - current_time:.2f}s")
telemetry_service.start()
current_time = asyncio.get_event_loop().time()
rprint("[bold blue]Loading flows[/bold blue]")
await load_flows_from_directory()
sync_flows_from_fs_task = asyncio.create_task(sync_flows_from_fs())
queue_service = get_queue_service()
if not queue_service.is_started(): # Start if not already started
queue_service.start()
rprint(f"✓ Flows loaded in {asyncio.get_event_loop().time() - current_time:.2f}s")
total_time = asyncio.get_event_loop().time() - start_time
rprint(f"[bold green]✓ Total initialization time: {total_time:.2f}s[/bold green]")
yield
except Exception as exc:
@@ -166,6 +198,7 @@ def create_app():
__version__ = get_version_info()["version"]
rprint("configuring")
configure()
lifespan = get_lifespan(version=__version__)
app = FastAPI(lifespan=lifespan, title="Langflow", version=__version__)

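The lifespan changes above instrument each startup phase with the event loop clock and print per-phase durations. The pattern, reduced to a standalone sketch (the phase names are placeholders for the real initialization steps):

```python
import asyncio

async def timed_startup() -> dict[str, float]:
    """Run startup phases and record per-phase durations, as the diff does with rprint."""
    loop = asyncio.get_running_loop()
    timings: dict[str, float] = {}
    start = loop.time()
    for phase in ("services", "llm_caching", "bundles"):
        phase_start = loop.time()
        await asyncio.sleep(0)  # stand-in for the real initialization work
        timings[phase] = loop.time() - phase_start
    timings["total"] = loop.time() - start
    return timings

timings = asyncio.run(timed_startup())
```

Using `loop.time()` (a monotonic clock) rather than wall-clock time keeps the measurements immune to system clock adjustments.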

@@ -238,6 +238,9 @@ class Settings(BaseSettings):
Default is 24 hours (86400 seconds). Minimum is 600 seconds (10 minutes)."""
event_delivery: Literal["polling", "streaming"] = "polling"
"""How to deliver build events to the frontend. Can be 'polling' or 'streaming'."""
lazy_load_components: bool = False
"""If set to True, Langflow will only partially load components at startup and fully load them on demand.
This significantly reduces startup time but may cause a slight delay when a component is first used."""
@field_validator("dev")
@classmethod

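The new `lazy_load_components` flag is a plain boolean setting. Assuming Langflow's usual `LANGFLOW_`-prefixed environment override (an assumption here, not confirmed by this diff), the truthy-string parsing it relies on looks roughly like:

```python
def lazy_load_enabled(env: dict[str, str]) -> bool:
    # LANGFLOW_LAZY_LOAD_COMPONENTS is an assumed env var name, not confirmed by the diff
    raw = env.get("LANGFLOW_LAZY_LOAD_COMPONENTS", "false")
    return raw.strip().lower() in {"1", "true", "yes"}

enabled = lazy_load_enabled({"LANGFLOW_LAZY_LOAD_COMPONENTS": "true"})
disabled = lazy_load_enabled({})  # defaults to False, matching the setting's default
```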

@@ -0,0 +1,92 @@
import asyncio
import base64
from pathlib import Path
import numpy as np
from scipy.signal import resample
from langflow.logging import logger
SAMPLE_RATE_24K = 24000
VAD_SAMPLE_RATE_16K = 16000
FRAME_DURATION_MS = 20
BYTES_PER_SAMPLE = 2
BYTES_PER_24K_FRAME = int(SAMPLE_RATE_24K * FRAME_DURATION_MS / 1000) * BYTES_PER_SAMPLE
BYTES_PER_16K_FRAME = int(VAD_SAMPLE_RATE_16K * FRAME_DURATION_MS / 1000) * BYTES_PER_SAMPLE
def resample_24k_to_16k(frame_24k_bytes):
"""Resample a 20ms frame from 24kHz to 16kHz.
Args:
frame_24k_bytes: A bytes object containing 20ms of 24kHz audio (960 bytes)
Returns:
A bytes object containing 20ms of 16kHz audio (640 bytes)
Raises:
ValueError: If the input frame is not exactly 960 bytes
"""
if len(frame_24k_bytes) != BYTES_PER_24K_FRAME:
msg = f"Expected exactly {BYTES_PER_24K_FRAME} bytes for 24kHz frame, got {len(frame_24k_bytes)}"
raise ValueError(msg)
# Convert bytes to numpy array of int16
frame_24k = np.frombuffer(frame_24k_bytes, dtype=np.int16)
# Resample from 24kHz to 16kHz (2/3 ratio)
# For a 20ms frame, we go from 480 samples to 320 samples
frame_16k = resample(frame_24k, int(len(frame_24k) * 2 / 3))
# Convert back to int16 and then to bytes
frame_16k = frame_16k.astype(np.int16)
return frame_16k.tobytes()
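The frame constants above follow from rate x duration x sample width: 24 000 Hz x 0.02 s x 2 bytes per sample gives 960 bytes, and 16 000 Hz gives 640. A quick check of that arithmetic:

```python
def bytes_per_frame(sample_rate_hz: int, frame_ms: int = 20, bytes_per_sample: int = 2) -> int:
    """Byte count of one mono 16-bit PCM frame at the given sample rate."""
    return int(sample_rate_hz * frame_ms / 1000) * bytes_per_sample

frame_24k = bytes_per_frame(24000)  # 480 samples -> 960 bytes
frame_16k = bytes_per_frame(16000)  # 320 samples -> 640 bytes
```

The 2/3 ratio between the two (640/960) is exactly the 16k/24k rate ratio the resampler applies.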
# def resample_24k_to_16k(frame_24k_bytes: bytes) -> bytes:
# """
# Convert one 20ms chunk (960 bytes @ 24kHz) to 20ms @ 16kHz (640 bytes).
# Raises ValueError if the frame is not exactly 960 bytes.
# """
# if len(frame_24k_bytes) != BYTES_PER_24K_FRAME:
# raise ValueError(
# f"Expected exactly {BYTES_PER_24K_FRAME} bytes for a 20ms 24k frame, "
# f"but got {len(frame_24k_bytes)}"
# )
# # Convert bytes -> int16 array (480 samples)
# samples_24k = np.frombuffer(frame_24k_bytes, dtype=np.int16)
#
# # Resample 24k => 16k (ratio=2/3)
# # Should get 320 samples out if the chunk was exactly 480 samples in
# samples_16k = resample_poly(samples_24k, up=2, down=3)
#
# # Round & convert to int16
# samples_16k = np.rint(samples_16k).astype(np.int16)
#
# # Convert back to bytes
# frame_16k_bytes = samples_16k.tobytes()
# if len(frame_16k_bytes) != BYTES_PER_16K_FRAME:
# raise ValueError(
# f"Expected exactly {BYTES_PER_16K_FRAME} bytes after resampling "
# f"to 20ms@16kHz, got {len(frame_16k_bytes)}"
# )
# return frame_16k_bytes
#
async def write_audio_to_file(audio_base64: str, filename: str = "output_audio.raw") -> None:
"""Decode the base64-encoded audio and write (append) it to a file asynchronously."""
try:
audio_bytes = base64.b64decode(audio_base64)
# Use asyncio.to_thread to perform file I/O without blocking the event loop
await asyncio.to_thread(_write_bytes_to_file, audio_bytes, filename)
logger.info(f"Wrote {len(audio_bytes)} bytes to {filename}")
except (OSError, base64.binascii.Error) as e: # type: ignore[attr-defined]
logger.error(f"Error writing audio to file: {e}")
def _write_bytes_to_file(data: bytes, filename: str) -> None:
"""Helper function to write bytes to a file using a context manager."""
with Path(filename).open("ab") as f:
f.write(data)

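`write_audio_to_file` decodes the base64 payload and appends it via `asyncio.to_thread`, so blocking file I/O never stalls the event loop. A self-contained version of the same pattern against a temp file:

```python
import asyncio
import base64
import tempfile
from pathlib import Path

def _append_bytes(data: bytes, filename: str) -> None:
    """Blocking append, run in a worker thread."""
    with Path(filename).open("ab") as f:
        f.write(data)

async def write_audio(audio_base64: str, filename: str) -> int:
    audio_bytes = base64.b64decode(audio_base64)
    # to_thread keeps the blocking append off the event loop
    await asyncio.to_thread(_append_bytes, audio_bytes, filename)
    return len(audio_bytes)

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
payload = base64.b64encode(b"\x00\x01" * 10).decode()
written = asyncio.run(write_audio(payload, tmp.name))
saved = Path(tmp.name).read_bytes()
```

Opening in `"ab"` mode means successive frames accumulate in one raw file, which is what the voice pipeline wants for continuous audio.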

@@ -83,6 +83,7 @@ dependencies = [
"greenlet>=3.1.1",
"jsonquerylang>=1.1.1",
"sqlalchemy[aiosqlite]>=2.0.38,<3.0.0",
"elevenlabs>=1.54.0",
]
[dependency-groups]

Binary file not shown.


@@ -0,0 +1,101 @@
import numpy as np
import pytest
import webrtcvad
from langflow.utils.voice_utils import (
BYTES_PER_16K_FRAME,
BYTES_PER_24K_FRAME,
SAMPLE_RATE_24K,
VAD_SAMPLE_RATE_16K,
resample_24k_to_16k,
)
def test_resample_24k_to_16k_valid_frame():
"""Test that valid 960-byte frames (20ms @ 24kHz) resample to 640 bytes (20ms @ 16kHz)."""
# Generate a fake 20ms @ 24kHz frame (960 bytes)
duration_samples_24k = int(0.02 * SAMPLE_RATE_24K) # 480 samples
# Use the newer numpy random Generator
rng = np.random.default_rng()
fake_frame_24k = (rng.random(duration_samples_24k) * 32767).astype(np.int16)
frame_24k_bytes = fake_frame_24k.tobytes()
assert len(frame_24k_bytes) == BYTES_PER_24K_FRAME # 960
# Resample
frame_16k_bytes = resample_24k_to_16k(frame_24k_bytes)
# Check length after resampling
assert len(frame_16k_bytes) == BYTES_PER_16K_FRAME # 640
def test_resample_24k_to_16k_invalid_frame():
"""Test that passing an invalid size frame raises a ValueError."""
invalid_frame = b"\x00\x01" * 100 # only 200 bytes, not 960
with pytest.raises(ValueError, match="Expected exactly"):
_ = resample_24k_to_16k(invalid_frame)
def test_webrtcvad_silence_detection():
"""Make sure that passing all-zero frames leads to is_speech == False."""
vad = webrtcvad.Vad(mode=0)
# Generate 1 second of silence @16k, chunk it in 20ms frames
num_samples = VAD_SAMPLE_RATE_16K # 1 second
silent_audio = np.zeros(num_samples, dtype=np.int16).tobytes()
frame_size = BYTES_PER_16K_FRAME # 640
num_frames = len(silent_audio) // frame_size
speech_frames = 0
for i in range(num_frames):
frame_16k = silent_audio[i * frame_size : (i + 1) * frame_size]
is_speech = vad.is_speech(frame_16k, VAD_SAMPLE_RATE_16K)
if is_speech:
speech_frames += 1
# Expect zero frames labeled as speech
assert speech_frames == 0
def test_webrtcvad_with_real_data():
"""End-to-end test.
- Generate synthetic 24kHz audio
- Break into 20ms frames
- Resample to 16k
- Check how many frames VAD detects as speech.
This test is approximate, since random audio won't always be "speech."
"""
# Instead of reading from a file, generate synthetic audio
# Create 1 second of random audio data at 24kHz
num_samples = SAMPLE_RATE_24K # 1 second
rng = np.random.default_rng(seed=42) # Use a fixed seed for reproducibility
# Generate random audio (this won't be detected as speech, but that's fine for testing)
raw_data_24k = (rng.random(num_samples) * 32767).astype(np.int16).tobytes()
# We'll chunk into 20ms frames (960 bytes each)
frame_size_24k = BYTES_PER_24K_FRAME # 960
total_frames = len(raw_data_24k) // frame_size_24k
vad = webrtcvad.Vad(mode=2)
resampled_all = bytearray()
speech_count = 0
for i in range(total_frames):
frame_24k = raw_data_24k[i * frame_size_24k : (i + 1) * frame_size_24k]
frame_16k = resample_24k_to_16k(frame_24k)
resampled_all.extend(frame_16k) # Append to our buffer
is_speech = vad.is_speech(frame_16k, VAD_SAMPLE_RATE_16K)
if is_speech:
speech_count += 1
# For random noise, we expect very few frames to be detected as speech
# We're not making a strict assertion, just verifying the process works
assert len(resampled_all) == (total_frames * BYTES_PER_16K_FRAME)
# Record the speech detection rate (random noise should rarely register as speech)
speech_rate = speech_count / total_frames if total_frames > 0 else 0
assert 0 <= speech_rate <= 1


@@ -1005,14 +1005,14 @@
},
"node_modules/@esbuild/darwin-arm64": {
"version": "0.21.5",
"resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.21.5.tgz",
"integrity": "sha512-DwqXqZyuk5AiWWf3UfLiRDJ5EDd49zg6O9wclZ7kUMv2WRFr4HKjXp/5t8JZ11QbQfUS6/cRCKGwYhtNAY88kQ==",
"resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.21.5.tgz",
"integrity": "sha512-1rYdTpyv03iycF1+BhzrzQJCdOuAOtaqHTWJZCWvijKD2N5Xu0TtVC8/+1faWqcP9iBCWOmjmhoH94dH82BxPQ==",
"cpu": [
"arm64"
"x64"
],
"optional": true,
"os": [
"darwin"
"linux"
],
"engines": {
"node": ">=12"
@@ -2007,14 +2007,14 @@
},
"node_modules/@million/lint/node_modules/@esbuild/darwin-arm64": {
"version": "0.20.2",
"resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.20.2.tgz",
"integrity": "sha512-4J6IRT+10J3aJH3l1yzEg9y3wkTDgDk7TSDFX+wKFiWjqWp/iCfLIYzGyasx9l0SAFPT1HwSCR+0w/h1ES/MjA==",
"resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.20.2.tgz",
"integrity": "sha512-1MdwI6OOTsfQfek8sLwgyjOXAu+wKhLEoaOLTjbijk6E2WONYpH9ZU2mNtR+lZ2B4uwr+usqGuVfFT9tMtGvGw==",
"cpu": [
"arm64"
"x64"
],
"optional": true,
"os": [
"darwin"
"linux"
],
"engines": {
"node": ">=12"
@@ -2415,14 +2415,14 @@
},
"node_modules/@napi-rs/nice-darwin-arm64": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/@napi-rs/nice-darwin-arm64/-/nice-darwin-arm64-1.0.1.tgz",
"integrity": "sha512-91k3HEqUl2fsrz/sKkuEkscj6EAj3/eZNCLqzD2AA0TtVbkQi8nqxZCZDMkfklULmxLkMxuUdKe7RvG/T6s2AA==",
"resolved": "https://registry.npmjs.org/@napi-rs/nice-linux-x64-gnu/-/nice-linux-x64-gnu-1.0.1.tgz",
"integrity": "sha512-XQAJs7DRN2GpLN6Fb+ZdGFeYZDdGl2Fn3TmFlqEL5JorgWKrQGRUrpGKbgZ25UeZPILuTKJ+OowG2avN8mThBA==",
"cpu": [
"arm64"
"x64"
],
"optional": true,
"os": [
"darwin"
"linux"
],
"engines": {
"node": ">= 10"
@@ -4122,11 +4122,11 @@
"resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.36.0.tgz",
"integrity": "sha512-JQ1Jk5G4bGrD4pWJQzWsD8I1n1mgPXq33+/vP4sk8j/z/C2siRuxZtaUA7yMTf71TCZTZl/4e1bfzwUmFb3+rw==",
"cpu": [
"arm64"
"x64"
],
"optional": true,
"os": [
"darwin"
"linux"
]
},
"node_modules/@rollup/rollup-darwin-x64": {
@@ -4699,12 +4699,12 @@
"resolved": "https://registry.npmjs.org/@swc/core-darwin-arm64/-/core-darwin-arm64-1.11.11.tgz",
"integrity": "sha512-vJcjGVDB8cZH7zyOkC0AfpFYI/7GHKG0NSsH3tpuKrmoAXJyCYspKPGid7FT53EAlWreN7+Pew+bukYf5j+Fmg==",
"cpu": [
"arm64"
"x64"
],
"dev": true,
"optional": true,
"os": [
"darwin"
"linux"
],
"engines": {
"node": ">=10"
@@ -8449,19 +8449,6 @@
"optional": true,
"peer": true
},
"node_modules/fsevents": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
"integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
"hasInstallScript": true,
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/function-bind": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz",
@@ -15644,19 +15631,6 @@
}
}
},
"node_modules/vite/node_modules/fsevents": {
"version": "2.3.3",
"resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz",
"integrity": "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==",
"hasInstallScript": true,
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/w3c-keyname": {
"version": "2.2.8",
"resolved": "https://registry.npmjs.org/w3c-keyname/-/w3c-keyname-2.2.8.tgz",


@@ -166,7 +166,7 @@ export default function GlobalVariableModal({
</Tabs>
</div>
<div className="space-y-2">
<div className="space-y-2" id="global-variable-modal-inputs">
<Label>Name*</Label>
<Input
value={key}


@@ -56,13 +56,15 @@ const OptionBadge = ({
className={cn("flex items-center gap-1 truncate", className)}
>
<div className="truncate">{option}</div>
<X
className="h-3 w-3 cursor-pointer bg-transparent hover:text-destructive"
onClick={(e) =>
onRemove(e as unknown as React.MouseEvent<HTMLButtonElement>)
}
data-testid="remove-icon-badge"
/>
<div>
<X
className="h-3 w-3 cursor-pointer bg-transparent hover:text-destructive"
onClick={(e) =>
onRemove(e as unknown as React.MouseEvent<HTMLButtonElement>)
}
data-testid="remove-icon-badge"
/>
</div>
</Badge>
);
@@ -71,17 +73,24 @@ const CommandItemContent = ({
isSelected,
optionButton,
nodeStyle,
commandWidth,
}: {
option: string;
isSelected: boolean;
optionButton: (option: string) => ReactNode;
nodeStyle?: string;
commandWidth?: string;
}) => (
<div className="group flex w-full items-center justify-between">
<div className="flex items-center justify-between">
<SelectionIndicator isSelected={isSelected} />
<ShadTooltip content={option} side="left">
<div className={cn("truncate pr-2", nodeStyle ? "max-w-52" : "w-full")}>
<div
className={cn("w-full truncate pr-2", nodeStyle && "max-w-52")}
style={{
maxWidth: commandWidth,
}}
>
<span>{option}</span>
</div>
</ShadTooltip>
@@ -119,6 +128,7 @@ const getInputClassName = (
disabled: boolean,
password: boolean,
selectedOptions: string[],
blockAddNewGlobalVariable: boolean = false,
) => {
return cn(
"popover-input nodrag w-full truncate px-1 pr-4",
@@ -127,6 +137,7 @@
disabled &&
"disabled:text-muted disabled:opacity-100 placeholder:disabled:text-muted-foreground",
password && "text-clip pr-14",
blockAddNewGlobalVariable && "text-clip pr-8",
selectedOptions?.length >= 0 && "cursor-default",
);
};
@@ -173,6 +184,8 @@ const CustomInputPopover = ({
optionButton,
autoFocus,
popoverWidth,
commandWidth,
blockAddNewGlobalVariable,
}) => {
const [isFocused, setIsFocused] = useState(false);
const memoizedOptions = useMemo(() => new Set<string>(options), [options]);
@@ -230,7 +243,11 @@
</div>
) : selectedOption?.length > 0 ? (
<ShadTooltip content={selectedOption} side="left">
<div>
<div
style={{
maxWidth: commandWidth,
}}
>
<OptionBadge
option={selectedOption}
onRemove={(e) => handleRemoveOption(selectedOption, e)}
@@ -266,6 +283,7 @@
disabled,
password,
selectedOptions,
blockAddNewGlobalVariable,
)}
placeholder={
selectedOptions?.length > 0 || selectedOption ? "" : placeholder
@@ -318,6 +336,7 @@
}
optionButton={optionButton}
nodeStyle={nodeStyle}
commandWidth={commandWidth}
/>
</CommandItem>
))}


@@ -40,6 +40,8 @@ export default function InputComponent({
nodeStyle,
isToolMode,
popoverWidth,
commandWidth,
blockAddNewGlobalVariable = false,
}: InputComponentType): JSX.Element {
const [pwdVisible, setPwdVisible] = useState(false);
const refInput = useRef<HTMLInputElement>(null);
@@ -151,54 +153,57 @@
optionsPlaceholder={optionsPlaceholder}
nodeStyle={nodeStyle}
popoverWidth={popoverWidth}
commandWidth={commandWidth}
blockAddNewGlobalVariable={blockAddNewGlobalVariable}
/>
)}
</>
)}
{(setSelectedOption || setSelectedOptions) && (
<span
className={cn(
password && selectedOption === "" ? "right-8" : "right-0",
"absolute inset-y-0 flex items-center pr-2.5",
disabled && "cursor-not-allowed opacity-50",
)}
>
<button
disabled={disabled}
onClick={(e) => {
if (disabled) return;
setShowOptions(!showOptions);
e.preventDefault();
e.stopPropagation();
}}
{(setSelectedOption || setSelectedOptions) &&
!blockAddNewGlobalVariable && (
<span
className={cn(
onChange && setSelectedOption && selectedOption !== ""
? "text-accent-emerald-foreground"
: "text-placeholder-foreground",
!disabled && "hover:text-foreground",
password && selectedOption === "" ? "right-8" : "right-0",
"absolute inset-y-0 flex items-center pr-2.5",
disabled && "cursor-not-allowed opacity-50",
)}
>
<ForwardedIconComponent
name={
getIconName(
disabled!,
selectedOption!,
optionsIcon,
nodeStyle!,
isToolMode!,
) || "ChevronsUpDown"
}
<button
disabled={disabled}
onClick={(e) => {
if (disabled) return;
setShowOptions(!showOptions);
e.preventDefault();
e.stopPropagation();
}}
className={cn(
disabled ? "cursor-grab text-placeholder" : "cursor-pointer",
"icon-size",
onChange && setSelectedOption && selectedOption !== ""
? "text-accent-emerald-foreground"
: "text-placeholder-foreground",
!disabled && "hover:text-foreground",
)}
strokeWidth={ICON_STROKE_WIDTH}
aria-hidden="true"
/>
</button>
</span>
)}
>
<ForwardedIconComponent
name={
getIconName(
disabled!,
selectedOption!,
optionsIcon,
nodeStyle!,
isToolMode!,
) || "ChevronsUpDown"
}
className={cn(
disabled ? "cursor-grab text-placeholder" : "cursor-pointer",
"icon-size",
)}
strokeWidth={ICON_STROKE_WIDTH}
aria-hidden="true"
/>
</button>
</span>
)}
{password && (!setSelectedOption || selectedOption === "") && (
<button


@@ -2,6 +2,8 @@ import {
useDeleteGlobalVariables,
useGetGlobalVariables,
} from "@/controllers/API/queries/variables";
import GeneralDeleteConfirmationModal from "@/shared/components/delete-confirmation-modal";
import GeneralGlobalVariableModal from "@/shared/components/global-variable-modal";
import { useGlobalVariablesStore } from "@/stores/globalVariablesStore/globalVariables";
import { useEffect } from "react";
import DeleteConfirmationModal from "../../../../../modals/deleteConfirmationModal";
@@ -26,10 +28,7 @@ export default function InputGlobalComponent({
placeholder,
isToolMode = false,
}: InputProps<string, InputGlobalComponentType>): JSX.Element {
const setErrorData = useAlertStore((state) => state.setErrorData);
const { data: globalVariables } = useGetGlobalVariables();
const { mutate: mutateDeleteGlobalVariable } = useDeleteGlobalVariables();
const unavailableFields = useGlobalVariablesStore(
(state) => state.unavailableFields,
);
@@ -59,31 +58,9 @@
}
}, [globalVariables, unavailableFields]);
async function handleDelete(key: string) {
if (!globalVariables) return;
const id = globalVariables.find((variable) => variable.name === key)?.id;
if (id !== undefined) {
mutateDeleteGlobalVariable(
{ id },
{
onSuccess: () => {
if (value === key && load_from_db) {
handleOnNewValue({ value: "", load_from_db: false });
}
},
onError: () => {
setErrorData({
title: "Error deleting variable",
list: [cn("ID not found for variable: ", key)],
});
},
},
);
} else {
setErrorData({
title: "Error deleting variable",
list: [cn("ID not found for variable: ", key)],
});
function handleDelete(key: string) {
if (value === key && load_from_db) {
handleOnNewValue({ value: "", load_from_db: false });
}
}
@@ -100,43 +77,12 @@
options={globalVariables?.map((variable) => variable.name) ?? []}
optionsPlaceholder={"Global Variables"}
optionsIcon="Globe"
optionsButton={
<GlobalVariableModal disabled={disabled}>
<CommandItem value="doNotFilter-addNewVariable">
<ForwardedIconComponent
name="Plus"
className={cn("mr-2 h-4 w-4 text-primary")}
aria-hidden="true"
/>
<span>Add New Variable</span>
</CommandItem>
</GlobalVariableModal>
}
optionsButton={<GeneralGlobalVariableModal />}
optionButton={(option) => (
<DeleteConfirmationModal
onConfirm={(e) => {
e.stopPropagation();
e.preventDefault();
handleDelete(option);
}}
description={'variable "' + option + '"'}
asChild
>
<button
onClick={(e) => {
e.stopPropagation();
}}
className="pr-1"
>
<ForwardedIconComponent
name="Trash2"
className={cn(
"h-4 w-4 text-primary opacity-0 hover:text-status-red group-hover:opacity-100",
)}
aria-hidden="true"
/>
</button>
</DeleteConfirmationModal>
<GeneralDeleteConfirmationModal
option={option}
onConfirmDelete={() => handleDelete(option)}
/>
)}
selectedOption={
load_from_db &&

View file

@@ -14,6 +14,8 @@ const buttonVariants = cva(
"bg-destructive text-destructive-foreground hover:bg-destructive/90",
outline:
"border border-input hover:bg-input hover:text-accent-foreground ",
outlineAmber:
"border border-accent-amber-foreground hover:border-accent-amber",
primary:
"border bg-background text-secondary-foreground hover:bg-muted hover:shadow-sm",
warning:
@@ -31,6 +33,7 @@ const buttonVariants = cva(
},
size: {
default: "h-10 py-2 px-4",
md: "h-8 py-2 px-4",
sm: "h-9 px-3 rounded-md",
xs: "py-0.5 px-3 rounded-md",
lg: "h-11 px-8 rounded-md",

View file

@@ -950,8 +950,8 @@ export const LANGFLOW_REFRESH_TOKEN = "refresh_token_lf";
export const LANGFLOW_ACCESS_TOKEN_EXPIRE_SECONDS = 60 * 60 - 60 * 60 * 0.1;
export const LANGFLOW_ACCESS_TOKEN_EXPIRE_SECONDS_ENV =
Number(process.env.ACCESS_TOKEN_EXPIRE_SECONDS) -
Number(process.env.ACCESS_TOKEN_EXPIRE_SECONDS) * 0.1;
Number(process.env?.ACCESS_TOKEN_EXPIRE_SECONDS ?? 60) -
Number(process.env?.ACCESS_TOKEN_EXPIRE_SECONDS ?? 60) * 0.1;
export const TEXT_FIELD_TYPES: string[] = ["str", "SecretStr"];
export const NODE_WIDTH = 384;
export const NODE_HEIGHT = NODE_WIDTH * 3;
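The hardened expression above falls back to 60 seconds when `ACCESS_TOKEN_EXPIRE_SECONDS` is unset; without the fallback, `Number(undefined)` is `NaN` and the whole constant becomes `NaN`. A minimal sketch of the 10%-early expiry computation (the helper name is illustrative, not part of the codebase):

```typescript
// Compute the effective token lifetime: refresh 10% before the token
// actually expires, defaulting to 60 seconds when the env var is unset.
function effectiveExpireSeconds(envValue: string | undefined): number {
  const seconds = Number(envValue ?? 60); // avoid Number(undefined) === NaN
  return seconds - seconds * 0.1;
}

console.log(effectiveExpireSeconds(undefined)); // 54
console.log(effectiveExpireSeconds("3600"));    // 3240
```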
@@ -1006,6 +1006,7 @@ export const DEFAULT_PLACEHOLDER = "Type something...";
export const DEFAULT_TOOLSET_PLACEHOLDER = "Used as a tool";
export const SAVE_API_KEY_ALERT = "API key saved successfully";
export const PLAYGROUND_BUTTON_NAME = "Playground";
export const POLLING_MESSAGES = {
ENDPOINT_NOT_AVAILABLE: "Endpoint not available",

View file

@@ -24,6 +24,7 @@ export const URLs = {
STARTER_PROJECTS: `starter-projects`,
SIDEBAR_CATEGORIES: `sidebar_categories`,
ALL: `all`,
VOICE: `voice`,
PUBLIC_FLOW: `/flows/public_flow`,
} as const;

View file

@@ -16,15 +16,13 @@ export const useGetTypes: useQueryFunctionType<undefined> = (options) => {
`${getURL("ALL")}?force_refresh=true`,
);
const data = response?.data;
console.log("[Types] Got types data:", data);
setTypes(data);
return data;
} catch {
(error) => {
console.error("An error has occurred while fetching types.");
console.log(error);
setLoading(false);
throw error;
};
} catch (error) {
console.error("[Types] Error fetching types:", error);
setLoading(false);
throw error;
}
};

View file

@@ -0,0 +1,63 @@
import { useMessagesStore } from "@/stores/messagesStore";
import { UseMutationResult } from "@tanstack/react-query";
import { ColDef, ColGroupDef } from "ag-grid-community";
import { extractColumnsFromRows } from "../../../../utils/utils";
import { api } from "../../api";
import { getURL } from "../../helpers/constants";
import { UseRequestProcessor } from "../../services/request-processor";
interface MessagesQueryParams {
id?: string;
mode: "intersection" | "union";
excludedFields?: string[];
params?: object;
}
interface MessagesResponse {
rows: Array<object>;
columns: Array<ColDef | ColGroupDef>;
}
export const useGetMessagesMutation = (
options?: any,
): UseMutationResult<
MessagesResponse,
unknown,
MessagesQueryParams,
unknown
> => {
const { mutate } = UseRequestProcessor();
const getMessagesFn = async (
payload: MessagesQueryParams,
): Promise<MessagesResponse> => {
const { id, mode, excludedFields, params } = payload;
const config = {};
if (id) {
config["params"] = { flow_id: id };
}
if (params) {
config["params"] = { ...config["params"], ...params };
}
const data = await api.get<any>(`${getURL("MESSAGES")}`, config);
const columns = extractColumnsFromRows(data.data, mode, excludedFields);
useMessagesStore.getState().setMessages(data.data);
return { rows: data.data, columns };
};
// Cast the mutation to the correct type
const mutation = mutate(
["useGetMessagesMutation"],
getMessagesFn,
options,
) as UseMutationResult<
MessagesResponse,
unknown,
MessagesQueryParams,
unknown
>;
return mutation;
};

View file

@@ -0,0 +1,208 @@
import { useMessagesStore } from "@/stores/messagesStore";
import { UseMutationResult } from "@tanstack/react-query";
import { ColDef, ColGroupDef } from "ag-grid-community";
import { useEffect, useRef } from "react";
import { extractColumnsFromRows } from "../../../../utils/utils";
import { api } from "../../api";
import { getURL } from "../../helpers/constants";
import { UseRequestProcessor } from "../../services/request-processor";
interface MessagesQueryParams {
id?: string;
mode: "intersection" | "union";
excludedFields?: string[];
params?: object;
onSuccess?: (data: MessagesResponse) => void;
stopPollingOn?: (data: MessagesResponse) => boolean;
}
interface MessagesResponse {
rows: Array<object>;
columns: Array<ColDef | ColGroupDef>;
}
interface PollingItem {
interval: NodeJS.Timeout;
timestamp: number;
id: string;
callback: () => Promise<void>;
}
const MessagesPollingManager = {
pollingQueue: new Map<string, PollingItem[]>(),
activePolls: new Map<string, PollingItem>(),
enqueuePolling(id: string, pollingItem: PollingItem) {
if (!this.pollingQueue.has(id)) {
this.pollingQueue.set(id, []);
}
this.pollingQueue.set(
id,
(this.pollingQueue.get(id) || []).filter(
(item) => item.timestamp !== pollingItem.timestamp,
),
);
this.pollingQueue.get(id)?.push(pollingItem);
if (!this.activePolls.has(id)) {
this.startNextPolling(id);
}
},
startNextPolling(id: string) {
const queue = this.pollingQueue.get(id) || [];
if (queue.length === 0) {
this.activePolls.delete(id);
return;
}
const nextPoll = queue[0];
this.activePolls.set(id, nextPoll);
nextPoll.callback();
},
stopPoll(id: string) {
const activePoll = this.activePolls.get(id);
if (activePoll) {
clearInterval(activePoll.interval);
this.activePolls.delete(id);
const queue = this.pollingQueue.get(id) || [];
this.pollingQueue.set(
id,
queue.filter((item) => item.timestamp !== activePoll.timestamp),
);
this.startNextPolling(id);
}
},
stopAll() {
this.activePolls.forEach((poll) => clearInterval(poll.interval));
this.activePolls.clear();
this.pollingQueue.clear();
},
removeFromQueue(id: string, timestamp: number) {
const queue = this.pollingQueue.get(id) || [];
this.pollingQueue.set(
id,
queue.filter((item) => item.timestamp !== timestamp),
);
},
};
export const useGetMessagesPollingMutation = (
options?: any,
): UseMutationResult<
MessagesResponse,
unknown,
MessagesQueryParams,
unknown
> => {
const { mutate } = UseRequestProcessor();
const requestIdRef = useRef<string | null>(null);
const requestInProgressRef = useRef<Record<string, boolean>>({});
// Default polling interval of 5 seconds (5000ms)
const POLLING_INTERVAL = 5000;
const getMessagesFn = async (
payload: MessagesQueryParams,
): Promise<MessagesResponse> => {
const requestId = payload.id || "default";
if (requestInProgressRef.current[requestId]) {
return Promise.reject("Request already in progress");
}
try {
requestInProgressRef.current[requestId] = true;
const { id, mode, excludedFields, params } = payload;
const config = {};
if (id) {
config["params"] = { flow_id: id };
}
if (params) {
config["params"] = { ...config["params"], ...params };
}
const data = await api.get<any>(`${getURL("MESSAGES")}`, config);
const columns = extractColumnsFromRows(data.data, mode, excludedFields);
useMessagesStore.getState().setMessages(data.data);
return { rows: data.data, columns };
} finally {
requestInProgressRef.current[requestId] = false;
}
};
const startPolling = (payload: MessagesQueryParams) => {
const requestId = payload.id || "default";
if (requestInProgressRef.current[requestId]) {
return Promise.reject("Request already in progress");
}
if (
requestIdRef.current === requestId &&
MessagesPollingManager.activePolls.has(requestId)
) {
return Promise.resolve({ rows: [], columns: [] });
}
requestIdRef.current = requestId;
const timestamp = Date.now();
const pollCallback = async () => {
const data = await getMessagesFn(payload);
payload.onSuccess?.(data);
if (payload.stopPollingOn?.(data)) {
MessagesPollingManager.stopPoll(requestId);
}
};
const intervalId = setInterval(pollCallback, POLLING_INTERVAL);
const pollingItem: PollingItem = {
interval: intervalId,
timestamp,
id: requestId,
callback: pollCallback,
};
MessagesPollingManager.enqueuePolling(requestId, pollingItem);
return getMessagesFn(payload).then((data) => {
payload.onSuccess?.(data);
if (payload.stopPollingOn?.(data)) {
MessagesPollingManager.stopPoll(requestId);
}
return data;
});
};
useEffect(() => {
return () => {
if (requestIdRef.current) {
MessagesPollingManager.stopPoll(requestIdRef.current);
}
};
}, []);
// Cast the mutation to the correct type
const mutation = mutate(
["useGetMessagesMutation"],
(payload: MessagesQueryParams) =>
startPolling(payload) ?? Promise.reject("Failed to start polling"),
options,
) as UseMutationResult<
MessagesResponse,
unknown,
MessagesQueryParams,
unknown
>;
return mutation;
};
export { MessagesPollingManager };
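MessagesPollingManager keeps at most one active poll per flow id: new polls for the same id queue behind the active one, a queued duplicate with the same timestamp is dropped before enqueueing, and stopping the active poll promotes the next queued item. A timer-free sketch of that queueing discipline (class and field names are illustrative, not the module's actual exports):

```typescript
// Simplified model of the per-id polling queue: one active poll per id,
// stop() removes the active poll and promotes the next queued one.
type Poll = { timestamp: number };

class PollQueue {
  private queue = new Map<string, Poll[]>();
  private active = new Map<string, Poll>();

  enqueue(id: string, poll: Poll) {
    // drop a stale entry with the same timestamp before re-adding
    const items = (this.queue.get(id) ?? []).filter(
      (p) => p.timestamp !== poll.timestamp,
    );
    items.push(poll);
    this.queue.set(id, items);
    if (!this.active.has(id)) this.startNext(id);
  }

  private startNext(id: string) {
    const items = this.queue.get(id) ?? [];
    if (items.length === 0) {
      this.active.delete(id);
      return;
    }
    this.active.set(id, items[0]);
  }

  stop(id: string) {
    const current = this.active.get(id);
    if (!current) return;
    this.active.delete(id);
    this.queue.set(
      id,
      (this.queue.get(id) ?? []).filter(
        (p) => p.timestamp !== current.timestamp,
      ),
    );
    this.startNext(id);
  }

  activeTimestamp(id: string): number | undefined {
    return this.active.get(id)?.timestamp;
  }
}
```

The first enqueue for an id starts immediately; later ones wait until `stop` clears the slot, which mirrors how the real manager advances after `clearInterval`.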

View file

@@ -0,0 +1,49 @@
import { useVoiceStore } from "@/stores/voiceStore";
import { useQueryFunctionType } from "@/types/api";
import { api } from "../../api";
import { getURL } from "../../helpers/constants";
import { UseRequestProcessor } from "../../services/request-processor";
export const useGetVoiceList: useQueryFunctionType<undefined, any> = (
options,
) => {
const { query } = UseRequestProcessor();
const setVoices = useVoiceStore((state) => state.setVoices);
const voices = useVoiceStore((state) => state.voices);
const getVoiceListFn = async (): Promise<
{
name: string;
value: string;
}[]
> => {
if (voices.length > 0) {
return voices;
}
const res = await api.get(`${getURL("VOICE")}/elevenlabs/voice_ids`);
const data = res.data;
const voicesMapped = data.map((voice) => ({
name: voice.name,
value: voice.voice_id,
}));
setVoices(voicesMapped);
return voicesMapped;
};
const defaultOptions = {
refetchOnMount: false,
refetchOnWindowFocus: false,
staleTime: 1000 * 60 * 5,
...options,
};
const queryResult = query(
["useGetVoiceList"],
getVoiceListFn,
defaultOptions,
);
return queryResult;
};
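The query above returns the store's voices when already populated and otherwise maps ElevenLabs' `voice_id` field into the `{ name, value }` shape the voice dropdown expects. A sketch of that cache-first behavior (the `fetchVoices` callback and the cache object are illustrative stand-ins for the API client and Zustand store):

```typescript
// Cache-first voice lookup: skip the network round-trip when the cache
// is warm; otherwise fetch, reshape, and populate the cache.
type ApiVoice = { name: string; voice_id: string };
type VoiceOption = { name: string; value: string };

async function getVoiceList(
  cache: { voices: VoiceOption[] },
  fetchVoices: () => Promise<ApiVoice[]>,
): Promise<VoiceOption[]> {
  if (cache.voices.length > 0) return cache.voices;
  const mapped = (await fetchVoices()).map((v) => ({
    name: v.name,
    value: v.voice_id, // dropdowns key on `value`, not the raw `voice_id`
  }));
  cache.voices = mapped;
  return mapped;
}
```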

View file

@@ -11,3 +11,4 @@ export const ENABLE_DATASTAX_LANGFLOW = false;
export const ENABLE_FILE_MANAGEMENT = true;
export const ENABLE_PUBLISH = true;
export const ENABLE_WIDGET = true;
export const ENABLE_VOICE_ASSISTANT = true;

View file

@@ -4,7 +4,7 @@ import { Separator } from "@/components/ui/separator";
import { cn } from "@/utils/utils";
import IconComponent from "../../../components/common/genericIconComponent";
import { ChatViewWrapperProps } from "../types/chat-view-wrapper";
import ChatView from "./chatView/chat-view";
import ChatView from "./chatView/components/chat-view";
export const ChatViewWrapper = ({
selectedViewField,

View file

@@ -1,16 +1,13 @@ import { Button } from "@/components/ui/button";
import { Button } from "@/components/ui/button";
import Loading from "@/components/ui/loading";
import { usePostUploadFile } from "@/controllers/API/queries/files/use-post-upload-file";
import useFileSizeValidator from "@/shared/hooks/use-file-size-validator";
import useAlertStore from "@/stores/alertStore";
import useFlowStore from "@/stores/flowStore";
import { useUtilityStore } from "@/stores/utilityStore";
import { useEffect, useRef } from "react";
import { AnimatePresence, motion } from "framer-motion";
import { useEffect, useRef, useState } from "react";
import ShortUniqueId from "short-unique-id";
import {
ALLOWED_IMAGE_INPUT_EXTENSIONS,
CHAT_INPUT_PLACEHOLDER,
CHAT_INPUT_PLACEHOLDER_SEND,
FS_ERROR_TEXT,
SN_ERROR_TEXT,
} from "../../../../../constants/constants";
@@ -19,12 +16,12 @@ import {
ChatInputType,
FilePreviewType,
} from "../../../../../types/components";
import FilePreview from "../fileComponent/components/file-preview";
import ButtonSendWrapper from "./components/button-send-wrapper";
import TextAreaWrapper from "./components/text-area-wrapper";
import UploadFileButton from "./components/upload-file-button";
import InputWrapper from "./components/input-wrapper";
import NoInputView from "./components/no-input";
import { VoiceAssistant } from "./components/voice-assistant/voice-assistant";
import useAutoResizeTextArea from "./hooks/use-auto-resize-text-area";
import useFocusOnUnlock from "./hooks/use-focus-unlock";
export default function ChatInput({
sendMessage,
inputRef,
@@ -42,6 +39,8 @@ export default function ChatInput({
const isBuilding = useFlowStore((state) => state.isBuilding);
const chatValue = useUtilityStore((state) => state.chatValueStore);
const [showAudioInput, setShowAudioInput] = useState(false);
useFocusOnUnlock(isBuilding, inputRef);
useAutoResizeTextArea(chatValue, inputRef);
@@ -164,8 +163,6 @@ export default function ChatInput({
);
};
const classNameFilePreview = `flex w-full items-center gap-2 py-2 overflow-auto custom-scroll`;
const handleButtonClick = () => {
fileInputRef.current!.click();
};
@@ -177,99 +174,56 @@
if (noInput) {
return (
<div className="flex h-full w-full flex-col items-center justify-center">
<div className="flex w-full flex-col items-center justify-center gap-3 rounded-md border border-input bg-muted p-2 py-4">
{!isBuilding ? (
<Button
data-testid="button-send"
className="font-semibold"
onClick={() => {
sendMessage({
repeat: 1,
});
}}
>
Run Flow
</Button>
) : (
<Button
onClick={stopBuilding}
data-testid="button-stop"
unstyled
className="form-modal-send-button cursor-pointer bg-muted text-foreground hover:bg-secondary-hover dark:hover:bg-input"
>
<div className="flex items-center gap-2 rounded-md text-[14px] font-medium">
Stop
<Loading className="h-[16px] w-[16px]" />
</div>
</Button>
)}
<p className="text-muted-foreground">
Add a{" "}
<a
className="underline underline-offset-4"
target="_blank"
href="https://docs.langflow.org/components-io#chat-input"
>
Chat Input
</a>{" "}
component to your flow to send messages.
</p>
</div>
</div>
<NoInputView
isBuilding={isBuilding}
sendMessage={sendMessage}
stopBuilding={stopBuilding}
/>
);
}
return (
<div className="flex w-full flex-col-reverse">
<div className="flex w-full flex-col rounded-md border border-input p-4 hover:border-muted-foreground focus:border-[1.75px] has-[:focus]:border-primary">
<TextAreaWrapper
isBuilding={isBuilding}
checkSendingOk={checkSendingOk}
send={send}
noInput={noInput}
chatValue={chatValue}
CHAT_INPUT_PLACEHOLDER={CHAT_INPUT_PLACEHOLDER}
CHAT_INPUT_PLACEHOLDER_SEND={CHAT_INPUT_PLACEHOLDER_SEND}
inputRef={inputRef}
files={files}
isDragging={isDragging}
/>
<div className={classNameFilePreview}>
{files.map((file) => (
<FilePreview
error={file.error}
file={file.file}
loading={file.loading}
key={file.id}
onDelete={() => {
handleDeleteFile(file);
}}
/>
))}
</div>
<div className="flex w-full items-end justify-between">
{!playgroundPage && (
<div className={isBuilding ? "cursor-not-allowed" : ""}>
<UploadFileButton
isBuilding={isBuilding}
fileInputRef={fileInputRef}
handleFileChange={handleFileChange}
handleButtonClick={handleButtonClick}
/>
</div>
)}
<div className={playgroundPage ? "ml-auto" : ""}>
<ButtonSendWrapper
send={send}
noInput={noInput}
chatValue={chatValue}
files={files}
/>
</div>
</div>
</div>
</div>
<AnimatePresence mode="wait">
{showAudioInput ? (
<motion.div
key="voice-assistant"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
transition={{ duration: 0.2 }}
>
<VoiceAssistant
flowId={currentFlowId}
setShowAudioInput={setShowAudioInput}
/>
</motion.div>
) : (
<motion.div
key="input-wrapper"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
transition={{ duration: 0.2 }}
>
<InputWrapper
isBuilding={isBuilding}
checkSendingOk={checkSendingOk}
send={send}
noInput={noInput}
chatValue={chatValue}
inputRef={inputRef}
files={files}
isDragging={isDragging}
handleDeleteFile={handleDeleteFile}
fileInputRef={fileInputRef}
handleFileChange={handleFileChange}
handleButtonClick={handleButtonClick}
setShowAudioInput={setShowAudioInput}
currentFlowId={currentFlowId}
playgroundPage={playgroundPage}
/>
</motion.div>
)}
</AnimatePresence>
);
}

View file

@@ -0,0 +1,113 @@
import { ENABLE_VOICE_ASSISTANT } from "@/customization/feature-flags";
import { FilePreviewType } from "@/types/components";
import React from "react";
import {
CHAT_INPUT_PLACEHOLDER,
CHAT_INPUT_PLACEHOLDER_SEND,
} from "../../../../../../constants/constants";
import FilePreview from "../../fileComponent/components/file-preview";
import ButtonSendWrapper from "./button-send-wrapper";
import TextAreaWrapper from "./text-area-wrapper";
import UploadFileButton from "./upload-file-button";
import VoiceButton from "./voice-assistant/components/voice-button";
interface InputWrapperProps {
isBuilding: boolean;
checkSendingOk: (event: React.KeyboardEvent<HTMLTextAreaElement>) => boolean;
send: () => void;
noInput: boolean;
chatValue: string;
inputRef: React.RefObject<HTMLTextAreaElement>;
files: FilePreviewType[];
isDragging: boolean;
handleDeleteFile: (file: FilePreviewType) => void;
fileInputRef: React.RefObject<HTMLInputElement>;
handleFileChange: (event: React.ChangeEvent<HTMLInputElement>) => void;
handleButtonClick: () => void;
setShowAudioInput: (value: boolean) => void;
currentFlowId: string;
playgroundPage: boolean;
}
const InputWrapper: React.FC<InputWrapperProps> = ({
isBuilding,
checkSendingOk,
send,
noInput,
chatValue,
inputRef,
files,
isDragging,
handleDeleteFile,
fileInputRef,
handleFileChange,
handleButtonClick,
setShowAudioInput,
currentFlowId,
playgroundPage,
}) => {
const classNameFilePreview = `flex w-full items-center gap-2 py-2 overflow-auto custom-scroll`;
return (
<div className="flex w-full flex-col-reverse">
<div
data-testid="input-wrapper"
className="flex w-full flex-col rounded-md border border-input p-4 hover:border-muted-foreground focus:border-[1.75px] has-[:focus]:border-primary"
>
<TextAreaWrapper
isBuilding={isBuilding}
checkSendingOk={checkSendingOk}
send={send}
noInput={noInput}
chatValue={chatValue}
CHAT_INPUT_PLACEHOLDER={CHAT_INPUT_PLACEHOLDER}
CHAT_INPUT_PLACEHOLDER_SEND={CHAT_INPUT_PLACEHOLDER_SEND}
inputRef={inputRef}
files={files}
isDragging={isDragging}
/>
<div className={classNameFilePreview}>
{files.map((file) => (
<FilePreview
error={file.error}
file={file.file}
loading={file.loading}
key={file.id}
onDelete={() => {
handleDeleteFile(file);
}}
/>
))}
</div>
<div className="flex w-full items-end justify-between">
{!playgroundPage && (
<div className={isBuilding ? "cursor-not-allowed" : ""}>
<UploadFileButton
isBuilding={isBuilding}
fileInputRef={fileInputRef}
handleFileChange={handleFileChange}
handleButtonClick={handleButtonClick}
/>
</div>
)}
<div className="flex items-center gap-2">
{ENABLE_VOICE_ASSISTANT && (
<VoiceButton toggleRecording={() => setShowAudioInput(true)} />
)}
<div className={playgroundPage ? "ml-auto" : ""}>
<ButtonSendWrapper
send={send}
noInput={noInput}
chatValue={chatValue}
files={files}
/>
</div>
</div>
</div>
</div>
</div>
);
};
export default InputWrapper;

View file

@@ -0,0 +1,64 @@
import { Button } from "@/components/ui/button";
import Loading from "@/components/ui/loading";
import React from "react";
interface NoInputViewProps {
isBuilding: boolean;
sendMessage: (args: { repeat: number }) => void;
stopBuilding: () => void;
}
const NoInputView: React.FC<NoInputViewProps> = ({
isBuilding,
sendMessage,
stopBuilding,
}) => {
return (
<div className="flex h-full w-full flex-col items-center justify-center">
<div className="flex w-full flex-col items-center justify-center gap-3 rounded-md border border-input bg-muted p-2 py-4">
{!isBuilding ? (
<Button
data-testid="button-send"
className="font-semibold"
onClick={() => {
sendMessage({
repeat: 1,
});
}}
>
Run Flow
</Button>
) : (
<Button
onClick={stopBuilding}
data-testid="button-stop"
unstyled
className="form-modal-send-button cursor-pointer bg-muted text-foreground hover:bg-secondary-hover dark:hover:bg-input"
>
<div className="flex items-center gap-2 rounded-md text-[14px] font-medium">
Stop
<Loading className="h-[16px] w-[16px]" />
</div>
</Button>
)}
<p className="text-muted-foreground">
Add a{" "}
<a
className="underline underline-offset-4"
target="_blank"
href="https://docs.langflow.org/components-io#chat-input"
>
Chat Input
</a>{" "}
component to your flow to send messages.
</p>
</div>
</div>
);
};
export default NoInputView;

View file

@@ -24,7 +24,7 @@ const UploadFileButton = ({
/>
<Button
disabled={isBuilding}
className={`flex h-[32px] w-[32px] items-center justify-center rounded-md bg-muted font-bold transition-all ${
className={`btn-playground-actions ${
isBuilding
? "cursor-not-allowed"
: "text-muted-foreground hover:text-primary"

View file

@@ -0,0 +1,431 @@
import IconComponent from "@/components/common/genericIconComponent";
import ShadTooltip from "@/components/common/shadTooltipComponent";
import InputComponent from "@/components/core/parameterRenderComponent/components/inputComponent";
import { getPlaceholder } from "@/components/core/parameterRenderComponent/helpers/get-placeholder-disabled";
import { Button } from "@/components/ui/button";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu";
import { Separator } from "@/components/ui/separator";
import { useGetVoiceList } from "@/controllers/API/queries/voice/use-get-voice-list";
import GeneralDeleteConfirmationModal from "@/shared/components/delete-confirmation-modal";
import GeneralGlobalVariableModal from "@/shared/components/global-variable-modal";
import { useGlobalVariablesStore } from "@/stores/globalVariablesStore/globalVariables";
import { useVoiceStore } from "@/stores/voiceStore";
import { getLocalStorage, setLocalStorage } from "@/utils/local-storage-util";
import { useEffect, useRef, useState } from "react";
import AudioSettingsHeader from "./components/header";
import LanguageSelect from "./components/language-select";
import MicrophoneSelect from "./components/microphone-select";
import VoiceSelect from "./components/voice-select";
const ALL_LANGUAGES = [
{ value: "en-US", name: "English (US)" },
{ value: "en-GB", name: "English (UK)" },
{ value: "it-IT", name: "Italian" },
{ value: "fr-FR", name: "French" },
{ value: "es-ES", name: "Spanish" },
{ value: "de-DE", name: "German" },
{ value: "ja-JP", name: "Japanese" },
{ value: "pt-BR", name: "Portuguese (Brazil)" },
{ value: "zh-CN", name: "Chinese (Simplified)" },
{ value: "ru-RU", name: "Russian" },
{ value: "ar-SA", name: "Arabic" },
{ value: "hi-IN", name: "Hindi" },
];
interface SettingsVoiceModalProps {
children?: React.ReactNode;
userOpenaiApiKey?: string;
userElevenLabsApiKey?: string;
hasElevenLabsApiKeyEnv?: boolean;
setShowSettingsModal: (
open: boolean,
openaiApiKey: string,
elevenLabsApiKey: string,
) => void;
hasOpenAIAPIKey: boolean;
language?: string;
setLanguage?: (language: string) => void;
handleClickSaveOpenAIApiKey: (openaiApiKey: string) => void;
isEditingOpenAIKey: boolean;
setIsEditingOpenAIKey: (isEditingOpenAIKey: boolean) => void;
}
const SettingsVoiceModal = ({
children,
userOpenaiApiKey,
userElevenLabsApiKey,
setShowSettingsModal,
hasOpenAIAPIKey,
language,
setLanguage,
handleClickSaveOpenAIApiKey,
isEditingOpenAIKey,
setIsEditingOpenAIKey,
}: SettingsVoiceModalProps) => {
const popupRef = useRef<HTMLDivElement>(null);
const [voice, setVoice] = useState<string>("alloy");
const [open, setOpen] = useState<boolean>(false);
const voices = useVoiceStore((state) => state.voices);
const shouldFetchVoices = voices.length === 0;
const [openaiApiKey, setOpenaiApiKey] = useState<string>(
userOpenaiApiKey ?? "",
);
const [elevenLabsApiKey, setElevenLabsApiKey] = useState<string>(
userElevenLabsApiKey ?? "",
);
const globalVariables = useGlobalVariablesStore(
(state) => state.globalVariablesEntries,
);
const openaiVoices = useVoiceStore((state) => state.openaiVoices);
const [allVoices, setAllVoices] = useState<
{
name: string;
value: string;
}[]
>([]);
const saveButtonClicked = useRef(false);
const {
data: voiceList,
isFetched,
refetch,
} = useGetVoiceList({
enabled: shouldFetchVoices,
refetchOnMount: shouldFetchVoices,
refetchOnWindowFocus: shouldFetchVoices,
staleTime: Infinity,
});
const [microphones, setMicrophones] = useState<MediaDeviceInfo[]>([]);
const [selectedMicrophone, setSelectedMicrophone] = useState<string>("");
const [currentLanguage, setCurrentLanguage] = useState(
localStorage.getItem("lf_preferred_language") || "en-US",
);
useEffect(() => {
if (isFetched) {
if (voiceList) {
const allVoicesMerged = [...openaiVoices, ...voiceList];
setAllVoices(voiceList.length > 0 ? allVoicesMerged : openaiVoices);
} else {
setAllVoices(openaiVoices);
}
}
}, [voiceList, isFetched, userElevenLabsApiKey]);
useEffect(() => {
const audioSettings = JSON.parse(
getLocalStorage("lf_audio_settings_playground") || "{}",
);
if (isFetched) {
if (audioSettings.provider) {
setVoice(audioSettings.voice);
} else {
setVoice(openaiVoices[0].value);
}
} else {
setVoice(openaiVoices[0].value);
}
}, [isFetched]);
const handleSetVoice = (value: string) => {
setVoice(value);
const isOpenAiVoice = openaiVoices.some((voice) => voice.value === value);
if (isOpenAiVoice) {
setLocalStorage(
"lf_audio_settings_playground",
JSON.stringify({
provider: "openai",
voice: value,
}),
);
} else {
setLocalStorage(
"lf_audio_settings_playground",
JSON.stringify({
provider: "elevenlabs",
voice: value,
}),
);
}
};
const onOpenChangeDropdownMenu = (open: boolean) => {
setOpen(open);
setShowSettingsModal(open, openaiApiKey, elevenLabsApiKey);
};
const checkIfGlobalVariableExists = (variable: string) => {
return globalVariables?.includes(variable);
};
const handleSetMicrophone = (deviceId: string) => {
setSelectedMicrophone(deviceId);
localStorage.setItem("lf_selected_microphone", deviceId);
};
useEffect(() => {
setOpenaiApiKey(userOpenaiApiKey ?? "");
}, [userOpenaiApiKey]);
useEffect(() => {
setElevenLabsApiKey(userElevenLabsApiKey ?? "");
if (!userElevenLabsApiKey) {
handleSetVoice(openaiVoices[0].value);
setAllVoices(openaiVoices);
return;
}
refetch();
}, [userElevenLabsApiKey]);
useEffect(() => {
if (!hasOpenAIAPIKey) {
setOpen(true);
}
}, [hasOpenAIAPIKey]);
const handleSetLanguage = (value: string) => {
setCurrentLanguage(value);
localStorage.setItem("lf_preferred_language", value);
if (setLanguage) {
setLanguage(value);
}
};
useEffect(() => {
if (language) {
setCurrentLanguage(language);
}
}, [language]);
const handleClickSaveApiKey = (value: string) => {
if (!value) return;
if (value === "OPENAI_API_KEY") {
setIsEditingOpenAIKey(false);
return;
}
handleClickSaveOpenAIApiKey(value);
saveButtonClicked.current = true;
};
const handleOpenAIKeyChange = (value: string) => {
if (!value) return;
setOpenaiApiKey(value);
};
useEffect(() => {
setOpenaiApiKey("");
}, [isEditingOpenAIKey]);
useEffect(() => {
if (!open) {
setIsEditingOpenAIKey(false);
}
}, [open]);
const showAddOpenAIKeyButton = !hasOpenAIAPIKey || isEditingOpenAIKey;
const showAllSettings = hasOpenAIAPIKey && !isEditingOpenAIKey;
return (
<>
<DropdownMenu open={open} onOpenChange={onOpenChangeDropdownMenu}>
<DropdownMenuTrigger data-dropdown-trigger="true">
{children}
</DropdownMenuTrigger>
<DropdownMenuContent
className="w-[324px] rounded-xl shadow-lg"
sideOffset={18}
alignOffset={-54}
align="end"
>
<div ref={popupRef} className="rounded-3xl">
<div>
<AudioSettingsHeader />
<Separator className="w-full" />
<div className="w-full space-y-4 p-4">
<div className="grid w-full items-center gap-2">
<span className="flex items-center text-sm">
OpenAI API Key
<span className="ml-1 text-destructive">*</span>
<ShadTooltip content="OpenAI API key is required to use the voice assistant.">
<div>
<IconComponent
name="Info"
strokeWidth={2}
className="relative -top-[3px] left-1 h-[14px] w-[14px] text-placeholder"
/>
</div>
</ShadTooltip>
</span>
{showAddOpenAIKeyButton && (
<>
<InputComponent
isObjectOption={false}
password={false}
nodeStyle
popoverWidth="16rem"
placeholder={getPlaceholder(
false,
"Enter your OpenAI API key",
)}
id="openai-api-key"
options={
globalVariables ?? []
}
optionsPlaceholder={"Global Variables"}
optionsIcon="Globe"
optionsButton={<GeneralGlobalVariableModal />}
optionButton={(option) => (
<GeneralDeleteConfirmationModal
option={option}
onConfirmDelete={() => {}}
/>
)}
value={openaiApiKey}
onChange={handleOpenAIKeyChange}
selectedOption={
checkIfGlobalVariableExists(openaiApiKey)
? openaiApiKey
: ""
}
commandWidth="11rem"
/>
</>
)}
{showAllSettings && (
<>
<Button
variant="primary"
className="w-full"
onClick={() => setIsEditingOpenAIKey(true)}
size="md"
>
Edit
</Button>
</>
)}
</div>
{!showAllSettings && (
<div className="flex gap-2">
<Button
onClick={() => setIsEditingOpenAIKey(false)}
variant="primary"
size="md"
className="w-full"
data-testid="voice-assistant-settings-modal-cancel-button"
>
Cancel
</Button>
<Button
onClick={() => handleClickSaveApiKey(openaiApiKey)}
className="w-full"
disabled={!openaiApiKey}
size="md"
data-testid="voice-assistant-settings-modal-save-button"
>
{isEditingOpenAIKey ? "Update" : "Save"}
</Button>
</div>
)}
{showAllSettings && (
<>
<div className="grid w-full items-center gap-2">
<span className="flex items-center text-sm">
ElevenLabs API Key
<ShadTooltip content="If you have an ElevenLabs API key, you can select ElevenLabs voices.">
<div>
<IconComponent
name="Info"
strokeWidth={2}
className="relative -top-[3px] left-1 h-[14px] w-[14px] text-placeholder"
/>
</div>
</ShadTooltip>
</span>
<InputComponent
isObjectOption={false}
password
nodeStyle
popoverWidth="16rem"
placeholder={getPlaceholder(
false,
"Enter your ElevenLabs API key",
)}
id="eleven-labs-api-key"
options={
globalVariables ?? []
}
optionsPlaceholder={"Global Variables"}
optionsIcon="Globe"
optionsButton={<GeneralGlobalVariableModal />}
optionButton={(option) => (
<GeneralDeleteConfirmationModal
option={option}
onConfirmDelete={() => {}}
/>
)}
value={elevenLabsApiKey}
onChange={(value) => {
setElevenLabsApiKey(value);
}}
selectedOption={
checkIfGlobalVariableExists(elevenLabsApiKey)
? elevenLabsApiKey
: ""
}
setSelectedOption={setElevenLabsApiKey}
commandWidth="11rem"
blockAddNewGlobalVariable
/>
</div>
<VoiceSelect
voice={voice}
handleSetVoice={handleSetVoice}
allVoices={allVoices}
/>
<MicrophoneSelect
selectedMicrophone={selectedMicrophone}
handleSetMicrophone={handleSetMicrophone}
microphones={microphones}
setMicrophones={setMicrophones}
setSelectedMicrophone={setSelectedMicrophone}
/>
<LanguageSelect
language={currentLanguage}
handleSetLanguage={handleSetLanguage}
allLanguages={ALL_LANGUAGES}
/>
</>
)}
</div>
</div>
</div>
</DropdownMenuContent>
</DropdownMenu>
</>
);
};
export default SettingsVoiceModal;
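`handleSetVoice` above decides the persisted provider by membership in the built-in OpenAI voice list: known OpenAI voices are stored as `provider: "openai"`, everything else is assumed to be an ElevenLabs voice. A small sketch of that resolution (the helper is illustrative; the returned shape matches what the component serializes to the `lf_audio_settings_playground` localStorage key):

```typescript
// Resolve which provider a selected voice belongs to, mirroring the
// membership check in handleSetVoice.
type AudioSettings = { provider: "openai" | "elevenlabs"; voice: string };

function resolveAudioSettings(
  voice: string,
  openaiVoices: { name: string; value: string }[],
): AudioSettings {
  const isOpenAi = openaiVoices.some((v) => v.value === voice);
  return { provider: isOpenAi ? "openai" : "elevenlabs", voice };
}
```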

View file

@@ -0,0 +1,27 @@
import React from "react";
import IconComponent from "../../../../../../../../../../components/common/genericIconComponent";
import { ICON_STROKE_WIDTH } from "../../../../../../../../../../constants/constants";
const AudioSettingsHeader = () => {
return (
<div
className="grid gap-1 p-4"
data-testid="voice-assistant-settings-modal-header"
>
<p className="flex items-center gap-2 text-sm text-primary">
<IconComponent
name="Settings"
strokeWidth={ICON_STROKE_WIDTH}
className="h-4 w-4 text-muted-foreground hover:text-foreground"
/>
Voice settings
</p>
<p className="text-[13px] leading-4 text-muted-foreground">
Voice chat is powered by OpenAI. You can also add more voices with
ElevenLabs.
</p>
</div>
);
};
export default AudioSettingsHeader;

@ -0,0 +1,58 @@
import IconComponent from "../../../../../../../../../../components/common/genericIconComponent";
import ShadTooltip from "../../../../../../../../../../components/common/shadTooltipComponent";
import {
Select,
SelectContent,
SelectGroup,
SelectItem,
SelectTrigger,
SelectValue,
} from "../../../../../../../../../../components/ui/select";
interface LanguageSelectProps {
language: string;
handleSetLanguage: (value: string) => void;
allLanguages: { value: string; name: string }[];
}
const LanguageSelect = ({
language,
handleSetLanguage,
allLanguages,
}: LanguageSelectProps) => {
return (
<div className="grid w-full items-center gap-2">
<span className="flex w-full items-center text-sm">
Preferred Language
<ShadTooltip content="Select the language for speech recognition">
<div>
<IconComponent
name="Info"
strokeWidth={2}
className="relative -top-[3px] left-1 h-[14px] w-[14px] text-placeholder"
/>
</div>
</ShadTooltip>
</span>
<Select value={language} onValueChange={handleSetLanguage}>
<SelectTrigger className="h-9 w-full">
<SelectValue placeholder="Select language" />
</SelectTrigger>
<SelectContent className="max-h-[200px]">
<SelectGroup>
{allLanguages.map((lang) => (
<SelectItem key={lang?.value} value={lang?.value}>
<div className="max-w-[220px] truncate text-left">
{lang?.name}
</div>
</SelectItem>
))}
</SelectGroup>
</SelectContent>
</Select>
</div>
);
};
export default LanguageSelect;

@ -0,0 +1,115 @@
import { useEffect } from "react";
import IconComponent from "../../../../../../../../../../components/common/genericIconComponent";
import ShadTooltip from "../../../../../../../../../../components/common/shadTooltipComponent";
import {
Select,
SelectContent,
SelectGroup,
SelectItem,
SelectTrigger,
SelectValue,
} from "../../../../../../../../../../components/ui/select";
interface MicrophoneSelectProps {
selectedMicrophone: string;
handleSetMicrophone: (value: string) => void;
microphones: MediaDeviceInfo[];
setMicrophones: (microphones: MediaDeviceInfo[]) => void;
setSelectedMicrophone: (microphone: string) => void;
}
const MicrophoneSelect = ({
selectedMicrophone,
handleSetMicrophone,
microphones,
setMicrophones,
setSelectedMicrophone,
}: MicrophoneSelectProps) => {
useEffect(() => {
const getMicrophones = async () => {
try {
await navigator?.mediaDevices?.getUserMedia({ audio: true });
const devices = await navigator?.mediaDevices?.enumerateDevices();
const audioInputDevices = devices?.filter(
(device) => device.kind === "audioinput",
);
setMicrophones(audioInputDevices);
if (audioInputDevices.length > 0 && !selectedMicrophone) {
const savedMicrophoneId = localStorage.getItem(
"lf_selected_microphone",
);
if (
savedMicrophoneId &&
audioInputDevices.some(
(device) => device.deviceId === savedMicrophoneId,
)
) {
setSelectedMicrophone(savedMicrophoneId);
} else {
setSelectedMicrophone(audioInputDevices[0].deviceId);
}
}
} catch (error) {
console.error("Error accessing media devices:", error);
}
};
getMicrophones();
navigator?.mediaDevices?.addEventListener("devicechange", getMicrophones);
return () => {
navigator?.mediaDevices?.removeEventListener(
"devicechange",
getMicrophones,
);
};
}, []);
return (
<div
className="grid w-full items-center gap-2"
data-testid="voice-assistant-settings-modal-microphone-select"
>
<span className="flex w-full items-center text-sm">
Audio Input
<ShadTooltip content="Select which microphone to use for voice input">
<div>
<IconComponent
name="Info"
strokeWidth={2}
className="relative -top-[3px] left-1 h-[14px] w-[14px] text-placeholder"
/>
</div>
</ShadTooltip>
</span>
<Select value={selectedMicrophone} onValueChange={handleSetMicrophone}>
<SelectTrigger className="h-9 w-full">
<SelectValue placeholder="Select microphone" />
</SelectTrigger>
<SelectContent className="max-h-[200px]">
<SelectGroup>
{microphones?.map((device) => (
<SelectItem key={device?.deviceId} value={device?.deviceId}>
<div className="max-w-[220px] truncate text-left">
{device?.label ||
`Microphone ${device?.deviceId?.slice(0, 5)}...`}
</div>
</SelectItem>
))}
{microphones?.length === 0 && (
<SelectItem value="no-microphones" disabled>
No microphones found
</SelectItem>
)}
</SelectGroup>
</SelectContent>
</Select>
</div>
);
};
export default MicrophoneSelect;

@ -0,0 +1,59 @@
import IconComponent from "../../../../../../../../../../components/common/genericIconComponent";
import ShadTooltip from "../../../../../../../../../../components/common/shadTooltipComponent";
import {
Select,
SelectContent,
SelectGroup,
SelectItem,
SelectTrigger,
SelectValue,
} from "../../../../../../../../../../components/ui/select";
import { toTitleCase } from "../../../../../../../../../../utils/utils";
interface VoiceSelectProps {
voice: string;
handleSetVoice: (value: string) => void;
allVoices: { value: string; name: string }[];
}
const VoiceSelect = ({
voice,
handleSetVoice,
allVoices,
}: VoiceSelectProps) => {
return (
<div className="grid w-full items-center gap-2">
<span className="flex w-full items-center text-sm">
Voice
<ShadTooltip content="You can select ElevenLabs voices if you have an ElevenLabs API key. Otherwise, you can only select OpenAI voices.">
<div>
<IconComponent
name="Info"
strokeWidth={2}
className="relative -top-[3px] left-1 h-[14px] w-[14px] text-placeholder"
/>
</div>
</ShadTooltip>
</span>
<Select value={voice} onValueChange={handleSetVoice}>
<SelectTrigger className="h-9 w-full">
<SelectValue placeholder="Select" />
</SelectTrigger>
<SelectContent className="max-h-[200px]">
<SelectGroup>
{allVoices?.map((voice) => (
<SelectItem value={voice?.value} key={voice?.value}>
<div className="max-w-[220px] truncate text-left">
{toTitleCase(voice?.name)}
</div>
</SelectItem>
))}
</SelectGroup>
</SelectContent>
</Select>
</div>
);
};
export default VoiceSelect;

@ -0,0 +1,35 @@
import ForwardedIconComponent from "@/components/common/genericIconComponent";
import ShadTooltip from "@/components/common/shadTooltipComponent";
import { Button } from "@/components/ui/button";
interface SettingsVoiceButtonProps {
isRecording: boolean;
setShowSettingsModal: (value: boolean) => void;
}
const SettingsVoiceButton = ({
isRecording,
setShowSettingsModal,
}: SettingsVoiceButtonProps) => {
return (
<>
<ShadTooltip content="Audio Settings" side="top">
<div>
<Button
className="btn-playground-actions cursor-pointer text-muted-foreground hover:text-primary"
unstyled
disabled={isRecording}
onClick={() => setShowSettingsModal(true)}
>
<ForwardedIconComponent
className="h-[18px] w-[18px]"
name="Wrench"
/>
</Button>
</div>
</ShadTooltip>
</>
);
};
export default SettingsVoiceButton;

@ -0,0 +1,32 @@
import ForwardedIconComponent from "@/components/common/genericIconComponent";
import { Button } from "@/components/ui/button";
import { ICON_STROKE_WIDTH } from "@/constants/constants";
interface VoiceButtonProps {
toggleRecording: () => void;
}
const VoiceButton = ({ toggleRecording }: VoiceButtonProps) => {
return (
<>
<div>
<Button
onClick={toggleRecording}
className="btn-playground-actions group"
unstyled
data-testid="voice-button"
>
<ForwardedIconComponent
className="icon-size text-muted-foreground group-hover:text-primary"
name="Mic"
strokeWidth={ICON_STROKE_WIDTH}
/>
</Button>
</div>
</>
);
};
export default VoiceButton;

@ -0,0 +1,90 @@
class StreamProcessor extends AudioWorkletProcessor {
constructor(options) {
super();
this.bufferSize = 4096;
this.buffer = new Float32Array(this.bufferSize);
this.bufferIndex = 0;
// Increase threshold for much less sensitivity
this.noiseThreshold = 0.15;
// Require more consecutive frames above threshold to trigger speech detection
this.activationThreshold = 8;
this.silenceFrameCount = 0;
this.activationCount = 0;
this.isSpeaking = false;
this.port.onmessage = this.handleMessage.bind(this);
}
handleMessage(event) {
if (event.data.type === "updateNoiseGate") {
this.noiseThreshold = event.data.threshold;
}
}
calculateRMS(buffer) {
let sum = 0;
for (let i = 0; i < buffer.length; i++) {
sum += buffer[i] * buffer[i];
}
return Math.sqrt(sum / buffer.length);
}
process(inputs, outputs) {
const input = inputs[0];
if (!input || !input.length) return true;
const channel = input[0];
// Calculate RMS volume
const rms = this.calculateRMS(channel);
// Scale the RMS value to match the scale used in use-bar-controls.ts
// This makes the threshold more comparable to the "3" value
const scaledRMS = rms * 10;
// Use the scaled value against a fixed threshold of 3; note that
// this.noiseThreshold (and the updateNoiseGate message) is not consulted here
const isSilent = scaledRMS < 3;
// Voice activity detection logic with stricter requirements
if (isSilent) {
this.activationCount = 0;
this.silenceFrameCount++;
// Require more silent frames before deciding speech has ended
if (this.silenceFrameCount > 20 && this.isSpeaking) {
this.isSpeaking = false;
}
} else {
this.silenceFrameCount = 0;
if (!this.isSpeaking) {
// Require multiple consecutive frames above threshold to start speech
this.activationCount++;
if (this.activationCount >= this.activationThreshold) {
this.isSpeaking = true;
}
}
}
// Fill buffer with audio data
for (let i = 0; i < channel.length; i++) {
if (this.bufferIndex < this.bufferSize) {
// Apply noise gate - zero out audio when not speaking
this.buffer[this.bufferIndex++] = !this.isSpeaking ? 0 : channel[i];
}
}
// When buffer is full, send it to the main thread
if (this.bufferIndex >= this.bufferSize) {
const audioData = this.buffer.slice(0);
this.port.postMessage({
type: "input",
audio: audioData,
isSilent: !this.isSpeaking,
volume: scaledRMS, // Send scaled volume for consistency
});
this.bufferIndex = 0;
}
return true;
}
}
registerProcessor("stream_processor", StreamProcessor);
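For reference, the silence check above can be reproduced as a standalone sketch (the ×10 scaling and fixed threshold of 3 mirror the worklet's constants; the buffer values below are illustrative):

```typescript
// Minimal sketch of the worklet's RMS-based silence check.
function calculateRMS(buffer: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < buffer.length; i++) {
    sum += buffer[i] * buffer[i];
  }
  return Math.sqrt(sum / buffer.length);
}

// Mirrors `scaledRMS < 3` in the processor: RMS is scaled by 10 so the
// threshold is comparable to the "3" used in use-bar-controls.ts.
function isSilent(buffer: Float32Array): boolean {
  return calculateRMS(buffer) * 10 < 3;
}

const quiet = new Float32Array(128).fill(0.01); // scaled RMS = 0.1 -> silent
const loud = new Float32Array(128).fill(0.5); // scaled RMS = 5 -> speech
```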

@ -0,0 +1,13 @@
import { getLocalStorage, setLocalStorage } from "@/utils/local-storage-util";
export const checkProvider = () => {
const audioSettings = JSON.parse(
getLocalStorage("lf_audio_settings_playground") || "{}",
);
if (!audioSettings.provider) {
setLocalStorage(
"lf_audio_settings_playground",
JSON.stringify({ provider: "openai", voice: "alloy" }),
);
}
};

@ -0,0 +1,5 @@
export const formatTime = (timeInSeconds: number): string => {
const minutes = Math.floor(timeInSeconds / 60);
const seconds = Math.floor(timeInSeconds % 60);
return `${minutes.toString().padStart(2, "0")}:${seconds.toString().padStart(2, "0")}s`;
};
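As a quick check of the zero-padding behaviour (the helper is reproduced here so the snippet is self-contained):

```typescript
// Same formatter as above: MM:SS with a trailing "s", both fields zero-padded.
const formatTime = (timeInSeconds: number): string => {
  const minutes = Math.floor(timeInSeconds / 60);
  const seconds = Math.floor(timeInSeconds % 60);
  return `${minutes.toString().padStart(2, "0")}:${seconds.toString().padStart(2, "0")}s`;
};

// formatTime(0) and formatTime(75) render as "00:00s" and "01:15s".
```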

@ -0,0 +1,91 @@
export const workletCode = `
class StreamProcessor extends AudioWorkletProcessor {
constructor() {
super();
// adjust this to change the buffer size; 24000 samples = one second at 24 kHz
// this.inputBuffer = new Float32Array(24000);
this.inputBuffer = new Float32Array(128);
this.inputOffset = 0;
this.outputBuffers = [];
this.isPlaying = false;
this.port.onmessage = (event) => {
if (event.data.type === 'playback') {
this.outputBuffers.push(event.data.audio);
this.isPlaying = true;
}
else if (event.data.type === 'stop_playback') {
// Immediately stop playback and clear any queued audio
this.outputBuffers = [];
this.isPlaying = false;
// Optionally notify main thread if you want
this.port.postMessage({ type: 'done' });
}
};
}
process(inputs, outputs, parameters) {
const input = inputs[0];
if (input && input.length > 0) {
const inputData = input[0];
for (let i = 0; i < inputData.length; i++) {
this.inputBuffer[this.inputOffset++] = inputData[i];
if (this.inputOffset >= this.inputBuffer.length) {
const outputData = new Int16Array(this.inputBuffer.length);
for (let j = 0; j < this.inputBuffer.length; j++) {
outputData[j] = Math.max(-1, Math.min(1, this.inputBuffer[j])) * 0x7FFF;
}
this.port.postMessage({
type: 'input',
audio: outputData
});
// adjust this to change the buffer size; 24000 samples = one second at 24 kHz
// this.inputBuffer = new Float32Array(24000);
this.inputBuffer = new Float32Array(128);
this.inputOffset = 0;
}
}
}
const output = outputs[0];
if (output && output.length > 0 && this.isPlaying) {
if (this.outputBuffers.length > 0) {
const currentBuffer = this.outputBuffers[0];
const chunkSize = Math.min(output[0].length, currentBuffer.length);
const gain = 0.8;
for (let channel = 0; channel < output.length; channel++) {
const outputChannel = output[channel];
for (let i = 0; i < chunkSize; i++) {
outputChannel[i] = currentBuffer[i] * gain;
}
}
if (chunkSize === currentBuffer.length) {
this.outputBuffers.shift();
} else {
this.outputBuffers[0] = currentBuffer.slice(chunkSize);
}
}
if (this.outputBuffers.length === 0) {
this.isPlaying = false;
this.port.postMessage({ type: 'done' });
}
}
return true;
}
}
try {
registerProcessor('stream_processor', StreamProcessor);
} catch (e) {
// Check for registration error without relying on DOMException
if (e && e.message && e.message.includes('is already registered')) {
// Processor already registered, ignore the error
} else {
throw e;
}
}
`;

@ -0,0 +1,17 @@
export function base64ToFloat32Array(base64String: string): Float32Array {
const binaryString = atob(base64String);
const pcmData = new Int16Array(binaryString.length / 2);
for (let i = 0; i < binaryString.length; i += 2) {
const lsb = binaryString.charCodeAt(i);
const msb = binaryString.charCodeAt(i + 1);
pcmData[i / 2] = (msb << 8) | lsb;
}
const float32Data = new Float32Array(pcmData.length);
for (let i = 0; i < pcmData.length; i++) {
float32Data[i] = pcmData[i] / 32768.0;
}
return float32Data;
}
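The decoder assumes little-endian PCM16, matching the Float32 → Int16 clamp-and-scale done in the recording worklet. A hypothetical inverse (`float32ArrayToBase64` is not part of this PR; it is shown only to illustrate the encoding the decoder expects) round-trips within one quantization step:

```typescript
// Decoder from the PR: base64 little-endian PCM16 -> Float32 in [-1, 1).
function base64ToFloat32Array(base64String: string): Float32Array {
  const binaryString = atob(base64String);
  const pcmData = new Int16Array(binaryString.length / 2);
  for (let i = 0; i < binaryString.length; i += 2) {
    const lsb = binaryString.charCodeAt(i);
    const msb = binaryString.charCodeAt(i + 1);
    pcmData[i / 2] = (msb << 8) | lsb;
  }
  const float32Data = new Float32Array(pcmData.length);
  for (let i = 0; i < pcmData.length; i++) {
    float32Data[i] = pcmData[i] / 32768.0;
  }
  return float32Data;
}

// Hypothetical inverse, mirroring the worklet's clamp-and-scale by 0x7FFF.
// Relies on the engine being little-endian (true of all mainstream JS engines),
// so the Uint8Array view yields (lsb, msb) pairs as the decoder expects.
function float32ArrayToBase64(samples: Float32Array): string {
  const pcm = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    pcm[i] = Math.max(-1, Math.min(1, samples[i])) * 0x7fff;
  }
  const bytes = new Uint8Array(pcm.buffer);
  let binary = "";
  for (let i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  return btoa(binary);
}
```

(`atob`/`btoa` are globals in browsers and Node.js 16+.)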

@ -0,0 +1,172 @@
import { useEffect, useRef } from "react";
export const useBarControls = (
isRecording: boolean,
setRecordingTime: React.Dispatch<React.SetStateAction<number>>,
setBarHeights: React.Dispatch<React.SetStateAction<number[]>>,
analyserRef?: React.MutableRefObject<AnalyserNode | null>,
setSoundDetected?: (value: boolean) => void,
) => {
const animationFrameRef = useRef<number | null>(null);
const timeDataRef = useRef<Uint8Array | null>(null);
const baseHeightsRef = useRef<number[]>([]);
const lastRandomizeTimeRef = useRef<number>(0);
const minHeightRef = useRef<number>(20);
const analyzerInitializedRef = useRef<boolean>(false);
useEffect(() => {
if (isRecording) {
analyzerInitializedRef.current = false;
if (analyserRef?.current) {
const analyser = analyserRef.current;
analyser.fftSize = 256;
timeDataRef.current = new Uint8Array(analyser.fftSize);
analyzerInitializedRef.current = true;
}
const interval = setInterval(() => {
setRecordingTime((prev) => prev + 1);
}, 1000);
return () => clearInterval(interval);
} else {
setBarHeights(Array(30).fill(minHeightRef.current));
if (setSoundDetected) setSoundDetected(false);
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
animationFrameRef.current = null;
}
timeDataRef.current = null;
}
}, [
isRecording,
setRecordingTime,
setBarHeights,
setSoundDetected,
analyserRef,
]);
useEffect(() => {
const staticHeights = Array(30)
.fill(0)
.map((_, i) => {
const position = i / 30;
const height = 50 + Math.sin(position * Math.PI) * 30;
return Math.max(minHeightRef.current, Math.min(80, height));
});
setBarHeights(staticHeights);
baseHeightsRef.current = staticHeights;
}, [setBarHeights]);
useEffect(() => {
if (
analyserRef?.current &&
!analyzerInitializedRef.current &&
isRecording
) {
const analyser = analyserRef.current;
analyser.fftSize = 256;
timeDataRef.current = new Uint8Array(analyser.fftSize);
analyzerInitializedRef.current = true;
}
}, [analyserRef?.current, isRecording]);
useEffect(() => {
if (!isRecording) return;
if (
analyserRef?.current &&
(!timeDataRef.current || !analyzerInitializedRef.current)
) {
const analyser = analyserRef.current;
analyser.fftSize = 256;
timeDataRef.current = new Uint8Array(analyser.fftSize);
analyzerInitializedRef.current = true;
}
const animate = (timestamp: number) => {
let soundDetected = false;
let scaledVolume = 0;
if (analyserRef?.current && timeDataRef.current) {
try {
const analyser = analyserRef.current;
if (timeDataRef.current.length !== analyser.fftSize) {
timeDataRef.current = new Uint8Array(analyser.fftSize);
}
analyser.getByteTimeDomainData(timeDataRef.current);
let sum = 0;
let max = 0;
for (let i = 0; i < timeDataRef.current.length; i++) {
const deviation = Math.abs(timeDataRef.current[i] - 128);
sum += deviation;
max = Math.max(max, deviation);
}
const volumeLevel =
(sum / (timeDataRef.current.length * 128)) * 0.5 +
(max / 128) * 0.5;
scaledVolume = volumeLevel * 10;
soundDetected = scaledVolume > 0.3;
if (setSoundDetected) {
setSoundDetected(soundDetected);
}
} catch (error) {
console.error("Error detecting sound:", error);
if (setSoundDetected) {
setSoundDetected(false);
}
}
} else {
if (setSoundDetected) {
setSoundDetected(false);
}
}
const shouldRandomize =
soundDetected && timestamp - lastRandomizeTimeRef.current > 100;
if (shouldRandomize) {
lastRandomizeTimeRef.current = timestamp;
}
setBarHeights((prevHeights) => {
return prevHeights.map((height, index) => {
if (soundDetected) {
const baseHeight = baseHeightsRef.current[index] || 50;
const volumeFactor = 1.0 + Math.min(1.5, scaledVolume);
const randomFactor = shouldRandomize
? 0.7 + Math.random() * 0.6
: 0.85 + Math.random() * 0.3;
const newHeight = baseHeight * volumeFactor * randomFactor;
return Math.max(minHeightRef.current, Math.min(120, newHeight));
} else {
return height + (minHeightRef.current - height) * 0.2;
}
});
});
animationFrameRef.current = requestAnimationFrame(animate);
};
animationFrameRef.current = requestAnimationFrame(animate);
return () => {
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
animationFrameRef.current = null;
}
};
}, [isRecording, analyserRef, setSoundDetected, setBarHeights]);
};
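The hook's sound detection blends the mean and peak deviation from the 128 midpoint of `getByteTimeDomainData` output, then scales by 10 and compares against 0.3. Extracted as a standalone sketch (the sample buffers are illustrative):

```typescript
// Minimal sketch of the volume scoring used by the bar animation:
// 50/50 blend of mean and peak deviation from the 128 midpoint,
// scaled by 10. A score above 0.3 counts as "sound detected".
function scoreVolume(timeData: Uint8Array): number {
  let sum = 0;
  let max = 0;
  for (let i = 0; i < timeData.length; i++) {
    const deviation = Math.abs(timeData[i] - 128);
    sum += deviation;
    max = Math.max(max, deviation);
  }
  const volumeLevel =
    (sum / (timeData.length * 128)) * 0.5 + (max / 128) * 0.5;
  return volumeLevel * 10;
}

// Silence sits exactly at the midpoint; a half-amplitude square wave
// deviates by 64 on every sample.
const silence = new Uint8Array(256).fill(128);
const tone = new Uint8Array(256).map((_, i) => 128 + (i % 2 ? 64 : -64));
```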

@ -0,0 +1,140 @@
import { BuildStatus } from "@/constants/enums";
import { base64ToFloat32Array } from "../helpers/utils";
export const useHandleWebsocketMessage = (
event: MessageEvent,
interruptPlayback: () => void,
audioContextRef: React.MutableRefObject<AudioContext | null>,
audioQueueRef: React.MutableRefObject<AudioBuffer[]>,
isPlayingRef: React.MutableRefObject<boolean>,
playNextAudioChunk: () => void,
setIsBuilding: (isBuilding: boolean) => void,
revertBuiltStatusFromBuilding: () => void,
clearEdgesRunningByNodes: () => void,
setMessage: React.Dispatch<React.SetStateAction<string>>,
edges,
setStatus: React.Dispatch<React.SetStateAction<string>>,
messagesStore,
setEdges,
addDataToFlowPool: (data: any, nodeId: string) => void,
updateEdgesRunningByNodes: (nodeIds: string[], isRunning: boolean) => void,
updateBuildStatus: (nodeIds: string[], status: BuildStatus) => void,
hasOpenAIAPIKey: boolean,
showErrorAlert: (title: string, list: string[]) => void,
) => {
const data = JSON.parse(event.data);
switch (data.type) {
case "response.content_part.added":
if (data.part?.type === "text" && data.part.text) {
setMessage((prev) => prev + data.part.text);
}
break;
case "response.done":
if (data.response?.status_details?.error?.code) {
const errorCode =
data.response?.status_details?.error?.code?.replaceAll("_", " ");
setStatus(`API key error: ${errorCode}`);
showErrorAlert("API key error: " + errorCode, [
"Please check your API key and try again",
]);
}
break;
case "response.cancelled":
interruptPlayback();
break;
case "response.audio.delta":
if (data.delta && audioContextRef.current) {
try {
const float32Data = base64ToFloat32Array(data.delta);
const audioBuffer = audioContextRef.current.createBuffer(
2,
float32Data.length,
24000,
);
audioBuffer.copyToChannel(float32Data, 0);
audioBuffer.copyToChannel(float32Data, 1);
audioQueueRef.current.push(audioBuffer);
if (!isPlayingRef.current) {
playNextAudioChunk();
}
} catch (error) {
console.error("Error processing audio response:", error);
}
}
break;
case "flow.build.progress": {
const buildData = data.data;
switch (buildData.event) {
case "start":
setIsBuilding(true);
break;
case "start_vertex": {
updateBuildStatus([buildData.vertex_id], BuildStatus.BUILDING);
const newEdges = edges.map((edge) => {
if (buildData.vertex_id === edge.data?.targetHandle?.id) {
edge.animated = true;
edge.className = "running";
}
return edge;
});
setEdges(newEdges);
break;
}
case "end_vertex":
updateBuildStatus([buildData.vertex_id], BuildStatus.BUILT);
addDataToFlowPool(
{
...buildData.data.build_data,
run_id: buildData.run_id,
id: buildData.vertex_id,
valid: true,
},
buildData.vertex_id,
);
updateEdgesRunningByNodes([buildData.vertex_id], false);
break;
case "error":
updateBuildStatus([buildData.vertex_id], BuildStatus.ERROR);
updateEdgesRunningByNodes([buildData.vertex_id], false);
break;
case "end":
setIsBuilding(false);
revertBuiltStatusFromBuilding();
clearEdgesRunningByNodes();
break;
case "add_message":
messagesStore.addMessage(buildData.data);
break;
}
break;
}
case "error":
if (data.code === "api_key_missing") {
setStatus("Error: API key is missing");
showErrorAlert("API key not valid", [
"Please check your API key and try again",
]);
return;
}
if (data.error?.message?.toLowerCase().includes("api key")) {
setStatus("Error: API key is missing");
showErrorAlert("API key not valid", [
"Please check your API key and try again",
]);
return;
}
data.error?.message === "Cancellation failed: no active response found"
? interruptPlayback()
: setStatus("Error: " + (data.error?.message ?? "Unknown error"));
break;
}
};

@ -0,0 +1,29 @@
import { MutableRefObject } from "react";
export const useInitializeAudio = async (
audioContextRef: MutableRefObject<AudioContext | null>,
setStatus: (status: string) => void,
startConversation: () => void,
): Promise<void> => {
try {
if (audioContextRef.current?.state === "closed") {
audioContextRef.current = null;
}
if (!audioContextRef.current) {
audioContextRef.current = new (window.AudioContext ||
(window as any).webkitAudioContext)({
sampleRate: 24000,
});
}
if (audioContextRef.current.state === "suspended") {
await audioContextRef.current.resume();
}
startConversation();
} catch (error) {
console.error("Failed to initialize audio:", error);
setStatus("Error: Failed to initialize audio");
}
};

@ -0,0 +1,13 @@
import { MutableRefObject } from "react";
export const useInterruptPlayback = (
audioQueueRef: MutableRefObject<AudioBuffer[]>,
isPlayingRef: MutableRefObject<boolean>,
processorRef: MutableRefObject<AudioWorkletNode | null>,
) => {
audioQueueRef.current.splice(0, audioQueueRef.current.length);
isPlayingRef.current = false;
if (processorRef.current) {
processorRef.current.port.postMessage({ type: "stop_playback" });
}
};

@ -0,0 +1,27 @@
import { MutableRefObject } from "react";
export const usePlayNextAudioChunk = (
audioQueueRef: MutableRefObject<AudioBuffer[]>,
isPlayingRef: MutableRefObject<boolean>,
processorRef: MutableRefObject<AudioWorkletNode | null>,
) => {
if (audioQueueRef.current.length === 0) {
isPlayingRef.current = false;
return;
}
isPlayingRef.current = true;
const audioBuffer = audioQueueRef.current.shift();
if (audioBuffer && processorRef.current) {
try {
processorRef.current.port.postMessage({
type: "playback",
audio: audioBuffer.getChannelData(0),
});
} catch (error) {
console.error("Error playing audio:", error);
isPlayingRef.current = false;
}
}
};

@ -0,0 +1,80 @@
import { getLocalStorage } from "@/utils/local-storage-util";
export const useStartConversation = (
flowId: string,
wsRef: React.MutableRefObject<WebSocket | null>,
setStatus: (status: string) => void,
startRecording: () => void,
handleWebSocketMessage: (event: MessageEvent) => void,
stopRecording: () => void,
currentSessionId: string,
) => {
const currentHost = window.location.hostname;
// Omit the port segment entirely on default ports, where location.port is ""
const currentPort = window.location.port ? `:${window.location.port}` : "";
const protocol = window.location.protocol === "https:" ? "wss:" : "ws:";
const url = `${protocol}//${currentHost}${currentPort}/api/v1/voice/ws/flow_as_tool/${flowId}/${currentSessionId}`;
try {
if (wsRef.current?.readyState === WebSocket.CONNECTING) {
return;
}
if (wsRef.current?.readyState === WebSocket.OPEN) {
wsRef.current.close();
}
const audioSettings = JSON.parse(
getLocalStorage("lf_audio_settings_playground") || "{}",
);
wsRef.current = new WebSocket(url);
wsRef.current.onopen = () => {
setStatus("Connected");
if (wsRef.current?.readyState === WebSocket.OPEN) {
wsRef.current.send(
JSON.stringify({
type: "langflow.elevenlabs.config",
enabled: audioSettings.provider === "elevenlabs",
voice_id:
audioSettings.provider === "elevenlabs"
? audioSettings.voice
: "",
}),
);
if (audioSettings.provider !== "elevenlabs") {
wsRef.current.send(
JSON.stringify({
type: "session.update",
session: {
voice: audioSettings.voice,
},
}),
);
}
startRecording();
}
};
wsRef.current.onmessage = handleWebSocketMessage;
wsRef.current.onclose = (event) => {
if (event.code !== 1000) {
// 1000 is normal closure
console.warn(`WebSocket closed with code ${event.code}`);
}
setStatus(`Disconnected (${event.code})`);
stopRecording();
};
wsRef.current.onerror = (error) => {
console.error("WebSocket Error:", error);
setStatus("Connection error");
stopRecording();
};
} catch (error) {
console.error("Failed to create WebSocket:", error);
setStatus("Connection failed");
stopRecording();
}
};

@ -0,0 +1,103 @@
import { MutableRefObject } from "react";
export const useStartRecording = async (
audioContextRef: MutableRefObject<AudioContext | null>,
microphoneRef: MutableRefObject<MediaStreamAudioSourceNode | null>,
analyserRef: MutableRefObject<AnalyserNode | null>,
wsRef: MutableRefObject<WebSocket | null>,
setIsRecording: (isRecording: boolean) => void,
playNextAudioChunk: () => void,
isPlayingRef: MutableRefObject<boolean>,
audioQueueRef: MutableRefObject<AudioBuffer[]>,
workletCode: string,
processorRef: MutableRefObject<AudioWorkletNode | null>,
setStatus: (status: string) => void,
) => {
try {
const selectedMicrophone = localStorage.getItem("lf_selected_microphone");
const preferredLanguage =
localStorage.getItem("lf_preferred_language") || "en-US";
const stream = await navigator?.mediaDevices?.getUserMedia({
audio: {
noiseSuppression: true,
echoCancellation: true,
autoGainControl: true,
sampleRate: 48000,
deviceId: selectedMicrophone
? { exact: selectedMicrophone }
: undefined,
},
});
if (!audioContextRef.current) return;
microphoneRef.current =
audioContextRef?.current?.createMediaStreamSource(stream);
analyserRef.current = audioContextRef?.current?.createAnalyser();
analyserRef.current.fftSize = 2048;
microphoneRef.current.connect(analyserRef.current);
const blob = new Blob([workletCode], { type: "application/javascript" });
const workletUrl = URL.createObjectURL(blob);
try {
try {
await audioContextRef.current.audioWorklet.addModule(workletUrl);
} catch (err) {
// Check if the error is because the processor is already registered
if (
err instanceof DOMException &&
err.message.includes("already been loaded")
) {
console.log("AudioWorklet module already loaded, continuing...");
} else {
throw err;
}
}
processorRef.current = new AudioWorkletNode(
audioContextRef.current,
"stream_processor",
);
analyserRef.current.connect(processorRef.current);
processorRef.current.connect(audioContextRef.current.destination);
processorRef.current.port.onmessage = (event) => {
if (event.data.type === "input" && event.data.audio && wsRef.current) {
// Forward each captured chunk to the websocket as base64-encoded PCM16
const base64Audio = btoa(
String.fromCharCode.apply(
null,
Array.from(new Uint8Array(event.data.audio.buffer)),
),
);
wsRef.current.send(
JSON.stringify({
type: "input_audio_buffer.append",
audio: base64Audio,
language: preferredLanguage,
}),
);
} else if (event.data.type === "done") {
if (audioQueueRef.current.length > 0) {
playNextAudioChunk();
} else {
isPlayingRef.current = false;
}
}
};
setIsRecording(true);
} catch (err) {
console.error("AudioWorklet failed to load:", err);
setStatus("Error initializing audio: " + (err as Error).message);
} finally {
URL.revokeObjectURL(workletUrl);
}
} catch (err) {
console.error("Error accessing microphone:", err);
setStatus("Error: " + (err as Error).message);
}
};

@ -0,0 +1,25 @@
export const useStopRecording = (
microphoneRef,
processorRef: React.MutableRefObject<AudioWorkletNode | null>,
analyserRef: React.MutableRefObject<AnalyserNode | null>,
wsRef: React.MutableRefObject<WebSocket | null>,
setIsRecording: (isRecording: boolean) => void,
) => {
if (microphoneRef.current) {
microphoneRef.current.disconnect();
microphoneRef.current = null;
}
if (processorRef.current) {
processorRef.current.disconnect();
processorRef.current = null;
}
if (analyserRef.current) {
analyserRef.current.disconnect();
analyserRef.current = null;
}
if (wsRef.current) {
wsRef.current.close();
wsRef.current = null;
}
setIsRecording(false);
};

@ -0,0 +1,482 @@
import ShadTooltip from "@/components/common/shadTooltipComponent";
import { Button } from "@/components/ui/button";
import { ICON_STROKE_WIDTH, SAVE_API_KEY_ALERT } from "@/constants/constants";
import { useGetMessagesPollingMutation } from "@/controllers/API/queries/messages/use-get-messages-polling";
import {
useGetGlobalVariables,
usePatchGlobalVariables,
usePostGlobalVariables,
} from "@/controllers/API/queries/variables";
import useAlertStore from "@/stores/alertStore";
import useFlowStore from "@/stores/flowStore";
import { useGlobalVariablesStore } from "@/stores/globalVariablesStore/globalVariables";
import { useMessagesStore } from "@/stores/messagesStore";
import { useUtilityStore } from "@/stores/utilityStore";
import { useVoiceStore } from "@/stores/voiceStore";
import { cn } from "@/utils/utils";
import { AxiosError } from "axios";
import { useEffect, useMemo, useRef, useState } from "react";
import IconComponent from "../../../../../../../components/common/genericIconComponent";
import SettingsVoiceModal from "./components/audio-settings/audio-settings-dialog";
import { checkProvider } from "./helpers/check-provider";
import { formatTime } from "./helpers/format-time";
import { workletCode } from "./helpers/streamProcessor";
import { useBarControls } from "./hooks/use-bar-controls";
import { useHandleWebsocketMessage } from "./hooks/use-handle-websocket-message";
import { useInitializeAudio } from "./hooks/use-initialize-audio";
import { useInterruptPlayback } from "./hooks/use-interrupt-playback";
import { usePlayNextAudioChunk } from "./hooks/use-play-next-audio-chunk";
import { useStartConversation } from "./hooks/use-start-conversation";
import { useStartRecording } from "./hooks/use-start-recording";
import { useStopRecording } from "./hooks/use-stop-recording";
interface VoiceAssistantProps {
flowId: string;
setShowAudioInput: (value: boolean) => void;
}
export function VoiceAssistant({
flowId,
setShowAudioInput,
}: VoiceAssistantProps) {
const [recordingTime, setRecordingTime] = useState(0);
const [isRecording, setIsRecording] = useState(false);
const [status, setStatus] = useState("");
const [message, setMessage] = useState("");
const [showSettingsModal, setShowSettingsModal] = useState(false);
const [addKey, setAddKey] = useState(false);
const [barHeights, setBarHeights] = useState<number[]>(Array(30).fill(20));
const [preferredLanguage, setPreferredLanguage] = useState(
localStorage.getItem("lf_preferred_language") || "en-US",
);
const [isEditingOpenAIKey, setIsEditingOpenAIKey] = useState<boolean>(false);
const waveformRef = useRef<HTMLDivElement>(null);
const audioContextRef = useRef<AudioContext | null>(null);
const wsRef = useRef<WebSocket | null>(null);
const processorRef = useRef<AudioWorkletNode | null>(null);
const audioQueueRef = useRef<AudioBuffer[]>([]);
const isPlayingRef = useRef(false);
const microphoneRef = useRef<MediaStreamAudioSourceNode | null>(null);
const analyserRef = useRef<AnalyserNode | null>(null);
const soundDetected = useVoiceStore((state) => state.soundDetected);
const setSoundDetected = useVoiceStore((state) => state.setSoundDetected);
const messagesStore = useMessagesStore();
const setIsBuilding = useFlowStore((state) => state.setIsBuilding);
const edges = useFlowStore((state) => state.edges);
const setEdges = useFlowStore((state) => state.setEdges);
const updateBuildStatus = useFlowStore((state) => state.updateBuildStatus);
const addDataToFlowPool = useFlowStore((state) => state.addDataToFlowPool);
const updateEdgesRunningByNodes = useFlowStore(
(state) => state.updateEdgesRunningByNodes,
);
const revertBuiltStatusFromBuilding = useFlowStore(
(state) => state.revertBuiltStatusFromBuilding,
);
const clearEdgesRunningByNodes = useFlowStore(
(state) => state.clearEdgesRunningByNodes,
);
const variables = useGlobalVariablesStore(
(state) => state.globalVariablesEntries,
);
const createVariable = usePostGlobalVariables();
const updateVariable = usePatchGlobalVariables();
const setSuccessData = useAlertStore((state) => state.setSuccessData);
const currentSessionId = useUtilityStore((state) => state.currentSessionId);
const setErrorData = useAlertStore((state) => state.setErrorData);
const { data: globalVariables } = useGetGlobalVariables();
const hasOpenAIAPIKey = useMemo(() => {
return variables?.includes("OPENAI_API_KEY") ?? false;
}, [variables, addKey]);
const hasElevenLabsApiKey = useMemo(() => {
return variables?.includes("ELEVENLABS_API_KEY") ?? false;
}, [variables, addKey]);
const openaiApiKey = useMemo(() => {
return variables?.find((variable) => variable === "OPENAI_API_KEY");
}, [variables, addKey]);
const openaiApiKeyGlobalVariable = useMemo(() => {
return globalVariables?.find(
(variable) => variable.name === "OPENAI_API_KEY",
);
}, [globalVariables]);
const elevenLabsApiKeyGlobalVariable = useMemo(() => {
return globalVariables?.find(
(variable) => variable.name === "ELEVENLABS_API_KEY",
);
}, [globalVariables]);
const hasElevenLabsApiKeyEnv = useMemo(() => {
return Boolean(process.env?.ELEVENLABS_API_KEY);
}, []);
useEffect(() => {
if (!isRecording && hasOpenAIAPIKey && !showSettingsModal) {
setIsRecording(true);
initializeAudio();
} else {
stopRecording();
}
}, []);
const getMessagesMutation = useGetMessagesPollingMutation();
const initializeAudio = async () => {
useInitializeAudio(audioContextRef, setStatus, startConversation);
};
const startRecording = async () => {
useStartRecording(
audioContextRef,
microphoneRef,
analyserRef,
wsRef,
setIsRecording,
playNextAudioChunk,
isPlayingRef,
audioQueueRef,
workletCode,
processorRef,
setStatus,
);
};
const stopRecording = () => {
useStopRecording(
microphoneRef,
processorRef,
analyserRef,
wsRef,
setIsRecording,
);
};
const playNextAudioChunk = () => {
usePlayNextAudioChunk(audioQueueRef, isPlayingRef, processorRef);
};
const handleWebSocketMessage = (event: MessageEvent) => {
useHandleWebsocketMessage(
event,
interruptPlayback,
audioContextRef,
audioQueueRef,
isPlayingRef,
playNextAudioChunk,
setIsBuilding,
revertBuiltStatusFromBuilding,
clearEdgesRunningByNodes,
setMessage,
edges,
setStatus,
messagesStore,
setEdges,
addDataToFlowPool,
updateEdgesRunningByNodes,
updateBuildStatus,
hasOpenAIAPIKey,
showErrorAlert,
);
};
const startConversation = () => {
useStartConversation(
flowId,
wsRef,
setStatus,
startRecording,
handleWebSocketMessage,
stopRecording,
currentSessionId,
);
};
const interruptPlayback = () => {
useInterruptPlayback(audioQueueRef, isPlayingRef, processorRef);
};
useBarControls(
isRecording,
setRecordingTime,
setBarHeights,
analyserRef,
setSoundDetected,
);
const handleGetMessagesMutation = () => {
getMessagesMutation.mutate({
mode: "union",
id: currentSessionId,
});
};
const showErrorAlert = (title: string, list: string[]) => {
setErrorData({
title,
list,
});
setIsRecording(false);
};
const handleSaveApiKey = async (
apiKey: string,
variableName: string,
elevenLabsKey: boolean,
) => {
const updateOpenAiKey =
isEditingOpenAIKey && openaiApiKeyGlobalVariable?.id;
const updateElevenLabsApiKey =
elevenLabsApiKeyGlobalVariable?.id && elevenLabsKey;
if (updateOpenAiKey || updateElevenLabsApiKey) {
await updateVariable.mutateAsync(
{
name: variableName,
value: apiKey,
id: elevenLabsKey
? elevenLabsApiKeyGlobalVariable?.id!
: openaiApiKeyGlobalVariable?.id!,
},
{
onSuccess: () => {
setSuccessData({
title: SAVE_API_KEY_ALERT,
});
setAddKey(!addKey);
setIsEditingOpenAIKey(false);
},
},
);
return;
}
await createVariable.mutateAsync(
{
name: variableName,
value: apiKey,
type: "secret",
default_fields: ["voice_mode"],
},
{
onSuccess: () => {
setSuccessData({
title: SAVE_API_KEY_ALERT,
});
setAddKey(!addKey);
},
},
);
};
useEffect(() => {
checkProvider();
handleGetMessagesMutation();
return () => {
stopRecording();
if (audioContextRef.current) {
audioContextRef.current.close();
audioContextRef.current = null;
}
};
}, []);
const handleCloseAudioInput = () => {
setIsRecording(false);
stopRecording();
setShowAudioInput(false);
};
const handleSetShowSettingsModal = async (
open: boolean,
openaiApiKey: string,
elevenLabsApiKey: string,
) => {
const saveApiKey = openaiApiKey && openaiApiKey !== "OPENAI_API_KEY";
const saveElevenLabsApiKey =
elevenLabsApiKey && elevenLabsApiKey !== "ELEVENLABS_API_KEY";
if (open) {
stopRecording();
if (audioContextRef.current) {
audioContextRef.current.close();
audioContextRef.current = null;
}
setIsRecording(false);
} else {
setRecordingTime(0);
setBarHeights(Array(30).fill(20));
if (hasOpenAIAPIKey) {
if (audioContextRef.current) {
audioContextRef.current.close();
audioContextRef.current = null;
}
analyserRef.current = null;
setTimeout(() => {
initializeAudio();
startRecording();
setIsRecording(true);
}, 100);
}
}
if (saveApiKey) {
await handleSaveApiKey(openaiApiKey, "OPENAI_API_KEY", false);
}
if (saveElevenLabsApiKey && !open) {
await handleSaveApiKey(elevenLabsApiKey, "ELEVENLABS_API_KEY", true);
}
};
const handleToggleRecording = () => {
if (isRecording) {
if (microphoneRef?.current && microphoneRef?.current?.mediaStream) {
microphoneRef.current.mediaStream.getAudioTracks().forEach((track) => {
track.enabled = false;
});
}
setBarHeights(Array(30).fill(20));
setIsRecording(false);
} else {
if (microphoneRef?.current && microphoneRef?.current?.mediaStream) {
microphoneRef.current.mediaStream.getAudioTracks().forEach((track) => {
track.enabled = true;
});
} else {
startRecording();
}
setIsRecording(true);
}
};
useEffect(() => {
if (preferredLanguage) {
localStorage.setItem("lf_preferred_language", preferredLanguage);
}
}, [preferredLanguage]);
const handleClickSaveOpenAIApiKey = async (openaiApiKey: string) => {
await handleSaveApiKey(openaiApiKey, "OPENAI_API_KEY", false);
};
return (
<>
<div
data-testid="voice-assistant-container"
className="mx-auto flex w-full max-w-[324px] items-center justify-center rounded-md border bg-background px-4 py-2 shadow-xl"
>
<div
className={cn(
"flex items-center",
hasOpenAIAPIKey ? "gap-3" : "gap-2",
)}
>
<ShadTooltip
content={isRecording ? "Mute" : "Unmute"}
delayDuration={500}
>
<Button unstyled onClick={handleToggleRecording}>
<IconComponent
name={isRecording ? "Mic" : "MicOff"}
strokeWidth={ICON_STROKE_WIDTH}
className="h-4 w-4 text-placeholder-foreground"
/>
</Button>
</ShadTooltip>
<div
ref={waveformRef}
className="flex h-5 flex-1 items-center justify-center"
>
{barHeights.map((height, index) => (
<div
key={index}
className={cn(
"mx-[1px] w-[2px] rounded-sm transition-all duration-200",
isRecording && soundDetected
? "bg-red-foreground"
: "bg-placeholder-foreground",
)}
style={{ height: `${height}%` }}
/>
))}
</div>
<div className="min-w-[50px] cursor-default text-center font-mono text-sm font-medium text-placeholder-foreground">
{hasOpenAIAPIKey ? formatTime(recordingTime) : "--:--s"}
</div>
<div>
<SettingsVoiceModal
userOpenaiApiKey={openaiApiKey}
userElevenLabsApiKey={elevenLabsApiKeyGlobalVariable?.name}
hasElevenLabsApiKeyEnv={hasElevenLabsApiKeyEnv}
setShowSettingsModal={handleSetShowSettingsModal}
hasOpenAIAPIKey={hasOpenAIAPIKey}
language={preferredLanguage}
setLanguage={setPreferredLanguage}
handleClickSaveOpenAIApiKey={handleClickSaveOpenAIApiKey}
isEditingOpenAIKey={isEditingOpenAIKey}
setIsEditingOpenAIKey={setIsEditingOpenAIKey}
>
{hasOpenAIAPIKey ? (
<>
<Button data-testid="voice-assistant-settings-icon" unstyled>
<IconComponent
name="Settings"
strokeWidth={ICON_STROKE_WIDTH}
className={cn(
"relative top-[2px] h-4 w-4 text-muted-foreground hover:text-foreground",
)}
/>
</Button>
</>
) : (
<>
<Button
variant={"outlineAmber"}
size={"icon"}
data-testid="voice-assistant-settings-icon-without-openai"
className="group h-8 w-8"
>
<IconComponent
name="Settings"
strokeWidth={ICON_STROKE_WIDTH}
className={cn(
"h-4 w-4 text-accent-amber-foreground group-hover:text-accent-amber",
)}
/>
</Button>
</>
)}
</SettingsVoiceModal>
</div>
<Button
unstyled
onClick={handleCloseAudioInput}
data-testid="voice-assistant-close-button"
>
<IconComponent
name="X"
strokeWidth={ICON_STROKE_WIDTH}
className="h-4 w-4 text-muted-foreground hover:text-foreground"
/>
</Button>
</div>
</div>
</>
);
}
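The recording path above captures mic audio through an `AudioWorkletNode` before sending it over the websocket, and the PR notes fixing VAD with 24 kHz to 16 kHz resampling. The real resampler lives in `workletCode`, which is not shown here; the following is only a minimal linear-interpolation sketch of that step, with `resampleLinear` a hypothetical name:

```typescript
// Hypothetical sketch: linearly resample a Float32Array of PCM samples
// from one sample rate to another (e.g. 24 kHz mic audio down to 16 kHz).
function resampleLinear(
  input: Float32Array,
  fromRate: number,
  toRate: number,
): Float32Array {
  const ratio = fromRate / toRate;
  const outLength = Math.floor(input.length / ratio);
  const output = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, input.length - 1);
    const frac = pos - i0;
    // Interpolate between the two nearest source samples.
    output[i] = input[i0] * (1 - frac) + input[i1] * frac;
  }
  return output;
}

// One second of 24 kHz audio becomes one second of 16 kHz audio.
const downsampled = resampleLinear(
  new Float32Array(24000).fill(0.25),
  24000,
  16000,
);
```

A real worklet would run this per audio frame; the sketch just shows the index math.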

View file

@ -9,7 +9,6 @@ import {
import useFlowsManagerStore from "@/stores/flowsManagerStore";
import useFlowStore from "@/stores/flowStore";
import { useUtilityStore } from "@/stores/utilityStore";
import { ChatMessageType } from "@/types/chat";
import Convert from "ansi-to-html";
import { useEffect, useRef, useState } from "react";
import Robot from "../../../../../assets/robot.png";
@ -54,6 +53,8 @@ export default function ChatMessage({
const [showError, setShowError] = useState(false);
const isBuilding = useFlowStore((state) => state.isBuilding);
const isAudioMessage = chat.category === "audio";
useEffect(() => {
const chatMessageString = chat.message ? chat.message.toString() : "";
setChatMessage(chatMessageString);
@ -306,7 +307,17 @@ export default function ChatMessage({
"sender_name_" + chat.sender_name?.toLocaleLowerCase()
}
>
{chat.sender_name}
<span className="flex items-center gap-2">
{chat.sender_name}
{isAudioMessage && (
<div className="flex h-5 w-5 items-center justify-center rounded-sm bg-muted">
<ForwardedIconComponent
name="mic"
className="h-3 w-3 text-muted-foreground"
/>
</div>
)}
</span>
{chat.properties?.source && !playgroundPage && (
<div className="text-[13px] font-normal text-muted-foreground">
{chat.properties?.source.source}
@ -379,6 +390,7 @@ export default function ChatMessage({
/>
) : (
<MarkdownField
isAudioMessage={isAudioMessage}
chat={chat}
isEmpty={isEmpty}
chatMessage={chatMessage}
@ -407,9 +419,10 @@ export default function ChatMessage({
) : (
<>
<div
className={`w-full items-baseline whitespace-pre-wrap break-words text-[14px] font-normal ${
isEmpty ? "text-muted-foreground" : "text-primary"
}`}
className={cn(
"w-full items-baseline whitespace-pre-wrap break-words text-[14px] font-normal",
isEmpty ? "text-muted-foreground" : "text-primary",
)}
data-testid={`chat-message-${chat.sender_name}-${chatMessage}`}
>
{isEmpty ? EMPTY_INPUT_SEND_MESSAGE : decodedMessage}
@ -441,6 +454,7 @@ export default function ChatMessage({
isBotMessage={!chat.isSend}
onEvaluate={handleEvaluateAnswer}
evaluation={chat.properties?.positive_feedback}
isAudioMessage={isAudioMessage}
/>
</div>
</div>

View file

@ -11,6 +11,7 @@ type MarkdownFieldProps = {
isEmpty: boolean;
chatMessage: string;
editedFlag: React.ReactNode;
isAudioMessage?: boolean;
};
// Function to replace <think> tags with a placeholder before markdown processing
@ -26,6 +27,7 @@ export const MarkdownField = ({
isEmpty,
chatMessage,
editedFlag,
isAudioMessage,
}: MarkdownFieldProps) => {
// Process the chat message to handle <think> tags
const processedChatMessage = preprocessChatMessage(chatMessage);

View file

@ -10,6 +10,7 @@ export function EditMessageButton({
onEvaluate,
isBotMessage,
evaluation,
isAudioMessage,
}: ButtonHTMLAttributes<HTMLButtonElement> & {
onEdit: () => void;
onCopy: () => void;
@ -17,6 +18,7 @@ export function EditMessageButton({
onEvaluate?: (value: boolean | null) => void;
isBotMessage?: boolean;
evaluation?: boolean | null;
isAudioMessage?: boolean;
}) {
const [isCopied, setIsCopied] = useState(false);
@ -32,18 +34,20 @@ export function EditMessageButton({
return (
<div className="flex items-center rounded-md border border-border bg-background">
<ShadTooltip styleClasses="z-50" content="Edit message" side="top">
<div className="p-1">
<Button
variant="ghost"
size="icon"
onClick={onEdit}
className="h-8 w-8"
>
<IconComponent name="Pen" className="h-4 w-4" />
</Button>
</div>
</ShadTooltip>
{!isAudioMessage && (
<ShadTooltip styleClasses="z-50" content="Edit message" side="top">
<div className="p-1">
<Button
variant="ghost"
size="icon"
onClick={onEdit}
className="h-8 w-8"
>
<IconComponent name="Pen" className="h-4 w-4" />
</Button>
</div>
</ShadTooltip>
)}
<ShadTooltip
styleClasses="z-50"

View file

@ -1,20 +1,26 @@
import LangflowLogo from "@/assets/LangflowLogo.svg?react";
import ForwardedIconComponent from "@/components/common/genericIconComponent";
import { ProfileIcon } from "@/components/core/appHeaderComponent/components/ProfileIcon";
import { TextEffectPerChar } from "@/components/ui/textAnimation";
import { CustomProfileIcon } from "@/customization/components/custom-profile-icon";
import { ENABLE_DATASTAX_LANGFLOW } from "@/customization/feature-flags";
import { track } from "@/customization/utils/analytics";
import { useMessagesStore } from "@/stores/messagesStore";
import { useUtilityStore } from "@/stores/utilityStore";
import { useVoiceStore } from "@/stores/voiceStore";
import { cn } from "@/utils/utils";
import { memo, useEffect, useMemo, useRef, useState } from "react";
import { v5 as uuidv5 } from "uuid";
import useTabVisibility from "../../../../shared/hooks/use-tab-visibility";
import useFlowsManagerStore from "../../../../stores/flowsManagerStore";
import useFlowStore from "../../../../stores/flowStore";
import { ChatMessageType } from "../../../../types/chat";
import { chatViewProps } from "../../../../types/components";
import FlowRunningSqueleton from "../flow-running-squeleton";
import ChatInput from "./chatInput/chat-input";
import useDragAndDrop from "./chatInput/hooks/use-drag-and-drop";
import { useFileHandler } from "./chatInput/hooks/use-file-handler";
import ChatMessage from "./chatMessage/chat-message";
import useTabVisibility from "../../../../../shared/hooks/use-tab-visibility";
import useFlowsManagerStore from "../../../../../stores/flowsManagerStore";
import useFlowStore from "../../../../../stores/flowStore";
import { ChatMessageType } from "../../../../../types/chat";
import { chatViewProps } from "../../../../../types/components";
import FlowRunningSqueleton from "../../flow-running-squeleton";
import ChatInput from "../chatInput/chat-input";
import useDragAndDrop from "../chatInput/hooks/use-drag-and-drop";
import { useFileHandler } from "../chatInput/hooks/use-file-handler";
import ChatMessage from "../chatMessage/chat-message";
const MemoizedChatMessage = memo(ChatMessage, (prevProps, nextProps) => {
return (
@ -157,6 +163,7 @@ export default function ChatView({
};
const flowRunningSkeletonMemo = useMemo(() => <FlowRunningSqueleton />, []);
const soundDetected = useVoiceStore((state) => state.soundDetected);
return (
<div
@ -182,27 +189,29 @@ export default function ChatView({
))}
</>
) : (
<div className="flex h-full w-full flex-col items-center justify-center">
<div className="flex flex-col items-center justify-center gap-4 p-8">
<LangflowLogo
title="Langflow logo"
className="h-10 w-10 scale-[1.5]"
/>
<div className="flex flex-col items-center justify-center">
<h3 className="mt-2 pb-2 text-2xl font-semibold text-primary">
New chat
</h3>
<p
className="text-lg text-muted-foreground"
data-testid="new-chat-text"
>
<TextEffectPerChar>
Test your flow with a chat prompt
</TextEffectPerChar>
</p>
<>
<div className="flex h-full w-full flex-col items-center justify-center">
<div className="flex flex-col items-center justify-center gap-4 p-8">
<LangflowLogo
title="Langflow logo"
className="h-10 w-10 scale-[1.5]"
/>
<div className="flex flex-col items-center justify-center">
<h3 className="mt-2 pb-2 text-2xl font-semibold text-primary">
New chat
</h3>
<p
className="text-lg text-muted-foreground"
data-testid="new-chat-text"
>
<TextEffectPerChar>
Test your flow with a chat prompt
</TextEffectPerChar>
</p>
</div>
</div>
</div>
</div>
</>
))}
<div
className={
@ -217,6 +226,7 @@ export default function ChatView({
flowRunningSkeletonMemo}
</div>
</div>
<div className="m-auto w-full max-w-[768px] md:w-5/6">
<ChatInput
playgroundPage={!!playgroundPage}

View file

@ -144,6 +144,10 @@ export default function IOModal({
),
);
const [sessionId, setSessionId] = useState<string>(currentFlowId);
const setCurrentSessionId = useUtilityStore(
(state) => state.setCurrentSessionId,
);
const { isFetched: messagesFetched } = useGetMessagesQuery(
{
mode: "union",
@ -213,8 +217,10 @@ export default function IOModal({
setSessionId(
`Session ${new Date().toLocaleString("en-US", { day: "2-digit", month: "short", hour: "2-digit", minute: "2-digit", hour12: false, second: "2-digit", timeZone: "UTC" })}`,
);
setCurrentSessionId(currentFlowId);
} else if (visibleSession) {
setSessionId(visibleSession);
setCurrentSessionId(visibleSession);
if (selectedViewField?.type === "Session") {
setSelectedViewField({
id: visibleSession,

View file

@ -0,0 +1,81 @@
import ForwardedIconComponent from "@/components/common/genericIconComponent";
import { Button } from "@/components/ui/button";
import { ICON_STROKE_WIDTH } from "@/constants/constants";
import {
useDeleteGlobalVariables,
useGetGlobalVariables,
} from "@/controllers/API/queries/variables";
import DeleteConfirmationModal from "@/modals/deleteConfirmationModal";
import useAlertStore from "@/stores/alertStore";
import { cn } from "@/utils/utils";
interface GeneralDeleteConfirmationModalProps {
option: string;
onConfirmDelete: () => void;
}
const GeneralDeleteConfirmationModal = ({
option,
onConfirmDelete,
}: GeneralDeleteConfirmationModalProps) => {
const setErrorData = useAlertStore((state) => state.setErrorData);
const { mutate: mutateDeleteGlobalVariable } = useDeleteGlobalVariables();
const { data: globalVariables } = useGetGlobalVariables();
async function handleDelete(key: string) {
if (!globalVariables) return;
const id = globalVariables.find((variable) => variable.name === key)?.id;
if (id !== undefined) {
mutateDeleteGlobalVariable(
{ id },
{
onSuccess: () => {
onConfirmDelete();
},
onError: () => {
setErrorData({
title: "Error deleting variable",
list: ["Failed to delete variable: " + key],
});
},
},
);
} else {
setErrorData({
title: "Error deleting variable",
list: ["ID not found for variable: " + key],
});
}
}
return (
<>
<DeleteConfirmationModal
onConfirm={(e) => {
e.stopPropagation();
e.preventDefault();
handleDelete(option);
}}
description={'variable "' + option + '"'}
asChild
>
<button
onClick={(e) => {
e.stopPropagation();
}}
className="pr-1"
>
<ForwardedIconComponent
name="Trash2"
className={cn(
"h-4 w-4 text-primary opacity-0 hover:text-status-red group-hover:opacity-100",
)}
aria-hidden="true"
/>
</button>
</DeleteConfirmationModal>
</>
);
};
export default GeneralDeleteConfirmationModal;

View file

@ -0,0 +1,25 @@
import ForwardedIconComponent from "@/components/common/genericIconComponent";
import GlobalVariableModal from "@/components/core/GlobalVariableModal/GlobalVariableModal";
import { CommandItem } from "@/components/ui/command";
import { cn } from "@/utils/utils";
interface GeneralGlobalVariableModalProps {}
const GeneralGlobalVariableModal = ({}: GeneralGlobalVariableModalProps) => {
return (
<>
<GlobalVariableModal disabled={false}>
<CommandItem value="doNotFilter-addNewVariable">
<ForwardedIconComponent
name="Plus"
className={cn("mr-2 h-4 w-4 text-primary")}
aria-hidden="true"
/>
<span>Add New Variable</span>
</CommandItem>
</GlobalVariableModal>
</>
);
};
export default GeneralGlobalVariableModal;

View file

@ -40,4 +40,7 @@ export const useUtilityStore = create<UtilityStoreType>((set, get) => ({
webhookPollingInterval: 5000,
setWebhookPollingInterval: (webhookPollingInterval: number) =>
set({ webhookPollingInterval }),
currentSessionId: "",
setCurrentSessionId: (sessionId: string) =>
set({ currentSessionId: sessionId }),
}));

View file

@ -0,0 +1,40 @@
import { VoiceStoreType } from "@/types/zustand/voice/voice.types";
import { create } from "zustand";
export const useVoiceStore = create<VoiceStoreType>((set, get) => ({
voices: [],
setVoices: (
voices: {
name: string;
voice_id: string;
}[],
) => set({ voices }),
providersList: [
{ name: "OpenAI", value: "openai" },
{ name: "ElevenLabs", value: "elevenlabs" },
],
setProvidersList: (
providersList: {
name: string;
value: string;
}[],
) => set({ providersList }),
openaiVoices: [
{ name: "alloy", value: "alloy" },
{ name: "ash", value: "ash" },
{ name: "ballad", value: "ballad" },
{ name: "coral", value: "coral" },
{ name: "echo", value: "echo" },
{ name: "sage", value: "sage" },
{ name: "shimmer", value: "shimmer" },
{ name: "verse", value: "verse" },
],
setOpenaiVoices: (
openaiVoices: {
name: string;
value: string;
}[],
) => set({ openaiVoices }),
soundDetected: false,
setSoundDetected: (soundDetected: boolean) => set({ soundDetected }),
}));
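`useVoiceStore` above follows zustand's `create((set, get) => ...)` pattern. The sketch below shows that pattern self-contained, with a hand-rolled stand-in for `create` (an assumption for runnability — the real store uses the zustand package, which also wires up React subscriptions):

```typescript
// Minimal stand-in for zustand's create(): only getState and set are modeled.
type SetState<T> = (partial: Partial<T>) => void;

function createStore<T extends object>(
  init: (set: SetState<T>) => T,
): { getState: () => T } {
  let state: T;
  const set: SetState<T> = (partial) => {
    // Shallow-merge the partial update into the current state, as zustand does.
    state = { ...state, ...partial };
  };
  state = init(set);
  return { getState: () => state };
}

interface VoiceSliceSketch {
  soundDetected: boolean;
  setSoundDetected: (soundDetected: boolean) => void;
}

const voiceStoreSketch = createStore<VoiceSliceSketch>((set) => ({
  soundDetected: false,
  setSoundDetected: (soundDetected) => set({ soundDetected }),
}));

voiceStoreSketch.getState().setSoundDetected(true);
```

Components such as `VoiceAssistant` read slices of this state via selectors, e.g. `useVoiceStore((state) => state.soundDetected)`.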

View file

@ -1280,6 +1280,10 @@
.btn-add-input-list {
@apply flex h-8 w-full items-center justify-center rounded-md p-2 text-sm hover:bg-muted;
}
.btn-playground-actions {
@apply flex h-[32px] w-[32px] items-center justify-center rounded-md bg-muted font-bold transition-all;
}
}
/* Gradient background */

View file

@ -187,6 +187,12 @@
--slider-input-border: #d4d4d8;
--zinc-foreground: 240 5.9% 90%;
--accent-amber: 21.7 77.8% 26.5%;
--accent-amber-foreground: 26 90.5% 37.1%;
--red-foreground: 0 90.6% 70.8%;
}
.dark {
@ -214,8 +220,7 @@
--placeholder-foreground: 240 4% 46%; /* hsl(240, 4%, 46%) */
--canvas: 0 0% 0%; /* hsl(0, 0%, 0%) */
--canvas-dot: 240 5.3% 26.1%; /* hsl(240, 5.3%, 26.1%) */
--accent-amber: 26 90% 37%; /* hsl(26, 90%, 37%) */
--accent-amber-foreground: 26 90% 37%; /* hsl(26, 90%, 37%) */
--accent-emerald: 164 86% 16%; /* hsl(164, 86%, 16%) */
--accent-emerald-foreground: 158 64% 52%; /* hsl(158, 64%, 52%) */
--accent-emerald-hover: 163.1 88.1% 19.8%; /* hsl(163.1, 88.1%, 19.8%) */
@ -439,5 +444,10 @@
--slider-input-border: #d4d4d8;
--zinc-foreground: 240 5.2% 33.9%;
--accent-amber: 48 96.5% 88.8%;
--accent-amber-foreground: 45.9 96.7% 64.5%;
--red-foreground: 0 72.2% 50.6%;
}
}

View file

@ -47,6 +47,8 @@ export type InputComponentType = {
nodeStyle?: boolean;
isToolMode?: boolean;
popoverWidth?: string;
commandWidth?: string;
blockAddNewGlobalVariable?: boolean;
};
export type DropDownComponent = {
disabled?: boolean;

View file

@ -21,6 +21,8 @@ export type UtilityStoreType = {
setChatValueStore: (value: string) => void;
dismissAll: boolean;
setDismissAll: (dismissAll: boolean) => void;
currentSessionId: string;
setCurrentSessionId: (sessionId: string) => void;
setClientId: (clientId: string) => void;
clientId: string;
};

View file

@ -0,0 +1,34 @@
export type VoiceStoreType = {
voices: {
name: string;
voice_id: string;
}[];
setVoices: (
voices: {
name: string;
voice_id: string;
}[],
) => void;
providersList: {
name: string;
value: string;
}[];
setProvidersList: (
providersList: {
name: string;
value: string;
}[],
) => void;
openaiVoices: {
name: string;
value: string;
}[];
setOpenaiVoices: (
openaiVoices: {
name: string;
value: string;
}[],
) => void;
soundDetected: boolean;
setSoundDetected: (soundDetected: boolean) => void;
};

View file

@ -70,6 +70,10 @@ function getInactiveVertexData(vertexId: string): VertexBuildTypeAPI {
return inactiveVertexData;
}
function logFlowLoad(message: string, data?: any) {
console.log(`[FlowLoad] ${message}`, data || "");
}
export async function updateVerticesOrder(
flowId: string,
startNodeId?: string | null,
@ -82,6 +86,7 @@ export async function updateVerticesOrder(
runId?: string;
verticesToRun: string[];
}> {
logFlowLoad("Updating vertices order");
return new Promise(async (resolve, reject) => {
const setErrorData = useAlertStore.getState().setErrorData;
let orderResponse;
@ -93,7 +98,9 @@ export async function updateVerticesOrder(
nodes,
edges,
);
logFlowLoad("Got vertices order response:", orderResponse);
} catch (error: any) {
logFlowLoad("Error getting vertices order:", error);
setErrorData({
title: MISSED_ERROR_ALERT,
list: [error.response?.data?.detail ?? "Unknown Error"],
@ -130,6 +137,7 @@ export async function updateVerticesOrder(
export async function buildFlowVerticesWithFallback(
params: BuildVerticesParams,
) {
logFlowLoad("Starting flow load");
try {
// Use shouldUsePolling() to determine stream mode
return await buildFlowVertices({ ...params });

View file

@ -0,0 +1,11 @@
export const getLocalStorage = (key: string) => {
return localStorage.getItem(key);
};
export const setLocalStorage = (key: string, value: string) => {
localStorage.setItem(key, value);
};
export const removeLocalStorage = (key: string) => {
localStorage.removeItem(key);
};
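These helpers are thin wrappers over `window.localStorage`, the same store `VoiceAssistant` uses directly for its `lf_preferred_language` key. A runnable sketch of the same wrapper pattern, with an in-memory stand-in for `localStorage` (an assumption so the snippet runs outside a browser):

```typescript
// In-memory stand-in for window.localStorage.
const backing = new Map<string, string>();
const storage = {
  getItem: (key: string) => backing.get(key) ?? null,
  setItem: (key: string, value: string) => {
    backing.set(key, value);
  },
  removeItem: (key: string) => {
    backing.delete(key);
  },
};

const getLocalStorage = (key: string) => storage.getItem(key);
const setLocalStorage = (key: string, value: string) =>
  storage.setItem(key, value);
const removeLocalStorage = (key: string) => storage.removeItem(key);

// Same key VoiceAssistant persists its language preference under.
setLocalStorage("lf_preferred_language", "en-US");
```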

View file

@ -1698,17 +1698,49 @@ export function templatesGenerator(data: APIObjectType) {
export function extractFieldsFromComponenents(data: APIObjectType) {
const fields = new Set<string>();
// Check if data exists
if (!data) {
console.warn("[Types] Data is undefined in extractFieldsFromComponenents");
return fields;
}
Object.keys(data).forEach((key) => {
// Check if data[key] exists
if (!data[key]) {
console.warn(
`[Types] data["${key}"] is undefined in extractFieldsFromComponenents`,
);
return;
}
Object.keys(data[key]).forEach((kind) => {
// Check if data[key][kind] exists
if (!data[key][kind]) {
console.warn(
`[Types] data["${key}"]["${kind}"] is undefined in extractFieldsFromComponenents`,
);
return;
}
// Check if template exists
if (!data[key][kind].template) {
console.warn(
`[Types] data["${key}"]["${kind}"].template is undefined in extractFieldsFromComponenents`,
);
return;
}
Object.keys(data[key][kind].template).forEach((field) => {
if (
data[key][kind].template[field].display_name &&
data[key][kind].template[field].show
data[key][kind].template[field]?.display_name &&
data[key][kind].template[field]?.show
)
fields.add(data[key][kind].template[field].display_name!);
});
});
});
return fields;
}
/**

View file

@ -0,0 +1,11 @@
export const getSessionStorage = (key: string) => {
return sessionStorage.getItem(key);
};
export const setSessionStorage = (key: string, value: string) => {
sessionStorage.setItem(key, value);
};
export const removeSessionStorage = (key: string) => {
sessionStorage.removeItem(key);
};

View file

@ -28,11 +28,11 @@ function toKebabCase(str: string): string {
}
function toLowerCase(str: string): string {
return str.toLowerCase();
return str?.toLowerCase();
}
function toUpperCase(str: string): string {
return str.toUpperCase();
return str?.toUpperCase();
}
function noBlank(str: string): string {

View file

@ -33,6 +33,7 @@ import {
ArrowRightLeft,
ArrowUpRight,
ArrowUpToLine,
AudioLines,
Bell,
Binary,
Blocks,
@ -154,6 +155,9 @@ import {
MessageSquare,
MessageSquareMore,
MessagesSquare,
Mic,
Mic2,
MicOff,
Minimize2,
Minus,
Monitor,
@ -1031,6 +1035,10 @@ export const nodeIconsLucide: iconsType = {
ScrapeGraph: ScrapeGraph,
ScrapeGraphSmartScraperApi: ScrapeGraph,
ScrapeGraphMarkdownifyApi: ScrapeGraph,
Mic,
MicOff,
Mic2,
DollarSign,
BookOpenText,
AudioLines,
};

View file

@ -262,6 +262,7 @@ const config = {
"cosmic-void": "hsl(var(--cosmic-void))",
"slider-input-border": "var(--slider-input-border)",
"zinc-foreground": "hsl(var(--zinc-foreground))",
"red-foreground": "hsl(var(--red-foreground))",
},
borderRadius: {
lg: `var(--radius)`,

View file

@ -0,0 +1,60 @@
import { expect, test } from "@playwright/test";
import { awaitBootstrapTest } from "../../utils/await-bootstrap-test";
test(
"should able to see and interact with voice assistant",
{ tag: ["@release", "@workspace", "@api"] },
async ({ page }) => {
test.skip(
!process?.env?.OPENAI_API_KEY,
"OPENAI_API_KEY required to run this test",
);
await awaitBootstrapTest(page);
await page.getByTestId("side_nav_options_all-templates").click();
await page.getByRole("heading", { name: "Basic Prompting" }).click();
await page.getByTestId("playground-btn-flow-io").click();
await expect(page.getByTestId("voice-button")).toBeVisible();
await page.getByTestId("voice-button").click();
try {
const apiKeyInput = page.getByTestId("popover-anchor-openai-api-key");
const isVisible = await apiKeyInput
.isVisible({ timeout: 2000 })
.catch(() => false);
if (isVisible) {
await apiKeyInput.fill(process.env.OPENAI_API_KEY || "");
await page
.getByTestId("voice-assistant-settings-modal-save-button")
.click();
}
} catch (e) {
console.log(e);
}
await expect(page.getByTestId("voice-assistant-container")).toBeVisible();
await page.getByTestId("voice-assistant-settings-icon").click();
await expect(
page.getByTestId("voice-assistant-settings-modal-microphone-select"),
).toBeVisible();
await expect(
page.getByTestId("voice-assistant-settings-modal-header"),
).toBeVisible();
await page.keyboard.press("Escape");
await page.getByTestId("voice-assistant-close-button").click();
await expect(
page.getByTestId("voice-assistant-settings-modal-microphone-select"),
).not.toBeVisible();
await expect(page.getByTestId("input-wrapper")).toBeVisible();
},
);

View file

@ -72,11 +72,15 @@ test(
await page.getByText("openai").last().click();
await page.waitForTimeout(1000);
await page.getByPlaceholder("Fields").waitFor({
state: "visible",
timeout: 30000,
});
await page.waitForTimeout(1000);
await page.getByPlaceholder("Fields").fill("ollama");
await page.keyboard.press("Escape");

uv.lock generated
View file

@ -1,4 +1,5 @@
version = 1
revision = 1
requires-python = ">=3.10, <3.14"
resolution-markers = [
"python_full_version >= '3.13'",
@ -663,14 +664,14 @@ wheels = [
[[package]]
name = "blockbuster"
version = "1.5.23"
version = "1.5.24"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "forbiddenfruit" },
{ name = "forbiddenfruit", marker = "implementation_name == 'cpython'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/07/cd/19a5c7cef4ded0d5cab07c3df2195d8f599c33e7c18a362ce059d87ea79a/blockbuster-1.5.23.tar.gz", hash = "sha256:ede6302307e700a60518c99caccfea159485382648e0158131e3506d4ff7b49c", size = 51198 }
sdist = { url = "https://files.pythonhosted.org/packages/35/c8/1e456a043179f2aef10bcaafea79f6d06c0ac45cc994767a54f680509f3b/blockbuster-1.5.24.tar.gz", hash = "sha256:97645775761a5d425666ec0bc99629b65c7eccdc2f770d2439850682567af4ec", size = 51245 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/35/5f/7991d1f3b8d91eddabf883bc52f88f7d8c8f556be0d50ee8b0fa078be8a9/blockbuster-1.5.23-py3-none-any.whl", hash = "sha256:cf4d9df51d0ba5ac9b0f14594a456e42b7a49dcc35819c6b36805ac285b1f6fe", size = 13199 },
{ url = "https://files.pythonhosted.org/packages/a7/c8/57a4c80e5abec29fa9406307a5277527f21210bfc6c2c61c3d8ded36c09b/blockbuster-1.5.24-py3-none-any.whl", hash = "sha256:e703497b55bc72af09d60d1cd746c2f3ba7ce0c446fa256be6ccda5e7d403520", size = 13214 },
]
[[package]]
@ -1243,6 +1244,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/ee/f5/817b4920915d6d24600d2b632098c1e7602b767ca9a4f14ae35047199966/clickhouse_connect-0.7.19-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:4ac0602fa305d097a0cd40cebbe10a808f6478c9f303d57a48a3a0ad09659544", size = 226072 },
]
[[package]]
name = "cloudpickle"
version = "3.1.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/52/39/069100b84d7418bc358d81669d5748efb14b9cceacd2f9c75f550424132f/cloudpickle-3.1.1.tar.gz", hash = "sha256:b216fa8ae4019d5482a8ac3c95d8f6346115d8835911fd4aefd1a445e4242c64", size = 22113 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7e/e8/64c37fadfc2816a7701fa8a6ed8d87327c7d54eacfbfb6edab14a2f2be75/cloudpickle-3.1.1-py3-none-any.whl", hash = "sha256:c8c5a44295039331ee9dad40ba100a9c7297b6f988e50e87ccdf3765a668350e", size = 20992 },
]
[[package]]
name = "codeflash"
version = "0.10.0"
@ -1647,11 +1657,12 @@ wheels = [
[[package]]
name = "datasets"
-version = "2.2.1"
+version = "3.4.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
{ name = "dill" },
+{ name = "filelock" },
{ name = "fsspec", extra = ["http"] },
{ name = "huggingface-hub" },
{ name = "multiprocess" },
@@ -1659,14 +1670,14 @@ dependencies = [
{ name = "packaging" },
{ name = "pandas" },
{ name = "pyarrow" },
+{ name = "pyyaml" },
{ name = "requests" },
-{ name = "responses" },
{ name = "tqdm" },
{ name = "xxhash" },
]
sdist = { url = "https://files.pythonhosted.org/packages/31/64/1e6fb2a0eb6b0d55117233cf33279ba6d680c0f031ebae81281a47c92760/datasets-2.2.1.tar.gz", hash = "sha256:d362717c4394589b516c8f397ff20a6fe720454aed877ab61d06f3bc05df9544", size = 302132 }
sdist = { url = "https://files.pythonhosted.org/packages/99/4b/40cda74a4e0e58450b0c85a737e134ab5df65e6f5c33c5e175db5d6a5227/datasets-3.4.1.tar.gz", hash = "sha256:e23968da79bc014ef9f7540eeb7771c6180eae82c86ebcfcc10535a03caf08b5", size = 566559 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d7/2d/41e8aec8d4bad6f07adfcbc89cf743e0d31c876371d453b2936bcfa7fe34/datasets-2.2.1-py3-none-any.whl", hash = "sha256:1938f3e99599422de50b9b54fe802aca854ed130382dab0b3820c821f7ae6d5e", size = 342193 },
{ url = "https://files.pythonhosted.org/packages/16/44/5de560a2625d31801895fb2663693df210c6465960d61a99192caa9afd63/datasets-3.4.1-py3-none-any.whl", hash = "sha256:b91cf257bd64132fa9d953dd4768ab6d63205597301f132a74271cfcce8b5dd3", size = 487392 },
]
[[package]]
@@ -1773,11 +1784,11 @@ wheels = [
[[package]]
name = "dill"
-version = "0.3.9"
+version = "0.3.8"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/70/43/86fe3f9e130c4137b0f1b50784dd70a5087b911fe07fa81e53e0c4c47fea/dill-0.3.9.tar.gz", hash = "sha256:81aa267dddf68cbfe8029c42ca9ec6a4ab3b22371d1c450abc54422577b4512c", size = 187000 }
sdist = { url = "https://files.pythonhosted.org/packages/17/4d/ac7ffa80c69ea1df30a8aa11b3578692a5118e7cd1aa157e3ef73b092d15/dill-0.3.8.tar.gz", hash = "sha256:3ebe3c479ad625c4553aca177444d89b486b1d84982eeacded644afc0cf797ca", size = 184847 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/46/d1/e73b6ad76f0b1fb7f23c35c6d95dbc506a9c8804f43dda8cb5b0fa6331fd/dill-0.3.9-py3-none-any.whl", hash = "sha256:468dff3b89520b474c0397703366b7b95eebe6303f108adf9b19da1f702be87a", size = 119418 },
{ url = "https://files.pythonhosted.org/packages/c9/7a/cef76fd8438a42f96db64ddaa85280485a9c395e7df3db8158cfec1eee34/dill-0.3.8-py3-none-any.whl", hash = "sha256:c36ca9ffb54365bdd2f8eb3eff7d2a21237f8452b57ace88b1ac615b7e815bd7", size = 116252 },
]
[[package]]
@@ -1850,14 +1861,18 @@ wheels = [
[[package]]
name = "dspy"
-version = "2.5.7"
+version = "2.6.12"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "asyncer" },
{ name = "backoff" },
{ name = "cachetools" },
{ name = "cloudpickle" },
{ name = "datasets" },
{ name = "diskcache" },
{ name = "httpx" },
{ name = "joblib" },
{ name = "json-repair" },
{ name = "litellm" },
{ name = "magicattr" },
{ name = "openai" },
@@ -1866,13 +1881,13 @@ dependencies = [
{ name = "pydantic" },
{ name = "regex" },
{ name = "requests" },
{ name = "structlog" },
{ name = "tenacity" },
{ name = "tqdm" },
{ name = "ujson" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5f/71/db65b9e1a3f84d5f1e9dc9f110757a09c3e1d01cadbc7ba4d23acf50fcfb/dspy-2.5.7.tar.gz", hash = "sha256:6863f1b9bc561ce272dbcb015954582c0371c9da65e86e22f59880b418e618d5", size = 261009 }
sdist = { url = "https://files.pythonhosted.org/packages/e1/04/ccafde0952819979bf4a7a8dffb2d26c480fb6aaeca9fa641595d29ae6e1/dspy-2.6.12.tar.gz", hash = "sha256:f4be661916caa794a7e9d96726e7d20b5516661059b5efed1917836516c5dc2f", size = 203410 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/60/e3/b167fbc3b5b9b9995eb79644a896bf6411174203af99463c847c1a3daf99/dspy-2.5.7-py3-none-any.whl", hash = "sha256:2b90689ae8de9fe7b16687649d1c137abff724c0b33817fa5228412e95a38294", size = 305025 },
{ url = "https://files.pythonhosted.org/packages/14/0d/67806efa73a79a989af58f42d80463501bd3f13de4aca19d2b572e733474/dspy-2.6.12-py3-none-any.whl", hash = "sha256:960deb7516f216d2954e62f097a02d6e3bc6f60020ac9ccb1282d1ef95244d5f", size = 258958 },
]
[[package]]
@@ -1970,16 +1985,16 @@ wheels = [
[[package]]
name = "e2b-code-interpreter"
-version = "1.1.0"
+version = "1.1.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "attrs" },
{ name = "e2b" },
{ name = "httpx" },
]
sdist = { url = "https://files.pythonhosted.org/packages/71/7d/b0f77fa02ddf5bb55b13e03d28f50cf0a210309f660232ae00ff73e69ffe/e2b_code_interpreter-1.1.0.tar.gz", hash = "sha256:4554eb002f9489965c2e7dd7fc967e62128db69b18dbb64975d4abbc0572e3ed", size = 9246 }
sdist = { url = "https://files.pythonhosted.org/packages/4c/a0/aa992090fc02ea7eeafce9b4fc122546c8dd81e85810c0d06bfd4c29a6a2/e2b_code_interpreter-1.1.1.tar.gz", hash = "sha256:b13091f75fc127ad3a268b8746e5da996c6734f432e606fcd4f3897a5b1c2bf0", size = 9288 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/92/5467fe5bde2db76ca3dbd3b1f85e56155dc85541b0e33b70bbfff72688e3/e2b_code_interpreter-1.1.0-py3-none-any.whl", hash = "sha256:292f8ddbb820475d5ffb1f3f2e67a42001a921d1c8fef40bd97a7f16f13adc64", size = 12012 },
{ url = "https://files.pythonhosted.org/packages/8f/40/dcdc47d039dd85e74df2532a01e9d47031fbfdfdc20c0b2de4e42b557271/e2b_code_interpreter-1.1.1-py3-none-any.whl", hash = "sha256:f56450b192456f24df89b9159d1067d50c7133d587ab12116144638969409578", size = 12049 },
]
[[package]]
@@ -2038,6 +2053,23 @@ vectorstore-mmr = [
{ name = "simsimd" },
]
+[[package]]
+name = "elevenlabs"
+version = "1.54.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+{ name = "httpx" },
+{ name = "pydantic" },
+{ name = "pydantic-core" },
+{ name = "requests" },
+{ name = "typing-extensions" },
+{ name = "websockets" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/6a/af/6a5d45770c3408c1c7587518e6b527b192bffd39ca56e778413129e88554/elevenlabs-1.54.0.tar.gz", hash = "sha256:d559786363dc3fca0121e8baac51d31149cf96284dff04e523e961dc1085dfd4", size = 150920 }
+wheels = [
+{ url = "https://files.pythonhosted.org/packages/44/8e/fe7f4e6f601db785196cfeb7c2e05ff1cc584e66964fe4e4d33188a1b479/elevenlabs-1.54.0-py3-none-any.whl", hash = "sha256:ea3935e2daa6045471a039a00ad7ae1fd2f1c2fc148ee75af5b99e1a135e1925", size = 347549 },
+]
[[package]]
name = "emoji"
version = "2.14.1"
@@ -2143,14 +2175,14 @@ wheels = [
[[package]]
name = "faker"
-version = "37.0.0"
+version = "37.0.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "tzdata" },
]
sdist = { url = "https://files.pythonhosted.org/packages/82/c6/6820408cdd87c11f1fbbd2349b05bbda28174d746e6d708ad0f0a934f9d7/faker-37.0.0.tar.gz", hash = "sha256:d2e4e2a30d459a8ec0ae52a552aa51c48973cb32cf51107dee90f58a8322a880", size = 1875487 }
sdist = { url = "https://files.pythonhosted.org/packages/ad/ab/031aa33d72420f074aceaa77e262476d30992db8b9b1bb2bf5dc9fcd8418/faker-37.0.1.tar.gz", hash = "sha256:3a71763f28d796c1d770b90e6b7519d75120a84b5dc4cdd27237870cc0451ff7", size = 1875530 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c6/03/0ffcbc5ab352c266a648d029f79de54ca205c04661203d46a42e3f03492b/faker-37.0.0-py3-none-any.whl", hash = "sha256:2598f78b76710a4ed05e197dda5235be409b4c291ba5c9c7514989cfbc7a5144", size = 1918764 },
{ url = "https://files.pythonhosted.org/packages/f3/ee/a01924560811622e742d4b9b2e796f481f5852a265515f3e5eab9b97af1e/faker-37.0.1-py3-none-any.whl", hash = "sha256:92bb009dcc708244b446be2f0c11a843fca90ea6e412a2addfef0cf2849c94f9", size = 1918376 },
]
[[package]]
@@ -2415,11 +2447,11 @@ wheels = [
[[package]]
name = "fsspec"
-version = "2025.3.0"
+version = "2024.12.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/34/f4/5721faf47b8c499e776bc34c6a8fc17efdf7fdef0b00f398128bc5dcb4ac/fsspec-2025.3.0.tar.gz", hash = "sha256:a935fd1ea872591f2b5148907d103488fc523295e6c64b835cfad8c3eca44972", size = 298491 }
sdist = { url = "https://files.pythonhosted.org/packages/ee/11/de70dee31455c546fbc88301971ec03c328f3d1138cfba14263f651e9551/fsspec-2024.12.0.tar.gz", hash = "sha256:670700c977ed2fb51e0d9f9253177ed20cbde4a3e5c0283cc5385b5870c8533f", size = 291600 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/56/53/eb690efa8513166adef3e0669afd31e95ffde69fb3c52ec2ac7223ed6018/fsspec-2025.3.0-py3-none-any.whl", hash = "sha256:efb87af3efa9103f94ca91a7f8cb7a4df91af9f74fc106c9c7ea0efd7277c1b3", size = 193615 },
{ url = "https://files.pythonhosted.org/packages/de/86/5486b0188d08aa643e127774a99bac51ffa6cf343e3deb0583956dca5b22/fsspec-2024.12.0-py3-none-any.whl", hash = "sha256:b520aed47ad9804237ff878b504267a3b0b441e97508bd6d2d8774e3db85cee2", size = 183862 },
]
[package.optional-dependencies]
@@ -2964,14 +2996,14 @@ wheels = [
[[package]]
name = "griffe"
-version = "1.6.0"
+version = "1.6.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a0/1a/d467b93f5e0ea4edf3c1caef44cfdd53a4a498cb3a6bb722df4dd0fdd66a/griffe-1.6.0.tar.gz", hash = "sha256:eb5758088b9c73ad61c7ac014f3cdfb4c57b5c2fcbfca69996584b702aefa354", size = 391819 }
sdist = { url = "https://files.pythonhosted.org/packages/6a/ba/1ebe51a22c491a3fc94b44ef9c46a5b5472540e24a5c3f251cebbab7214b/griffe-1.6.1.tar.gz", hash = "sha256:ff0acf706b2680f8c721412623091c891e752b2c61b7037618f7b77d06732cf5", size = 393112 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/bf/02/5a22bc98d0aebb68c15ba70d2da1c84a5ef56048d79634e5f96cd2ba96e9/griffe-1.6.0-py3-none-any.whl", hash = "sha256:9f1dfe035d4715a244ed2050dfbceb05b1f470809ed4f6bb10ece5a7302f8dd1", size = 128470 },
{ url = "https://files.pythonhosted.org/packages/1f/d3/a760d1062e44587230aa65573c70edaad4ee8a0e60e193a3172b304d24d8/griffe-1.6.1-py3-none-any.whl", hash = "sha256:b0131670db16834f82383bcf4f788778853c9bf4dc7a1a2b708bb0808ca56a98", size = 128615 },
]
[[package]]
@@ -3327,16 +3359,16 @@ wheels = [
[[package]]
name = "hypothesis"
-version = "6.129.3"
+version = "6.129.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "attrs" },
{ name = "exceptiongroup", marker = "python_full_version < '3.11'" },
{ name = "sortedcontainers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/6d/7c/46e959a935150f4206c54edcbe36674659614d7ebe5548f26b9366bdf81b/hypothesis-6.129.3.tar.gz", hash = "sha256:8a0cbc7612861f603af3838d282e2f7c47f362e7093fabf3ce928d2c7e3480e3", size = 423742 }
sdist = { url = "https://files.pythonhosted.org/packages/bb/98/0051e770d36f7e0a55bcfa3590790448d57ed2f355da9adbb957b1f545d9/hypothesis-6.129.4.tar.gz", hash = "sha256:e9fd66c25b8f0aa6395ce6728360892c3af22529cc16cae7512a4672776d4781", size = 425235 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cf/e4/64079b604388bffb3a495ef7dda541a2ec3544d8b257e0072d0178751b06/hypothesis-6.129.3-py3-none-any.whl", hash = "sha256:95f8f1f0d6cd29b272ae4b42c5c5d9768aa1f2be3ab1834a961a9d0c2fc4d652", size = 487791 },
{ url = "https://files.pythonhosted.org/packages/8d/7c/7266143385cbd19c839f9b61cc660d74c5ce2626fea41d8b215ccc5cfba3/hypothesis-6.129.4-py3-none-any.whl", hash = "sha256:45a31fe2b936688b2954f375c7f87e9dfefa4f2cddfa31cdeba15d77600e1286", size = 489542 },
]
[[package]]
@@ -4336,14 +4368,14 @@ wheels = [
[[package]]
name = "langchain-text-splitters"
-version = "0.3.6"
+version = "0.3.7"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "langchain-core" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0d/33/89912a07c63e4e818f9b0c8d52e4f9d600c97beca8a91db8c9dae6a1b28f/langchain_text_splitters-0.3.6.tar.gz", hash = "sha256:c537972f4b7c07451df431353a538019ad9dadff7a1073ea363946cea97e1bee", size = 40545 }
sdist = { url = "https://files.pythonhosted.org/packages/5a/e7/638b44a41e56c3e32cc90cab3622ac2e4c73645252485427d6b2742fcfa8/langchain_text_splitters-0.3.7.tar.gz", hash = "sha256:7dbf0fb98e10bb91792a1d33f540e2287f9cc1dc30ade45b7aedd2d5cd3dc70b", size = 42180 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/4c/f8/6b82af988e65af9697f6a2f25373fb173fd32d48b62772a8773c5184c870/langchain_text_splitters-0.3.6-py3-none-any.whl", hash = "sha256:e5d7b850f6c14259ea930be4a964a65fa95d9df7e1dbdd8bad8416db72292f4e", size = 31197 },
{ url = "https://files.pythonhosted.org/packages/d3/85/b7a34b6d34bcc89a2252f5ffea30b94077ba3d7adf72e31b9e04e68c901a/langchain_text_splitters-0.3.7-py3-none-any.whl", hash = "sha256:31ba826013e3f563359d7c7f1e99b1cdb94897f665675ee505718c116e7e20ad", size = 32513 },
]
[[package]]
@@ -4460,6 +4492,7 @@ dependencies = [
{ name = "ragstack-ai-knowledge-store" },
{ name = "redis" },
{ name = "ruff" },
+{ name = "scipy" },
{ name = "scrapegraph-py" },
{ name = "smolagents" },
{ name = "spider-client" },
@@ -4470,6 +4503,7 @@ dependencies = [
{ name = "upstash-vector" },
{ name = "uv" },
{ name = "weaviate-client" },
+{ name = "webrtcvad" },
{ name = "wikipedia" },
{ name = "wolframalpha" },
{ name = "yfinance" },
@@ -4512,6 +4546,7 @@ dev = [
{ name = "blockbuster" },
{ name = "codeflash" },
{ name = "dictdiffer" },
+{ name = "elevenlabs" },
{ name = "faker" },
{ name = "httpx" },
{ name = "hypothesis" },
@@ -4521,6 +4556,7 @@ dev = [
{ name = "packaging" },
{ name = "pandas-stubs" },
{ name = "pre-commit" },
+{ name = "pydantic-ai" },
{ name = "pytest" },
{ name = "pytest-asyncio" },
{ name = "pytest-codspeed" },
@@ -4537,6 +4573,7 @@ dev = [
{ name = "requests" },
{ name = "respx" },
{ name = "ruff" },
+{ name = "scrapegraph-py" },
{ name = "types-aiofiles" },
{ name = "types-google-cloud-ndb" },
{ name = "types-markdown" },
@@ -4646,6 +4683,7 @@ requires-dist = [
{ name = "ragstack-ai-knowledge-store", specifier = "==0.2.1" },
{ name = "redis", specifier = "==5.2.1" },
{ name = "ruff", specifier = ">=0.9.7" },
+{ name = "scipy", specifier = ">=1.14.1" },
{ name = "scrapegraph-py", specifier = ">=1.12.0" },
{ name = "sentence-transformers", marker = "extra == 'local'", specifier = ">=2.3.1" },
{ name = "smolagents", specifier = ">=1.8.0" },
@@ -4659,12 +4697,14 @@ requires-dist = [
{ name = "upstash-vector", specifier = "==0.6.0" },
{ name = "uv", specifier = ">=0.5.7" },
{ name = "weaviate-client", specifier = "==4.10.2" },
+{ name = "webrtcvad", specifier = ">=2.0.10" },
{ name = "wikipedia", specifier = "==1.4.0" },
{ name = "wolframalpha", specifier = "==5.1.3" },
{ name = "yfinance", specifier = "==0.2.50" },
{ name = "youtube-transcript-api", specifier = "==0.6.3" },
{ name = "zep-python", specifier = "==2.0.2" },
]
+provides-extras = ["deploy", "couchbase", "cassio", "local", "clickhouse-connect", "nv-ingest", "postgresql"]
[package.metadata.requires-dev]
dev = [
@@ -4672,6 +4712,7 @@ dev = [
{ name = "blockbuster", specifier = ">=1.5.20,<1.6" },
{ name = "codeflash", specifier = ">=0.8.4" },
{ name = "dictdiffer", specifier = ">=0.9.0" },
+{ name = "elevenlabs", specifier = ">=1.52.0" },
{ name = "faker", specifier = ">=37.0.0" },
{ name = "httpx", specifier = ">=0.27.0" },
{ name = "hypothesis", specifier = ">=6.123.17" },
@@ -4681,6 +4722,7 @@ dev = [
{ name = "packaging", specifier = ">=24.1,<25.0" },
{ name = "pandas-stubs", specifier = ">=2.1.4.231227" },
{ name = "pre-commit", specifier = ">=3.7.0" },
+{ name = "pydantic-ai", specifier = ">=0.0.19" },
{ name = "pytest", specifier = ">=8.2.0" },
{ name = "pytest-asyncio", specifier = ">=0.23.0" },
{ name = "pytest-codspeed", specifier = ">=3.0.0" },
@@ -4697,6 +4739,7 @@ dev = [
{ name = "requests", specifier = ">=2.32.0" },
{ name = "respx", specifier = ">=0.21.1" },
{ name = "ruff", specifier = ">=0.9.7,<0.10" },
+{ name = "scrapegraph-py", specifier = ">=1.10.2" },
{ name = "types-aiofiles", specifier = ">=24.1.0.20240626" },
{ name = "types-google-cloud-ndb", specifier = ">=2.2.0.0" },
{ name = "types-markdown", specifier = ">=3.7.0.20240822" },
@@ -4730,6 +4773,7 @@ dependencies = [
{ name = "diskcache" },
{ name = "docstring-parser" },
{ name = "duckdb" },
+{ name = "elevenlabs" },
{ name = "emoji" },
{ name = "fastapi" },
{ name = "fastapi-pagination" },
@@ -4865,6 +4909,7 @@ requires-dist = [
{ name = "diskcache", specifier = ">=5.6.3,<6.0.0" },
{ name = "docstring-parser", specifier = ">=0.16,<1.0.0" },
{ name = "duckdb", specifier = ">=1.0.0,<2.0.0" },
+{ name = "elevenlabs", specifier = ">=1.54.0" },
{ name = "emoji", specifier = ">=2.12.0,<3.0.0" },
{ name = "fastapi", specifier = ">=0.115.2,<1.0.0" },
{ name = "fastapi-pagination", specifier = ">=0.12.29,<1.0.0" },
@@ -4926,6 +4971,7 @@ requires-dist = [
{ name = "uvicorn", specifier = ">=0.30.0,<1.0.0" },
{ name = "validators", specifier = ">=0.34.0" },
]
+provides-extras = ["postgresql", "deploy", "local", "all"]
[package.metadata.requires-dev]
dev = [
@@ -5175,11 +5221,11 @@ wheels = [
[[package]]
name = "logfire-api"
-version = "3.8.1"
+version = "3.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/57/9a/27e94b9a8fe7da16bf8a9087d01f6669dbcbee707421759a4e45905cadeb/logfire_api-3.8.1.tar.gz", hash = "sha256:d87feac59b0acfae587461b7d105c629897d67e34446b38e63fb435f284cb99d", size = 46838 }
sdist = { url = "https://files.pythonhosted.org/packages/5c/ff/0fda08241cc005a7afad3901939b43129869d43640e8f4fb35eac7fd9443/logfire_api-3.9.0.tar.gz", hash = "sha256:b03bdcf368595510b4417270b5f02b268eb571a25692248ac1894b841a983a90", size = 47152 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b0/e4/ebb10bbb2ae9df87484212bc5b87686d533a7d4839930c3bbbd1c162633e/logfire_api-3.8.1-py3-none-any.whl", hash = "sha256:d07bb97284d19e787302fb8776a85bc836634be24c5f3fa2c244044f37f60fd0", size = 77351 },
{ url = "https://files.pythonhosted.org/packages/1d/c3/b05f4ddfc90babaef730e7ddd3b420d024f6ad77a8d1883546eec3b25f9a/logfire_api-3.9.0-py3-none-any.whl", hash = "sha256:a313eba49976ccca62ba6acb2f454d28941e53a114b73a29c50e8c09ea38767d", size = 77962 },
]
[[package]]
@@ -5838,22 +5884,20 @@ wheels = [
[[package]]
name = "multiprocess"
-version = "0.70.17"
+version = "0.70.16"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "dill" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e9/34/1acca6e18697017ad5c8b45279b59305d660ecf2fbed13e5f406f69890e4/multiprocess-0.70.17.tar.gz", hash = "sha256:4ae2f11a3416809ebc9a48abfc8b14ecce0652a0944731a1493a3c1ba44ff57a", size = 1785744 }
sdist = { url = "https://files.pythonhosted.org/packages/b5/ae/04f39c5d0d0def03247c2893d6f2b83c136bf3320a2154d7b8858f2ba72d/multiprocess-0.70.16.tar.gz", hash = "sha256:161af703d4652a0e1410be6abccecde4a7ddffd19341be0a7011b94aeb171ac1", size = 1772603 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f2/97/e57eaa8a4dc4036460d13162470eb0da520e6496a90b943529cf1ca40ebd/multiprocess-0.70.17-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7ddb24e5bcdb64e90ec5543a1f05a39463068b6d3b804aa3f2a4e16ec28562d6", size = 135007 },
{ url = "https://files.pythonhosted.org/packages/8f/0a/bb06ea45e5b400cd9944e05878fdbb9016ba78ffb9190c541eec9c8e8380/multiprocess-0.70.17-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d729f55198a3579f6879766a6d9b72b42d4b320c0dcb7844afb774d75b573c62", size = 135008 },
{ url = "https://files.pythonhosted.org/packages/20/e3/db48b10f0a25569c5c3a20288d82f9677cb312bccbd1da16cf8fb759649f/multiprocess-0.70.17-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c2c82d0375baed8d8dd0d8c38eb87c5ae9c471f8e384ad203a36f095ee860f67", size = 135012 },
{ url = "https://files.pythonhosted.org/packages/e7/a9/39cf856d03690af6fd570cf40331f1f79acdbb3132a9c35d2c5002f7f30b/multiprocess-0.70.17-py310-none-any.whl", hash = "sha256:38357ca266b51a2e22841b755d9a91e4bb7b937979a54d411677111716c32744", size = 134830 },
{ url = "https://files.pythonhosted.org/packages/b2/07/8cbb75d6cfbe8712d8f7f6a5615f083c6e710ab916b748fbb20373ddb142/multiprocess-0.70.17-py311-none-any.whl", hash = "sha256:2884701445d0177aec5bd5f6ee0df296773e4fb65b11903b94c613fb46cfb7d1", size = 144346 },
{ url = "https://files.pythonhosted.org/packages/a4/69/d3f343a61a2f86ef10ed7865a26beda7c71554136ce187b0384b1c2c9ca3/multiprocess-0.70.17-py312-none-any.whl", hash = "sha256:2818af14c52446b9617d1b0755fa70ca2f77c28b25ed97bdaa2c69a22c47b46c", size = 147990 },
{ url = "https://files.pythonhosted.org/packages/c8/b7/2e9a4fcd871b81e1f2a812cd5c6fb52ad1e8da7bf0d7646c55eaae220484/multiprocess-0.70.17-py313-none-any.whl", hash = "sha256:20c28ca19079a6c879258103a6d60b94d4ffe2d9da07dda93fb1c8bc6243f522", size = 149843 },
{ url = "https://files.pythonhosted.org/packages/ae/d7/fd7a092fc0ab1845a1a97ca88e61b9b7cc2e9d6fcf0ed24e9480590c2336/multiprocess-0.70.17-py38-none-any.whl", hash = "sha256:1d52f068357acd1e5bbc670b273ef8f81d57863235d9fbf9314751886e141968", size = 132635 },
{ url = "https://files.pythonhosted.org/packages/f9/41/0618ac724b8a56254962c143759e04fa01c73b37aa69dd433f16643bd38b/multiprocess-0.70.17-py39-none-any.whl", hash = "sha256:c3feb874ba574fbccfb335980020c1ac631fbf2a3f7bee4e2042ede62558a021", size = 133359 },
{ url = "https://files.pythonhosted.org/packages/ef/76/6e712a2623d146d314f17598df5de7224c85c0060ef63fd95cc15a25b3fa/multiprocess-0.70.16-pp310-pypy310_pp73-macosx_10_13_x86_64.whl", hash = "sha256:476887be10e2f59ff183c006af746cb6f1fd0eadcfd4ef49e605cbe2659920ee", size = 134980 },
{ url = "https://files.pythonhosted.org/packages/0f/ab/1e6e8009e380e22254ff539ebe117861e5bdb3bff1fc977920972237c6c7/multiprocess-0.70.16-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d951bed82c8f73929ac82c61f01a7b5ce8f3e5ef40f5b52553b4f547ce2b08ec", size = 134982 },
{ url = "https://files.pythonhosted.org/packages/bc/f7/7ec7fddc92e50714ea3745631f79bd9c96424cb2702632521028e57d3a36/multiprocess-0.70.16-py310-none-any.whl", hash = "sha256:c4a9944c67bd49f823687463660a2d6daae94c289adff97e0f9d696ba6371d02", size = 134824 },
{ url = "https://files.pythonhosted.org/packages/50/15/b56e50e8debaf439f44befec5b2af11db85f6e0f344c3113ae0be0593a91/multiprocess-0.70.16-py311-none-any.whl", hash = "sha256:af4cabb0dac72abfb1e794fa7855c325fd2b55a10a44628a3c1ad3311c04127a", size = 143519 },
{ url = "https://files.pythonhosted.org/packages/0a/7d/a988f258104dcd2ccf1ed40fdc97e26c4ac351eeaf81d76e266c52d84e2f/multiprocess-0.70.16-py312-none-any.whl", hash = "sha256:fc0544c531920dde3b00c29863377f87e1632601092ea2daca74e4beb40faa2e", size = 146741 },
{ url = "https://files.pythonhosted.org/packages/ea/89/38df130f2c799090c978b366cfdf5b96d08de5b29a4a293df7f7429fa50b/multiprocess-0.70.16-py38-none-any.whl", hash = "sha256:a71d82033454891091a226dfc319d0cfa8019a4e888ef9ca910372a446de4435", size = 132628 },
{ url = "https://files.pythonhosted.org/packages/da/d9/f7f9379981e39b8c2511c9e0326d212accacb82f12fbfdc1aa2ce2a7b2b6/multiprocess-0.70.16-py39-none-any.whl", hash = "sha256:a0bafd3ae1b732eac64be2e72038231c1ba97724b60b09400d68f229fcc2fbf3", size = 133351 },
]
[[package]]
@@ -6057,38 +6101,34 @@ wheels = [
[[package]]
name = "nvidia-cublas-cu12"
-version = "12.4.5.8"
+version = "12.1.3.1"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7f/7f/7fbae15a3982dc9595e49ce0f19332423b260045d0a6afe93cdbe2f1f624/nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_aarch64.whl", hash = "sha256:0f8aa1706812e00b9f19dfe0cdb3999b092ccb8ca168c0db5b8ea712456fd9b3", size = 363333771 },
{ url = "https://files.pythonhosted.org/packages/ae/71/1c91302526c45ab494c23f61c7a84aa568b8c1f9d196efa5993957faf906/nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_x86_64.whl", hash = "sha256:2fc8da60df463fdefa81e323eef2e36489e1c94335b5358bcb38360adf75ac9b", size = 363438805 },
{ url = "https://files.pythonhosted.org/packages/37/6d/121efd7382d5b0284239f4ab1fc1590d86d34ed4a4a2fdb13b30ca8e5740/nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl", hash = "sha256:ee53ccca76a6fc08fb9701aa95b6ceb242cdaab118c3bb152af4e579af792728", size = 410594774 },
]
[[package]]
name = "nvidia-cuda-cupti-cu12"
-version = "12.4.127"
+version = "12.1.105"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/93/b5/9fb3d00386d3361b03874246190dfec7b206fd74e6e287b26a8fcb359d95/nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_aarch64.whl", hash = "sha256:79279b35cf6f91da114182a5ce1864997fd52294a87a16179ce275773799458a", size = 12354556 },
{ url = "https://files.pythonhosted.org/packages/67/42/f4f60238e8194a3106d06a058d494b18e006c10bb2b915655bd9f6ea4cb1/nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl", hash = "sha256:9dec60f5ac126f7bb551c055072b69d85392b13311fcc1bcda2202d172df30fb", size = 13813957 },
{ url = "https://files.pythonhosted.org/packages/7e/00/6b218edd739ecfc60524e585ba8e6b00554dd908de2c9c66c1af3e44e18d/nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl", hash = "sha256:e54fde3983165c624cb79254ae9818a456eb6e87a7fd4d56a2352c24ee542d7e", size = 14109015 },
]
[[package]]
name = "nvidia-cuda-nvrtc-cu12"
-version = "12.4.127"
+version = "12.1.105"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/77/aa/083b01c427e963ad0b314040565ea396f914349914c298556484f799e61b/nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_aarch64.whl", hash = "sha256:0eedf14185e04b76aa05b1fea04133e59f465b6f960c0cbf4e37c3cb6b0ea198", size = 24133372 },
{ url = "https://files.pythonhosted.org/packages/2c/14/91ae57cd4db3f9ef7aa99f4019cfa8d54cb4caa7e00975df6467e9725a9f/nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl", hash = "sha256:a178759ebb095827bd30ef56598ec182b85547f1508941a3d560eb7ea1fbf338", size = 24640306 },
{ url = "https://files.pythonhosted.org/packages/b6/9f/c64c03f49d6fbc56196664d05dba14e3a561038a81a638eeb47f4d4cfd48/nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl", hash = "sha256:339b385f50c309763ca65456ec75e17bbefcbbf2893f462cb8b90584cd27a1c2", size = 23671734 },
]
[[package]]
name = "nvidia-cuda-runtime-cu12"
-version = "12.4.127"
+version = "12.1.105"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a1/aa/b656d755f474e2084971e9a297def515938d56b466ab39624012070cb773/nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_aarch64.whl", hash = "sha256:961fe0e2e716a2a1d967aab7caee97512f71767f852f67432d572e36cb3a11f3", size = 894177 },
{ url = "https://files.pythonhosted.org/packages/ea/27/1795d86fe88ef397885f2e580ac37628ed058a92ed2c39dc8eac3adf0619/nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl", hash = "sha256:64403288fa2136ee8e467cdc9c9427e0434110899d07c779f25b5c068934faa5", size = 883737 },
{ url = "https://files.pythonhosted.org/packages/eb/d5/c68b1d2cdfcc59e72e8a5949a37ddb22ae6cade80cd4a57a84d4c8b55472/nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl", hash = "sha256:6e258468ddf5796e25f1dc591a31029fa317d97a0a94ed93468fc86301d61e40", size = 823596 },
]
[[package]]
@@ -6104,28 +6144,23 @@ wheels = [
[[package]]
name = "nvidia-cufft-cu12"
-version = "11.2.1.3"
+version = "11.0.2.54"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-nvjitlink-cu12" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/7a/8a/0e728f749baca3fbeffad762738276e5df60851958be7783af121a7221e7/nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_aarch64.whl", hash = "sha256:5dad8008fc7f92f5ddfa2101430917ce2ffacd86824914c82e28990ad7f00399", size = 211422548 },
{ url = "https://files.pythonhosted.org/packages/27/94/3266821f65b92b3138631e9c8e7fe1fb513804ac934485a8d05776e1dd43/nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_x86_64.whl", hash = "sha256:f083fc24912aa410be21fa16d157fed2055dab1cc4b6934a0e03cba69eb242b9", size = 211459117 },
{ url = "https://files.pythonhosted.org/packages/86/94/eb540db023ce1d162e7bea9f8f5aa781d57c65aed513c33ee9a5123ead4d/nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl", hash = "sha256:794e3948a1aa71fd817c3775866943936774d1c14e7628c74f6f7417224cdf56", size = 121635161 },
]
[[package]]
name = "nvidia-curand-cu12"
-version = "10.3.5.147"
+version = "10.3.2.106"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/80/9c/a79180e4d70995fdf030c6946991d0171555c6edf95c265c6b2bf7011112/nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_aarch64.whl", hash = "sha256:1f173f09e3e3c76ab084aba0de819c49e56614feae5c12f69883f4ae9bb5fad9", size = 56314811 },
{ url = "https://files.pythonhosted.org/packages/8a/6d/44ad094874c6f1b9c654f8ed939590bdc408349f137f9b98a3a23ccec411/nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_x86_64.whl", hash = "sha256:a88f583d4e0bb643c49743469964103aa59f7f708d862c3ddb0fc07f851e3b8b", size = 56305206 },
{ url = "https://files.pythonhosted.org/packages/44/31/4890b1c9abc496303412947fc7dcea3d14861720642b49e8ceed89636705/nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl", hash = "sha256:9d264c5036dde4e64f1de8c50ae753237c12e0b1348738169cd0f8a536c0e1e0", size = 56467784 },
]
[[package]]
name = "nvidia-cusolver-cu12"
-version = "11.6.1.9"
+version = "11.4.5.107"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-cublas-cu12" },
@@ -6133,55 +6168,42 @@ dependencies = [
{ name = "nvidia-nvjitlink-cu12" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/46/6b/a5c33cf16af09166845345275c34ad2190944bcc6026797a39f8e0a282e0/nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_aarch64.whl", hash = "sha256:d338f155f174f90724bbde3758b7ac375a70ce8e706d70b018dd3375545fc84e", size = 127634111 },
{ url = "https://files.pythonhosted.org/packages/3a/e1/5b9089a4b2a4790dfdea8b3a006052cfecff58139d5a4e34cb1a51df8d6f/nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_x86_64.whl", hash = "sha256:19e33fa442bcfd085b3086c4ebf7e8debc07cfe01e11513cc6d332fd918ac260", size = 127936057 },
{ url = "https://files.pythonhosted.org/packages/bc/1d/8de1e5c67099015c834315e333911273a8c6aaba78923dd1d1e25fc5f217/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl", hash = "sha256:8a7ec542f0412294b15072fa7dab71d31334014a69f953004ea7a118206fe0dd", size = 124161928 },
]
[[package]]
name = "nvidia-cusparse-cu12"
-version = "12.3.1.170"
+version = "12.1.0.106"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-nvjitlink-cu12" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/96/a9/c0d2f83a53d40a4a41be14cea6a0bf9e668ffcf8b004bd65633f433050c0/nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_aarch64.whl", hash = "sha256:9d32f62896231ebe0480efd8a7f702e143c98cfaa0e8a76df3386c1ba2b54df3", size = 207381987 },
{ url = "https://files.pythonhosted.org/packages/db/f7/97a9ea26ed4bbbfc2d470994b8b4f338ef663be97b8f677519ac195e113d/nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_x86_64.whl", hash = "sha256:ea4f11a2904e2a8dc4b1833cc1b5181cde564edd0d5cd33e3c168eff2d1863f1", size = 207454763 },
]
[[package]]
name = "nvidia-cusparselt-cu12"
version = "0.6.2"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/98/8e/675498726c605c9441cf46653bd29cb1b8666da1fb1469ffa25f67f20c58/nvidia_cusparselt_cu12-0.6.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:067a7f6d03ea0d4841c85f0c6f1991c5dda98211f6302cb83a4ab234ee95bef8", size = 149422781 },
{ url = "https://files.pythonhosted.org/packages/78/a8/bcbb63b53a4b1234feeafb65544ee55495e1bb37ec31b999b963cbccfd1d/nvidia_cusparselt_cu12-0.6.2-py3-none-manylinux2014_x86_64.whl", hash = "sha256:df2c24502fd76ebafe7457dbc4716b2fec071aabaed4fb7691a201cde03704d9", size = 150057751 },
{ url = "https://files.pythonhosted.org/packages/65/5b/cfaeebf25cd9fdec14338ccb16f6b2c4c7fa9163aefcf057d86b9cc248bb/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl", hash = "sha256:f3b50f42cf363f86ab21f720998517a659a48131e8d538dc02f8768237bd884c", size = 195958278 },
]
[[package]]
name = "nvidia-nccl-cu12"
version = "2.20.5"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/4b/2a/0a131f572aa09f741c30ccd45a8e56316e8be8dfc7bc19bf0ab7cfef7b19/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl", hash = "sha256:057f6bf9685f75215d0c53bf3ac4a10b3e6578351de307abad9e18a99182af56", size = 176249402 },
]
[[package]]
name = "nvidia-nvjitlink-cu12"
version = "12.8.93"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f6/74/86a07f1d0f42998ca31312f998bd3b9a7eff7f52378f4f270c8679c77fb9/nvidia_nvjitlink_cu12-12.8.93-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl", hash = "sha256:81ff63371a7ebd6e6451970684f916be2eab07321b73c9d244dc2b4da7f73b88", size = 39254836 },
]
[[package]]
name = "nvidia-nvtx-cu12"
version = "12.1.105"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/da/d3/8057f0587683ed2fcd4dbfbdfdfa807b9160b809976099d36b8f60d08f03/nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl", hash = "sha256:dc21cf308ca5691e7c04d962e213f8a4aa9bbfa23d95412f452254c2caeb09e5", size = 99138 },
]
[[package]]
wheels = [
{ url = "https://files.pythonhosted.org/packages/b8/2a/25e0be2b509c28375c7f75c7e8d8d060773f2cce4856a1654276e3202339/pycryptodome-3.22.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d21c1eda2f42211f18a25db4eaf8056c94a8563cd39da3683f89fe0d881fb772", size = 2262255 },
{ url = "https://files.pythonhosted.org/packages/41/58/60917bc4bbd91712e53ce04daf237a74a0ad731383a01288130672994328/pycryptodome-3.22.0-cp37-abi3-win32.whl", hash = "sha256:f02baa9f5e35934c6e8dcec91fcde96612bdefef6e442813b8ea34e82c84bbfb", size = 1763403 },
{ url = "https://files.pythonhosted.org/packages/55/f4/244c621afcf7867e23f63cfd7a9630f14cfe946c9be7e566af6c3915bcde/pycryptodome-3.22.0-cp37-abi3-win_amd64.whl", hash = "sha256:d086aed307e96d40c23c42418cbbca22ecc0ab4a8a0e24f87932eeab26c08627", size = 1794568 },
{ url = "https://files.pythonhosted.org/packages/cd/13/16d3a83b07f949a686f6cfd7cfc60e57a769ff502151ea140ad67b118e26/pycryptodome-3.22.0-pp27-pypy_73-manylinux2010_x86_64.whl", hash = "sha256:98fd9da809d5675f3a65dcd9ed384b9dc67edab6a4cda150c5870a8122ec961d", size = 1700779 },
{ url = "https://files.pythonhosted.org/packages/13/af/16d26f7dfc5fd7696ea2c91448f937b51b55312b5bed44f777563e32a4fe/pycryptodome-3.22.0-pp27-pypy_73-win32.whl", hash = "sha256:37ddcd18284e6b36b0a71ea495a4c4dca35bb09ccc9bfd5b91bfaf2321f131c1", size = 1775230 },
{ url = "https://files.pythonhosted.org/packages/37/c3/e3423e72669ca09f141aae493e1feaa8b8475859898b04f57078280a61c4/pycryptodome-3.22.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:b4bdce34af16c1dcc7f8c66185684be15f5818afd2a82b75a4ce6b55f9783e13", size = 1618698 },
{ url = "https://files.pythonhosted.org/packages/f9/b7/35eec0b3919cafea362dcb68bb0654d9cb3cde6da6b7a9d8480ce0bf203a/pycryptodome-3.22.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2988ffcd5137dc2d27eb51cd18c0f0f68e5b009d5fec56fbccb638f90934f333", size = 1666957 },
{ url = "https://files.pythonhosted.org/packages/b0/1f/f49bccdd8d61f1da4278eb0d6aee7f988f1a6ec4056b0c2dc51eda45ae27/pycryptodome-3.22.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e653519dedcd1532788547f00eeb6108cc7ce9efdf5cc9996abce0d53f95d5a9", size = 1659242 },
{ url = "https://files.pythonhosted.org/packages/3f/51/d4db610ef29373b879047326cbf6fa98b6c1969d6f6dc423279de2b1be2c/requests_toolbelt-1.0.0-py2.py3-none-any.whl", hash = "sha256:cccfdd665f0a24fcf4726e690f65639d272bb0637b9b92dfd91a5568ccf6bd06", size = 54481 },
]
[[package]]
name = "respx"
version = "0.22.0"
wheels = [
{ url = "https://files.pythonhosted.org/packages/81/69/297302c5f5f59c862faa31e6cb9a4cd74721cd1e052b38e464c5b402df8b/StrEnum-0.4.15-py3-none-any.whl", hash = "sha256:a30cda4af7cc6b5bf52c8055bc4bf4b2b6b14a93b574626da33df53cf7740659", size = 8851 },
]
[[package]]
name = "supabase"
version = "2.6.0"
[[package]]
name = "sympy"
version = "1.13.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "mpmath" },
]
sdist = { url = "https://files.pythonhosted.org/packages/11/8a/5a7fd6284fa8caac23a26c9ddf9c30485a48169344b4bd3b0f02fef1890f/sympy-1.13.3.tar.gz", hash = "sha256:b27fd2c6530e0ab39e275fc9b683895367e51d5da91baa8d3d64db2565fec4d9", size = 7533196 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/99/ff/c87e0622b1dadea79d2fb0b25ade9ed98954c9033722eb707053d310d4f3/sympy-1.13.3-py3-none-any.whl", hash = "sha256:54612cf55a62755ee71824ce692986f23c88ffa77207b30c1368eda4a7060f73", size = 6189483 },
]
[[package]]
[[package]]
name = "torch"
version = "2.4.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock" },
{ name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cusparselt-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-nvjitlink-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "setuptools", marker = "python_full_version >= '3.12'" },
{ name = "sympy" },
{ name = "triton", marker = "python_full_version < '3.13' and platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "typing-extensions" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/41/05/d540049b1832d1062510efc6829634b7fbef5394c757d8312414fb65a3cb/torch-2.4.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:362f82e23a4cd46341daabb76fba08f04cd646df9bfaf5da50af97cb60ca4971", size = 797072810 },
{ url = "https://files.pythonhosted.org/packages/a0/12/2162df9c47386ae7cedbc938f9703fee4792d93504fab8608d541e71ece3/torch-2.4.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:e8ac1985c3ff0f60d85b991954cfc2cc25f79c84545aead422763148ed2759e3", size = 89699259 },
{ url = "https://files.pythonhosted.org/packages/5d/4c/b2a59ff0e265f5ee154f0d81e948b1518b94f545357731e1a3245ee5d45b/torch-2.4.1-cp310-cp310-win_amd64.whl", hash = "sha256:91e326e2ccfb1496e3bee58f70ef605aeb27bd26be07ba64f37dcaac3d070ada", size = 199433813 },
{ url = "https://files.pythonhosted.org/packages/dc/fb/1333ba666bbd53846638dd75a7a1d4eaf964aff1c482fc046e2311a1b499/torch-2.4.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:d36a8ef100f5bff3e9c3cea934b9e0d7ea277cb8210c7152d34a9a6c5830eadd", size = 62139309 },
{ url = "https://files.pythonhosted.org/packages/ea/ea/4ab009e953bca6ff35ad75b8ab58c0923308636c182c145dc63084f7d136/torch-2.4.1-cp311-cp311-manylinux1_x86_64.whl", hash = "sha256:0b5f88afdfa05a335d80351e3cea57d38e578c8689f751d35e0ff36bce872113", size = 797111232 },
{ url = "https://files.pythonhosted.org/packages/8f/a1/b31f94b4631c1731261db9fdc9a749ef58facc3b76094a6fe974f611f239/torch-2.4.1-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:ef503165f2341942bfdf2bd520152f19540d0c0e34961232f134dc59ad435be8", size = 89719574 },
{ url = "https://files.pythonhosted.org/packages/5a/6a/775b93d6888c31f1f1fc457e4f5cc89f0984412d5dcdef792b8f2aa6e812/torch-2.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:092e7c2280c860eff762ac08c4bdcd53d701677851670695e0c22d6d345b269c", size = 199436128 },
{ url = "https://files.pythonhosted.org/packages/1f/34/c93873c37f93154d982172755f7e504fdbae6c760499303a3111ce6ce327/torch-2.4.1-cp311-none-macosx_11_0_arm64.whl", hash = "sha256:ddddbd8b066e743934a4200b3d54267a46db02106876d21cf31f7da7a96f98ea", size = 62145176 },
{ url = "https://files.pythonhosted.org/packages/cc/df/5204a13a7a973c23c7ade615bafb1a3112b5d0ec258d8390f078fa4ab0f7/torch-2.4.1-cp312-cp312-manylinux1_x86_64.whl", hash = "sha256:fdc4fe11db3eb93c1115d3e973a27ac7c1a8318af8934ffa36b0370efe28e042", size = 797019590 },
{ url = "https://files.pythonhosted.org/packages/4f/16/d23a689e5ef8001ed2ace1a3a59f2fda842889b0c3f3877799089925282a/torch-2.4.1-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:18835374f599207a9e82c262153c20ddf42ea49bc76b6eadad8e5f49729f6e4d", size = 89613802 },
{ url = "https://files.pythonhosted.org/packages/a8/e0/ca8354dfb8d834a76da51b06e8248b70fc182bc163540507919124974bdf/torch-2.4.1-cp312-cp312-win_amd64.whl", hash = "sha256:ebea70ff30544fc021d441ce6b219a88b67524f01170b1c538d7d3ebb5e7f56c", size = 199387694 },
{ url = "https://files.pythonhosted.org/packages/ac/30/8b6f77ea4ce84f015ee024b8dfef0dac289396254e8bfd493906d4cbb848/torch-2.4.1-cp312-none-macosx_11_0_arm64.whl", hash = "sha256:72b484d5b6cec1a735bf3fa5a1c4883d01748698c5e9cfdbeb4ffab7c7987e0d", size = 62123443 },
]
[[package]]
[[package]]
name = "triton"
version = "3.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock", marker = "python_full_version < '3.13'" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/45/27/14cc3101409b9b4b9241d2ba7deaa93535a217a211c86c4cc7151fb12181/triton-3.0.0-1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e1efef76935b2febc365bfadf74bcb65a6f959a9872e5bddf44cc9e0adce1e1a", size = 209376304 },
{ url = "https://files.pythonhosted.org/packages/33/3e/a2f59384587eff6aeb7d37b6780de7fedd2214935e27520430ca9f5b7975/triton-3.0.0-1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5ce8520437c602fb633f1324cc3871c47bee3b67acf9756c1a66309b60e3216c", size = 209438883 },
{ url = "https://files.pythonhosted.org/packages/fe/7b/7757205dee3628f75e7991021d15cd1bd0c9b044ca9affe99b50879fc0e1/triton-3.0.0-1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:34e509deb77f1c067d8640725ef00c5cbfcb2052a1a3cb6a6d343841f92624eb", size = 209464695 },
]
[[package]]
[[package]]
name = "uv"
version = "0.6.8"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/91/cd/51dc5cad69ba2df4bfb8442af18e7e53a8a7c77d221a26b3903a9dc4e5ce/uv-0.6.8.tar.gz", hash = "sha256:45ecd70cfe42132ff84083ecb37fe7a8d2feac3eacd7a5872e7a002fb260940f", size = 3097793 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e3/1a/551ff0892e3ae06a1c42c2cfa4ed87c06cdd3d573aa3a5c0ffa2388c60c2/uv-0.6.8-py3-none-linux_armv6l.whl", hash = "sha256:ec3838ff7d7313076700ad89b5254548988b0c4e98d215bb0064b7d872166566", size = 15770071 },
{ url = "https://files.pythonhosted.org/packages/36/21/293c29deaaa4c28887e984eda96ce16372bb4cf4537469b14844aea37d57/uv-0.6.8-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:f284e418727f1242e17dd7e0ab09525aa6543953bfc4886fa364d24b8612c4fa", size = 15863073 },
{ url = "https://files.pythonhosted.org/packages/a9/b4/f8f3c71dc812418c5d18816b2bf1675511e3dd4c47dcfecca2096dc3f073/uv-0.6.8-py3-none-macosx_11_0_arm64.whl", hash = "sha256:6847cdeca38236316ff91bfd155f018990f99809c9b3c13f6f4c1aa9d1f16277", size = 14701715 },
{ url = "https://files.pythonhosted.org/packages/f1/a8/150c9e43090b308d9d4d006dd8394597bf3e2dc62c59d64a13718dcc635a/uv-0.6.8-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:26b9af3c0572d283e58938e598be06d5391893647edd1e15d3c66a60ec458f5f", size = 15123381 },
{ url = "https://files.pythonhosted.org/packages/85/aa/91db63e92da8c0fd720baf7b36503ae03da65da439fffc748eef0992f965/uv-0.6.8-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:696121507b28ef286fd144581f9f0a8bd4929efa3dcc78c787ce06304912c087", size = 15507911 },
{ url = "https://files.pythonhosted.org/packages/95/19/13845489eaa944a10373835a4e6e55b192b36c068a2235cab78ada4e8916/uv-0.6.8-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:42af4d5919f8499354322845e4d35d5dfdd8f06e93548d99e6d5a533806fa06d", size = 16158094 },
{ url = "https://files.pythonhosted.org/packages/f4/f2/1e440eb31e466cd81b228db8a4b1595fe0693531a4bd44c7e1c2192b48d8/uv-0.6.8-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:bdeaefb8ce828cc9b01888979f10c0d0a3896b08d3370f2234687a9ce016697a", size = 17070920 },
{ url = "https://files.pythonhosted.org/packages/e2/b4/04a4487856779f7a05d3689ce426344ac978496715c8a2c37cf9c31460e3/uv-0.6.8-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6366dd9b248246961539093e7762096eb50eced3bdb9058184c66948e14cb559", size = 16804347 },
{ url = "https://files.pythonhosted.org/packages/b5/e9/146c6380cf415bdf1a6355b6ba39e4045a89cfd7853503c650d4c11bcae2/uv-0.6.8-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8a1cdd1629a90647b1eb33192aa72a9956509d2c1349650fe859c80ae229a69b", size = 20985997 },
{ url = "https://files.pythonhosted.org/packages/5b/f2/fd4999d53d2bef1893a87ab624963afba73a26947578452368365fd3b05e/uv-0.6.8-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2670ff0546aea85fc0337e651851d18b7ce41ffc3189ae556fcd99ccd152a61", size = 16501131 },
{ url = "https://files.pythonhosted.org/packages/77/9d/262bf1b883032821ded6443efd1c27930a5ba5a84112ea603683e7f6b906/uv-0.6.8-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:451bc30583398718f60033679f558ada57c5924b4f1980ca0af67fb6b3aca320", size = 15375574 },
{ url = "https://files.pythonhosted.org/packages/6e/c6/45e37888e02ac54f300f89559e7c2d4e4b2147e802fe84a041f623ea1ba1/uv-0.6.8-py3-none-musllinux_1_1_armv7l.whl", hash = "sha256:f0ae6320d14de75e5ca12cb6a4d896ad9a8c682e2e42a40977cb5e6cc147ebe7", size = 15467826 },
{ url = "https://files.pythonhosted.org/packages/87/0c/14f1cf0be81e5fa0520842cbe334b0935628b092352189f63a6e6621567f/uv-0.6.8-py3-none-musllinux_1_1_i686.whl", hash = "sha256:edcd6d54ad8f71e3c306cbcf2159055674d54354a5b332be2586f279e9403070", size = 15751008 },
{ url = "https://files.pythonhosted.org/packages/19/fe/45bfc6240c8e4272f08d0c5549b7dd5de13e8621bc9249fcda2256f2a2b7/uv-0.6.8-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:23958fbefce5e167f0dd513908cf1641276601c79496143f8558b7e2a43c8648", size = 16643754 },
{ url = "https://files.pythonhosted.org/packages/e8/ef/1a0e08807592992a79791f2407725594317d8ed5b4f9dbbaa50dc44dc9e8/uv-0.6.8-py3-none-win32.whl", hash = "sha256:e3ab6d0cf20cb33e6d04c431e5f22ce25741f5111c9706ba431bb92e1e29b273", size = 15890116 },
{ url = "https://files.pythonhosted.org/packages/11/2c/84b571dc167a294f8df4891c573fad7bb4127b5489f87c91666507f2ab29/uv-0.6.8-py3-none-win_amd64.whl", hash = "sha256:3d0f35004feea5bc936939cb4d2f67c440345b594acd1400bc0dc3c7f7398a7c", size = 17335091 },
{ url = "https://files.pythonhosted.org/packages/ca/17/e3eaecd3363302a8f860626ce668f706dab3807912f4c4d24520ff92be21/uv-0.6.8-py3-none-win_arm64.whl", hash = "sha256:f9430336c6657ed44816fe62cfcdafa644b1c14ab218687aa043648e0f382933", size = 16090332 },
]
[[package]]
wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/24/2a3e3df732393fed8b3ebf2ec078f05546de641fe1b667ee316ec1dcf3b7/webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78", size = 11774 },
]
[[package]]
name = "webrtcvad"
version = "2.0.10"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/89/34/e2de2d97f3288512b9ea56f92e7452f8207eb5a0096500badf9dfd48f5e6/webrtcvad-2.0.10.tar.gz", hash = "sha256:f1bed2fb25b63fb7b1a55d64090c993c9c9167b28485ae0bcdd81cf6ede96aea", size = 66156 }
[[package]]
name = "websocket-client"
version = "1.8.0"