Langflow is a powerful tool for building and deploying AI-powered agents and workflows. It provides developers with both a visual authoring experience and a built-in API server that turns every agent into an API endpoint that can be integrated into applications built on any framework or stack. Langflow comes with batteries included and supports all major LLMs, vector databases and a growing library of AI tools.
✨ Highlight features
- Visual Builder to get started quickly and iterate.
- Access to Code so developers can tweak any component using Python.
- Playground to immediately test and iterate on flows with step-by-step control.
- Multi-agent orchestration with conversation management and retrieval.
- Deploy as an API or export as JSON for Python apps.
- Observability with LangSmith, Langfuse, and other integrations.
- Enterprise-ready security and scalability.
⚡️ Quickstart
Langflow works with Python 3.10 to 3.13.
Install with uv (recommended)
uv pip install langflow
Install with pip
pip install langflow
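Once Langflow is running, every flow you build is reachable over its REST API. A minimal client sketch follows — the `/api/v1/run/<flow_id>` path and the payload keys reflect Langflow's public run API, but the host, port, and flow ID are illustrative assumptions, so verify them against your version's API docs:

```python
def build_run_request(host: str, flow_id: str, message: str) -> tuple[str, dict]:
    """Assemble the URL and JSON payload for a Langflow run call.

    The /api/v1/run/<flow_id> path and the payload keys follow
    Langflow's run API; confirm them against your deployment's docs.
    """
    url = f"{host}/api/v1/run/{flow_id}"
    payload = {
        "input_value": message,  # the text handed to the flow's input
        "input_type": "chat",
        "output_type": "chat",
    }
    return url, payload

# With a Langflow server running locally, you could then send it, e.g.:
#   import requests
#   url, payload = build_run_request("http://127.0.0.1:7860", "<flow-id>", "Hello!")
#   print(requests.post(url, json=payload, timeout=30).json())
```

Keeping request construction separate from the network call makes the payload easy to inspect and adapt before wiring it into an application.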
📦 Deployment
Self-managed
Langflow is completely open source, and you can deploy it to all major cloud providers. Follow this guide to learn how to use Docker to deploy Langflow.
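As a starting point for a Docker deployment, a minimal compose file might look like the sketch below. The image name and port follow Langflow's published Docker image defaults at the time of writing, and the volume path is an assumption — check the Docker deployment guide for the values that match your version:

```yaml
# Minimal sketch, not a production config: verify image tag, port,
# and data path against the official Docker deployment guide.
services:
  langflow:
    image: langflowai/langflow:latest
    ports:
      - "7860:7860"       # Langflow's default UI/API port
    volumes:
      - langflow-data:/app/langflow  # persist flows across restarts
volumes:
  langflow-data:
```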
Fully-managed by DataStax
DataStax Langflow is a fully managed environment with zero setup. Developers can sign up for a free account to get started.
⭐ Stay up-to-date
Star Langflow on GitHub to be instantly notified of new releases.
👋 Contribute
We welcome contributions from developers of all levels. If you'd like to contribute, please check our contributing guidelines and help make Langflow more accessible.