🤖 Build voice-based LLM agents. Modular + open source. https://vocode.dev

vocode Python SDK

Install:

pip install vocode
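The example below talks to three hosted services: Deepgram for transcription, an LLM provider for the agent, and Azure for speech synthesis, so each needs credentials. How vocode picks these up depends on the SDK version; the variable names below are illustrative assumptions, not confirmed vocode configuration — check the repo for the actual mechanism.

```shell
# Hypothetical credential setup -- variable names are illustrative,
# not confirmed vocode configuration.
export DEEPGRAM_API_KEY="..."      # speech-to-text (DeepgramTranscriberConfig)
export AZURE_SPEECH_KEY="..."      # text-to-speech (AzureSynthesizerConfig)
export AZURE_SPEECH_REGION="..."   # Azure Speech resource region
export OPENAI_API_KEY="..."        # LLM agent (LLMAgentConfig)
```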
import asyncio
import signal

from vocode.conversation import Conversation
from vocode.helpers import create_microphone_input_and_speaker_output
from vocode.models.transcriber import DeepgramTranscriberConfig
from vocode.models.agent import LLMAgentConfig
from vocode.models.synthesizer import AzureSynthesizerConfig

if __name__ == "__main__":
    # Use the first available microphone and speaker on the machine
    microphone_input, speaker_output = create_microphone_input_and_speaker_output(
        use_first_available_device=True
    )

    conversation = Conversation(
        input_device=microphone_input,
        output_device=speaker_output,
        # Build the transcriber config from the microphone's audio settings
        transcriber_config=DeepgramTranscriberConfig.from_input_device(microphone_input),
        # The prompt preamble sets the agent's persona
        agent_config=LLMAgentConfig(
            prompt_preamble="The AI is having a pleasant conversation about life."
        ),
        # Build the synthesizer config from the speaker's audio settings
        synthesizer_config=AzureSynthesizerConfig.from_output_device(speaker_output),
    )
    # Ctrl-C ends the conversation gracefully instead of killing the process
    signal.signal(signal.SIGINT, lambda _0, _1: conversation.deactivate())
    asyncio.run(conversation.start())
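The SIGINT handler on the last lines is what lets Ctrl-C end the conversation cleanly rather than tearing down the audio devices mid-stream. The same pattern can be sketched without vocode — the `Conversation` class below is a stand-in written for illustration, not the SDK's implementation:

```python
import asyncio
import signal


class Conversation:
    """Stand-in for a long-running task that can be stopped externally."""

    def __init__(self):
        self.active = True

    def deactivate(self):
        # Flip the flag the run loop polls; safe to call from a signal handler
        self.active = False

    async def start(self):
        # Run until deactivate() is called, yielding to the event loop each tick
        while self.active:
            await asyncio.sleep(0.01)


conversation = Conversation()
# Route Ctrl-C to deactivate() instead of raising KeyboardInterrupt
signal.signal(signal.SIGINT, lambda signum, frame: conversation.deactivate())


async def main():
    task = asyncio.create_task(conversation.start())
    await asyncio.sleep(0.05)
    # Simulate the user pressing Ctrl-C so the script exits on its own
    signal.raise_signal(signal.SIGINT)
    await task


asyncio.run(main())
print("conversation ended:", not conversation.active)
```

The handler only sets a flag; the run loop notices it on its next iteration and returns normally, which is why `asyncio.run` completes instead of the process dying with a traceback.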