🤖 Build voice-based LLM agents. Modular + open source. https://vocode.dev

# vocode Python SDK

```bash
pip install vocode
```
```python
import asyncio
import signal

from vocode.conversation import Conversation
from vocode.helpers import create_microphone_input_and_speaker_output
from vocode.models.transcriber import DeepgramTranscriberConfig
from vocode.models.agent import LLMAgentConfig
from vocode.models.synthesizer import AzureSynthesizerConfig

if __name__ == "__main__":
    # Use the first available microphone and speaker on this machine
    microphone_input, speaker_output = create_microphone_input_and_speaker_output(
        use_first_available_device=True
    )

    conversation = Conversation(
        input_device=microphone_input,
        output_device=speaker_output,
        # Transcribe microphone audio with Deepgram
        transcriber_config=DeepgramTranscriberConfig.from_input_device(microphone_input),
        # Drive the conversation with an LLM, seeded by a prompt preamble
        agent_config=LLMAgentConfig(
            prompt_preamble="The AI is having a pleasant conversation about life."
        ),
        # Synthesize the agent's replies with Azure text-to-speech
        synthesizer_config=AzureSynthesizerConfig.from_output_device(speaker_output),
    )
    # Shut the conversation down cleanly on Ctrl+C
    signal.signal(signal.SIGINT, lambda _signum, _frame: conversation.deactivate())
    asyncio.run(conversation.start())
```
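
The `signal.signal(...)` line in the quickstart is a common pattern for stopping an asyncio program gracefully: the SIGINT handler flips a flag, and the running loop notices it and exits instead of being killed mid-iteration. A minimal stdlib-only sketch of that pattern follows; the `Worker` class and its method names are illustrative stand-ins, not part of the vocode API.

```python
import asyncio
import signal


class Worker:
    """Stand-in for a long-running conversation-style loop."""

    def __init__(self):
        self.active = True

    def deactivate(self):
        # Called from the SIGINT handler: only flips a flag, which is safe
        # to do from a signal handler (no awaits, no loop interaction).
        self.active = False

    async def start(self):
        # Main loop: keep working until deactivated. The tick cap just
        # keeps this demo finite; a real loop would run until deactivated.
        ticks = 0
        while self.active and ticks < 3:
            ticks += 1
            await asyncio.sleep(0.01)
        return ticks


def run():
    worker = Worker()
    # On Ctrl+C, request a clean stop instead of raising KeyboardInterrupt
    signal.signal(signal.SIGINT, lambda _signum, _frame: worker.deactivate())
    return asyncio.run(worker.start())


if __name__ == "__main__":
    run()
```

Because the handler does nothing but set a flag, it avoids the pitfalls of doing real work inside a signal handler; the event loop stays in control of when the shutdown actually happens.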