🤖 Build voice-based LLM agents. Modular + open source. https://vocode.dev

# vocode Python SDK

Install the package:

```bash
pip install vocode
```
Quickstart: a streaming conversation using your default microphone and speaker.

```python
import asyncio
import signal

from vocode.conversation import Conversation
from vocode.helpers import create_microphone_input_and_speaker_output
from vocode.streaming.models.transcriber import DeepgramTranscriberConfig
from vocode.streaming.models.agent import LLMAgentConfig
from vocode.streaming.models.synthesizer import AzureSynthesizerConfig

if __name__ == "__main__":
    # Pick the first available microphone and speaker on this machine.
    microphone_input, speaker_output = create_microphone_input_and_speaker_output(
        use_first_available_device=True
    )

    conversation = Conversation(
        input_device=microphone_input,
        output_device=speaker_output,
        # Transcribe speech with Deepgram, configured from the microphone's
        # audio settings.
        transcriber_config=DeepgramTranscriberConfig.from_input_device(microphone_input),
        # The LLM agent is steered by a short prompt preamble.
        agent_config=LLMAgentConfig(
            prompt_preamble="The AI is having a pleasant conversation about life."
        ),
        # Synthesize replies with Azure, configured from the speaker's
        # audio settings.
        synthesizer_config=AzureSynthesizerConfig.from_output_device(speaker_output),
    )
    # Shut down cleanly on Ctrl-C instead of killing the event loop mid-turn.
    signal.signal(signal.SIGINT, lambda _0, _1: conversation.deactivate())
    asyncio.run(conversation.start())
```
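The `signal.signal(...)` line in the quickstart follows a common asyncio shutdown pattern: the signal handler only flips a flag, and the running loop checks that flag on each iteration and exits on its own. A minimal self-contained sketch of the same pattern, with a toy `Worker` standing in for the conversation (all names here are illustrative, not part of the vocode API):

```python
import asyncio
import signal


class Worker:
    """Toy stand-in for a long-running conversation loop."""

    def __init__(self):
        self.active = True
        self.ticks = 0

    def deactivate(self):
        # Called from the signal handler: only flips a flag, which is safe
        # to do from handler context. The async loop notices it and returns.
        self.active = False

    async def start(self):
        while self.active:
            self.ticks += 1
            await asyncio.sleep(0)  # yield to the event loop each "turn"


worker = Worker()
# Same shape as the quickstart: a lambda that ignores (signum, frame).
signal.signal(signal.SIGINT, lambda _0, _1: worker.deactivate())


async def main():
    task = asyncio.create_task(worker.start())
    await asyncio.sleep(0)              # let the worker run at least one turn
    signal.raise_signal(signal.SIGINT)  # simulate the user pressing Ctrl-C
    await task                          # the loop sees active=False and exits


asyncio.run(main())
# Restore the default Ctrl-C behavior once we're done.
signal.signal(signal.SIGINT, signal.default_int_handler)
```

The point of the pattern is that the handler never touches the event loop directly; it just requests a stop, and the coroutine finishes its current iteration before returning, so `asyncio.run` unwinds normally.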