
Mimic 3

(Image: Mimic 3 running on the Mark II)

A fast and local neural text to speech system for Mycroft and the Mark II.

Use Cases

Dependencies

Mimic 3 requires:

Installation

eSpeak

Some voices depend on eSpeak-ng, specifically libespeak-ng.so. For those voices, make sure that libespeak-ng is installed with:

sudo apt-get install libespeak-ng1

Mycroft TTS Plugin

Install the plugin:

mycroft-pip install plugin-tts-mimic3[all]

Enable the plugin in your mycroft.conf file:

mycroft-config set tts.module mimic3_tts_plug

See the plugin's documentation for more options.

Docker image

A pre-built Docker image is available for the following platforms:

  • linux/amd64
    • For desktops and laptops (x86_64 CPUs)
  • linux/arm64
  • linux/arm/v7
    • For Raspberry Pi 1/2/3/4 and Zero 2 with 32-bit Pi OS

Install/update with:

docker pull mycroftai/mimic3

Once installed, the following convenience scripts can be used to run it:

* [`mimic3`](docker/mimic3)
* [`mimic3-server`](docker/mimic3-server)
* [`mimic3-download`](docker/mimic3-download)

Or you can manually run the web server with:

docker run \
       -it \
       -p 59125:59125 \
       -v "${HOME}/.local/share/mimic3:/home/mimic3/.local/share/mimic3" \
       'mycroftai/mimic3'

Debian Package

Grab the Debian package from the latest release for your platform:

  • mimic3-tts_<version>_amd64.deb
    • For desktops and laptops (x86_64 CPUs)
  • mimic3-tts_<version>_arm64.deb
  • mimic3-tts_<version>_armhf.deb
    • For Raspberry Pi 1/2/3/4 and Zero 2 with 32-bit Pi OS

Once downloaded, install the package with:

sudo apt install ./mimic3-tts_<version>_<platform>.deb

Once installed, the following commands will be available:

* `mimic3`
* `mimic3-server`
* `mimic3-download`

Using pip

Install the command-line tool:

pip install mimic3-tts[all]

Once installed, the following commands will be available:

* `mimic3`
* `mimic3-download`

Install the HTTP web server:

pip install mimic3-http[all]

Once installed, the following command will be available:

* `mimic3-server`

Language support can be selectively installed by replacing `all` with:

  • de - German
  • es - Spanish
  • fr - French
  • it - Italian
  • nl - Dutch
  • ru - Russian
  • sw - Kiswahili

Excluding `[...]` entirely will install support for English only. For example, `pip install mimic3-tts[de,nl]` installs German and Dutch support in addition to English.

From Source

Clone the repository:

git clone https://github.com/MycroftAI/mimic3.git

Run the install script:

cd mimic3/
./install.sh

A virtual environment will be created in mimic3/.venv and each of the Python modules will be installed in editable mode (pip install -e).

Once installed, the following commands will be available in .venv/bin:

* `mimic3`
* `mimic3-server`
* `mimic3-download`

Voice Keys

Mimic 3 references voices with the format:

  • <language>_<region>/<dataset>_<quality> for single speaker voices, and
  • <language>_<region>/<dataset>_<quality>#<speaker> for multi-speaker voices
    • <speaker> can be a name or number starting at 0
    • Speaker names come from a voice's speakers.txt file
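To make the key format concrete, here is a small Python sketch that splits a voice key into its parts. The `parse_voice_key` helper is hypothetical, written for illustration only; it is not part of Mimic 3:

```python
def parse_voice_key(key: str) -> dict:
    """Split a Mimic 3 voice key into its parts.

    Expected format: <language>_<region>/<dataset>_<quality>[#<speaker>]
    (hypothetical helper for illustration; not part of Mimic 3)
    """
    # Separate the optional speaker suffix after '#'
    voice, _, speaker = key.partition("#")
    # Locale (e.g. en_US) comes before '/', dataset_quality after
    locale, _, rest = voice.partition("/")
    language, _, region = locale.partition("_")
    # Quality is the part after the last '_' (datasets may contain '-')
    dataset, _, quality = rest.rpartition("_")
    return {
        "language": language,
        "region": region,
        "dataset": dataset,
        "quality": quality,
        "speaker": speaker or None,
    }
```

For example, `parse_voice_key("en_US/cmu-arctic_low#slt")` separates the `cmu-arctic` dataset, `low` quality, and `slt` speaker.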

(Image: parts of a Mimic 3 voice key)

For example, the default Alan Pope voice key is en_UK/apope_low. The CMU Arctic voice contains multiple speakers, with a commonly used voice being en_US/cmu-arctic_low#slt.

Voices are automatically downloaded from GitHub and stored in ${HOME}/.local/share/mimic3 (technically ${XDG_DATA_HOME}/mimic3). You can also manually download them.
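The storage location above follows the XDG convention, so the voices directory can be computed as in this Python sketch (`mimic3_voices_dir` is a hypothetical helper name, shown only to illustrate the lookup order):

```python
import os
from pathlib import Path


def mimic3_voices_dir() -> Path:
    """Return the directory where Mimic 3 stores downloaded voices.

    Uses $XDG_DATA_HOME if set, falling back to ~/.local/share.
    (Hypothetical helper for illustration; not part of Mimic 3.)
    """
    data_home = os.environ.get("XDG_DATA_HOME") or str(Path.home() / ".local" / "share")
    return Path(data_home) / "mimic3"
```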

Running

Command-Line Tools

The mimic3 command can be used to synthesize audio on the command line:

mimic3 --voice 'en_UK/apope_low' 'My hovercraft is full of eels.' > hovercraft_eels.wav

See voice keys for how to reference voices and speakers.

See mimic3 --help or the CLI documentation for more details.

Downloading Voices

Mimic 3 automatically downloads voices when they're first used, but you can manually download them too with mimic3-download.

For example:

mimic3-download 'en_US/*'

will download all U.S. English voices to ${HOME}/.local/share/mimic3.

See mimic3-download --help for more options.

Web Server and Client

Start a web server with mimic3-server and visit http://localhost:59125 to view the web UI.

(Image: screenshot of the web interface)

The following endpoints are available:

  • /api/tts
    • POST text or SSML and receive WAV audio back
    • Use ?voice= to select a different voice/speaker
    • Set Content-Type to application/ssml+xml (or use ?ssml=1) for SSML input
  • /api/voices
    • Returns a JSON list of available voices
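The /api/tts endpoint can be called from any HTTP client. The sketch below builds such a request with Python's standard library; the `tts_request` helper is hypothetical, and actually sending the request assumes a server running on localhost:59125:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen


def tts_request(text: str, voice: str, base_url: str = "http://localhost:59125") -> Request:
    """Build a POST request for the /api/tts endpoint.

    The server responds with WAV audio for the posted text.
    (Hypothetical helper for illustration; not part of Mimic 3.)
    """
    # Voice/speaker selection goes in the ?voice= query parameter
    params = urlencode({"voice": voice})
    return Request(
        f"{base_url}/api/tts?{params}",
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )


# With a server running (not done here):
# wav_bytes = urlopen(tts_request("Hello world.", "en_UK/apope_low")).read()
```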

An OpenAPI test page is also available at http://localhost:59125/openapi

See mimic3-server --help or the web server documentation for more details.

Web Client

The mimic3 program provides an interface to the Mimic 3 web server when the --remote option is given.

Assuming you have started mimic3-server and can access http://localhost:59125, then:

mimic3 --remote --voice 'en_UK/apope_low' 'My hovercraft is full of eels.' > hovercraft_eels.wav

If your server is somewhere besides localhost, use mimic3 --remote <URL> ...

See mimic3 --help for more options.

CUDA Acceleration

If you have a GPU with support for CUDA, you can accelerate synthesis with the --cuda flag when running mimic3 or mimic3-server. This requires you to install the onnxruntime-gpu Python package.

Using nvidia-docker is highly recommended. See Dockerfile.gpu for an example of how to build a compatible container.

MaryTTS Compatibility

Use the Mimic 3 web server as a drop-in replacement for MaryTTS, for example with Home Assistant.

Make sure to use a compatible voice key like en_UK/apope_low.

For Mycroft, you can use this instead of the plugin by running:

mycroft-config edit user

and then adding the following:

"tts": {
    "module": "marytts",
    "marytts": {
        "url": "http://localhost:59125",
        "voice": "en_UK/apope_low"
    }
}

SSML

A subset of SSML (Speech Synthesis Markup Language) is supported.

For example:

<speak>
  <voice name="en_UK/apope_low">
    <s>
      Welcome to the world of speech synthesis.
    </s>
  </voice>
  <break time="3s" />
  <voice name="en_US/cmu-arctic_low#slt">
    <s>
      <prosody volume="soft" rate="150%">
        This is a <say-as interpret-as="number" format="ordinal">2</say-as> voice.
      </prosody>
    </s>
  </voice>
</speak>

will speak the two sentences with different voices and a 3-second pause in between. The second sentence will also have the number "2" pronounced as "second" (ordinal form).
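Because SSML is XML, the input must at least be well-formed. A quick way to sanity-check a document before sending it, using only Python's standard library (this checks well-formedness only, not which SSML tags Mimic 3 actually supports):

```python
import xml.etree.ElementTree as ET

# Miniature version of the example above
ssml = (
    '<speak>'
    '<voice name="en_UK/apope_low"><s>Hello.</s></voice>'
    '<break time="3s" />'
    '</speak>'
)

root = ET.fromstring(ssml)  # raises ParseError on malformed input
voices = [v.get("name") for v in root.iter("voice")]
```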

SSML <say-as> support varies between voice types:

  • gruut
  • eSpeak-ng
  • Character-based voices do not currently support <say-as>

License

See the LICENSE file.