Text-To-Speech
Text-To-Speech (TTS) is the process of synthesizing audio from text. Mycroft uses our own TTS engines by default; however, we also support a range of third-party services.
Mycroft has two open source TTS engines.
Mimic 1 is a fast, lightweight engine based on Carnegie Mellon University's Flite (Festival Lite). While the original Mimic may sound more robotic, it can synthesize speech directly on your device.
Mimic 2 is an implementation of Tacotron speech synthesis. It is a fork of Keith Ito's Tacotron with additional tooling and code enhancements. Mimic 2 provides a much more natural sounding voice, but requires significant processing power to do so and is therefore cloud-based.
The engine that will be used depends on the voice selected in your Mycroft Home account settings.
Currently:
British Male is Mimic 1
American Female is Mimic 1
American Male is Mimic 2
Google Voice uses the Google Translate TTS API.
As Mimic 1 voices can be synthesized on device, the British Male voice will be used anytime the device cannot reach your preferred TTS service. This allows Mycroft to continue to speak even if it is not connected to a network.
eSpeak is a multi-lingual software speech synthesizer for Linux and Windows. It uses a "formant synthesis" method, which allows many languages to be provided in a small size. The speech is clear and can be used at high speeds, but is not as natural or smooth as larger synthesizers based on human speech recordings.
First, ensure that the espeak package is installed on your system.
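On Debian or Ubuntu based systems this can be done with the command below; the package name may differ on other distributions.

```bash
sudo apt-get install espeak
```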
To our existing configuration values we will add the following:
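As a minimal sketch, the tts block in mycroft.conf might look like the following; the lang and voice values shown are illustrative and should be chosen from the voices eSpeak provides on your system.

```json
{
  "tts": {
    "module": "espeak",
    "espeak": {
      "lang": "english-us",
      "voice": "m1"
    }
  }
}
```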
Added in mycroft-core v21.2.2
You can further customize the amplitude, gap, capital, pitch and speed of espeak voices by adding them to your TTS configuration parameters.
Example Config:
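For example (the numeric values below are purely illustrative, not recommendations):

```json
{
  "tts": {
    "module": "espeak",
    "espeak": {
      "lang": "english-us",
      "voice": "m1",
      "amplitude": "150",
      "gap": "5",
      "capital": "20",
      "pitch": "70",
      "speed": "150"
    }
  }
}
```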
For more information on the values for these parameters, see espeak --help.
The multilingual open-source MARY text-to-speech platform. MaryTTS is a client-server system written in pure Java, so it runs on many platforms.
To our existing configuration values we will add the following:
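A minimal sketch, assuming a MaryTTS server reachable at the default port 59125; replace the url and voice with the values for your own server.

```json
{
  "tts": {
    "module": "marytts",
    "marytts": {
      "url": "http://localhost:59125",
      "voice": "cmu-slt-hsmm"
    }
  }
}
```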
For Amazon Polly, you will need to take note of your private "Access Key ID" and "Secret Access Key" from your AWS account.
Then, install the boto3 Python module in the Mycroft virtual environment using one of the commands below.
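For example, assuming the mycroft-pip helper that ships with mycroft-core is on your PATH:

```bash
mycroft-pip install boto3
```

or, activating the virtual environment manually (the path shown is an assumption and depends on how and where Mycroft was installed):

```bash
source ~/mycroft-core/.venv/bin/activate
pip install boto3
```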
To our existing configuration values we will add the following:
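A sketch of the resulting tts block; the exact key names should be checked against the current mycroft.conf reference, and the placeholder credentials are the Access Key ID and Secret Access Key noted above.

```json
{
  "tts": {
    "module": "polly",
    "polly": {
      "voice": "Matthew",
      "region": "us-east-1",
      "access_key_id": "YOUR_ACCESS_KEY_ID",
      "secret_access_key": "YOUR_SECRET_ACCESS_KEY"
    }
  }
}
```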
If the voice, region, and engine attributes are omitted, the defaults of Matthew, us-east-1, and standard will be used. This is an English (US) voice.
Google Translate's text-to-speech API.
IBM keeps a log of all requests in the lite plan unless you turn it off explicitly by setting "X-Watson-Learning-Opt-Out" to true. We have set Mycroft to Opt-Out by default, so if you want to share data with IBM then you must set this to false.
Coqui TTS is an actively maintained fork of the Mozilla TTS project. A Coqui TTS server can be run locally without internet connection.
Coqui TTS is based on Python 3, so it's recommended to set up a new virtual environment (venv) for the TTS server.
Then within that environment install the TTS server.
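For example (the venv location is an arbitrary choice; the TTS package provides the tts and tts-server commands used below):

```bash
# create and activate a dedicated virtual environment
python3 -m venv ~/coqui-tts
source ~/coqui-tts/bin/activate

# install Coqui TTS, which includes the tts and tts-server commands
pip install TTS
```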
To run the server we need to know two things:
Which TTS model to use.
Whether we have a CUDA-enabled GPU. Synthesizing speech is significantly faster on a CUDA-enabled GPU than on a CPU.
Running tts --list_models within the venv lists the TTS models available in the current release.
Within the venv we can now start the TTS server by running:
Example commands:
English: tts-server --use_cuda=true --model_name tts_models/en/ek1/tacotron2
German: tts-server --use_cuda=true --model_name tts_models/de/thorsten/tacotron2-DCA
By default a Coqui TTS server uses the best vocoder for the selected TTS model. However, you can override the default using the --vocoder_name parameter when starting your server.
Once the TTS server is running you can test it by opening http://localhost:5002 in your browser and trying to synthesize a test sentence.
See the plugin's GitHub repository for suggested configurations.
Install the speech-dispatcher package using your system's package manager. For example: sudo apt-get install speech-dispatcher
Speech services from Yandex, one of the largest cloud platforms in Russia.
Register an account at Yandex.
You can activate a free trial period in the console.
Create your first "folder" in the cloud console.
To our existing configuration values we will add the following:
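A rough sketch only; the exact key names and available voices should be checked against the Yandex TTS documentation and the mycroft.conf reference.

```json
{
  "tts": {
    "module": "yandex",
    "yandex": {
      "lang": "ru-RU",
      "voice": "oksana",
      "api_key": "YOUR_API_KEY"
    }
  }
}
```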
Then, using the Configuration Manager we can edit the mycroft.conf file by running:
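Assuming the standard Mycroft command line tools are installed, this is typically:

```bash
mycroft-config edit user
```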
The latest installation instructions can be found in the project's own documentation.
Using the Configuration Manager we can edit the mycroft.conf file by running:
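As with the other engines, assuming the Mycroft command line tools are available:

```bash
mycroft-config edit user
```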
Produced by , it is based off Mary TTS.
The latest installation instructions can be found in the project's own documentation.
Using the Configuration Manager we can edit the mycroft.conf file by running:
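Again, assuming the Mycroft command line tools are installed:

```bash
mycroft-config edit user
```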
The Amazon Polly text-to-speech service.
Create or sign in to your Amazon Web Services (AWS) account and add the Polly service.
First, check the list of available Polly voices. Note that Polly does not provide a separate language attribute like other TTS options; the language is determined by which voice is chosen.
Finally, using the Configuration Manager we can edit the mycroft.conf file by running:
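For example:

```bash
mycroft-config edit user
```

and add the Polly tts block shown earlier.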
The Google TTS module uses the gTTS Python package, which interfaces with the Google Translate text-to-speech API. This is not intended for commercial or production usage. The service may break at any time, and you are subject to Google's Terms of Service.
Using the Configuration Manager we can edit the mycroft.conf file by running:
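For example:

```bash
mycroft-config edit user
```

A minimal sketch of the Google TTS block; check the mycroft.conf reference for optional parameters such as the language.

```json
{
  "tts": {
    "module": "google"
  }
}
```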
Create an account at IBM Cloud. Once you add the Text to Speech service to your account, you will receive an API key and unique API URL.
You can find a list of available voices in the IBM Watson Text to Speech documentation. For example, "en-US_MichaelV3Voice".
Using the Configuration Manager we can edit the mycroft.conf file by running:
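For example:

```bash
mycroft-config edit user
```

A sketch of the Watson block, assuming the key names apikey, url and voice; verify these against the mycroft.conf reference, and use the API key and API URL you received above.

```json
{
  "tts": {
    "module": "watson",
    "watson": {
      "apikey": "YOUR_API_KEY",
      "url": "YOUR_API_URL",
      "voice": "en-US_MichaelV3Voice"
    }
  }
}
```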
This TTS service requires a subscription to Microsoft Azure and the creation of a Speech resource. The free plan is more than able to handle domestic usage (5 million characters per month, or 0.5 million with neural TTS voices).
You can choose your voice from the "Voice name" column of the Azure voice list. Neural voices sound much better, but cost more.
Create a Speech resource and get a server access token.
Using the Configuration Manager we can edit the mycroft.conf file by running:
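For example:

```bash
mycroft-config edit user
```

then add the tts block recommended in the Azure plugin's documentation.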
Instructions for setting up a Mozilla TTS server are available in the Mozilla TTS GitHub repository.
Using the Configuration Manager we can edit the mycroft.conf file by running:
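For example:

```bash
mycroft-config edit user
```

A minimal sketch, assuming a Mozilla TTS server listening on the default port 5002:

```json
{
  "tts": {
    "module": "mozilla",
    "mozilla": {
      "url": "http://localhost:5002"
    }
  }
}
```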
By default the url is set to localhost, so if you are running the server on the same machine as your Mycroft instance, only the module attribute needs to be set. This can also be done with a single command:
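A sketch using the Configuration Manager's set command (assuming the mozilla module name shown above):

```bash
mycroft-config set tts.module mozilla
```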
Pretrained Coqui TTS models are available based on open voice datasets (e.g. LJSpeech, LibriTTS, Thorsten-DE, Mai). The Coqui TTS repository shows a complete list of available TTS models. Note that installation of the tts Python package does not yet work with Python 3.10. After your TTS server setup is finished, you can configure Mycroft to use it with the same configuration as Mozilla TTS.
Lifelike human digital voices from .
Using the Configuration Manager we can edit the mycroft.conf file by running:
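For example:

```bash
mycroft-config edit user
```

then add the tts block for this service as described in its documentation.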
A common high-level interface to speech synthesis from the Speech Dispatcher project.
Using the Configuration Manager we can edit the mycroft.conf file by running:
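For example:

```bash
mycroft-config edit user
```

A minimal sketch, assuming spdsay is the module name mycroft-core uses for Speech Dispatcher output; verify this against the mycroft.conf reference.

```json
{
  "tts": {
    "module": "spdsay"
  }
}
```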
Create a billing account.
Create a service account for your Mycroft instance with the editor role.
Create an API key for the service account.
Using the Configuration Manager we can edit the mycroft.conf file by running:
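For example:

```bash
mycroft-config edit user
```

then add the Yandex tts block shown earlier.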