Text-To-Speech
Text-To-Speech (TTS) is the process of synthesizing audio from text. Mycroft uses our own TTS engines by default; however, we also support a range of third-party services.
Mycroft has two open source TTS engines.
Mimic 1 is a fast, lightweight engine based on Carnegie Mellon University's FLITE software. While the original Mimic may sound more robotic, it can synthesize speech directly on your device.
Mimic 2 is an implementation of Tacotron speech synthesis. It is a fork of Keith Ito's project with additional tooling and code enhancements. Mimic 2 provides a much more natural sounding voice, however requires significant processing power to do so and is therefore cloud-based.
The engine that will be used depends on the voice selected in your Device Settings at Home.mycroft.ai.
Currently:
British Male is Mimic 1
American Female is Mimic 1
American Male is Mimic 2
Google Voice uses the Google Translate TTS API.
As Mimic 1 voices can be synthesized on device, the British Male voice will be used anytime the device cannot reach your preferred TTS service. This allows Mycroft to continue to speak even if it is not connected to a network.
A multi-lingual software speech synthesizer for Linux and Windows.
eSpeak uses a "formant synthesis" method. This allows many languages to be provided in a small size. The speech is clear, and can be used at high speeds, but is not as natural or smooth as larger synthesizers which are based on human speech recordings.
First, ensure that the espeak package is installed on your system.
Then, using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
To our existing configuration values we will add the following:
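The configuration snippet is missing here. A minimal eSpeak entry for the `tts` section of `mycroft.conf`, with illustrative `lang` and `voice` values, looks like:

```json
"tts": {
  "module": "espeak",
  "espeak": {
    "lang": "english-us",
    "voice": "m1"
  }
}
```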
Added in mycroft-core v21.2.2
You can further customize the amplitude, gap, capital, pitch and speed of espeak voices by adding them to your TTS configuration parameters.
Example Config:
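The example configuration was not preserved. A sketch with illustrative parameter values (check `espeak --help` for the valid ranges) is:

```json
"tts": {
  "module": "espeak",
  "espeak": {
    "lang": "english-us",
    "voice": "m1",
    "amplitude": "80",
    "gap": "-10",
    "capital": "2",
    "pitch": "40",
    "speed": "120"
  }
}
```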
For more information on the values for these parameters, see `espeak --help`.
The multilingual open-source MARY text-to-speech platform. MaryTTS is a client-server system written in pure Java, so it runs on many platforms.
The latest installation instructions can be found on the MaryTTS Github repository.
Using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
To our existing configuration values we will add the following:
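The snippet is missing here. Assuming a MaryTTS server running on its default port (59125), a typical entry is (the `voice` value is illustrative):

```json
"tts": {
  "module": "marytts",
  "marytts": {
    "url": "http://localhost:59125",
    "voice": "cmu-slt-hsmm"
  }
}
```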
Produced by Mivoq, FA TTS is based on MaryTTS.
The latest installation instructions can be found on the Mivoq FA TTS Github repository.
Using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
To our existing configuration values we will add the following:
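The snippet is missing here. A sketch, assuming the module name `fatts` and substituting the URL of your own FA TTS server, is:

```json
"tts": {
  "module": "fatts",
  "fatts": {
    "url": "http://localhost:59125"
  }
}
```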
Amazon Polly's text-to-speech service.
Create an AWS account and add the Polly service.
You will need to take note of your private "Access Key ID" and "Secret Access Key".
First, check the list of available voices and languages. Note that Polly does not provide a separate language attribute like other TTS options; the language is determined by which voice is chosen.
Then, install the `boto3` Python module in the Mycroft virtual environment:
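The install commands are missing here. Assuming Mycroft's `mycroft-pip` wrapper and `venv-activate.sh` helper are available (the `mycroft-core` path may differ on your system), either of the following should work:

```shell
# Option 1: use Mycroft's pip wrapper
mycroft-pip install boto3

# Option 2: activate the Mycroft venv manually, then install
source ~/mycroft-core/venv-activate.sh
pip install boto3
```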
Finally, using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
To our existing configuration values we will add the following:
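The snippet is missing here. A typical entry, substituting your own AWS credentials, is:

```json
"tts": {
  "module": "polly",
  "polly": {
    "voice": "Matthew",
    "region": "us-east-1",
    "access_key_id": "YOUR_ACCESS_KEY_ID",
    "secret_access_key": "YOUR_SECRET_ACCESS_KEY"
  }
}
```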
If the `voice`, `region`, and `engine` attributes are omitted, the defaults of `Matthew`, `us-east-1` and `standard` will be used. This is an English (US) voice.
Google Translate's text-to-speech API.
The Google TTS module uses the gTTS Python package which interfaces with the Google Translate text-to-speech API. This is not intended for commercial or production usage. The service may break at any time, and you are subject to their Terms of Service that can be found at: https://policies.google.com/terms
Using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
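The configuration snippet itself was not preserved. A minimal entry, assuming the standard module name, is:

```json
"tts": {
  "module": "google"
}
```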
Create an account at IBM.com/cloud. Once you add the TTS service to your account, you will receive an API key and a unique API URL.
You can find a list of available voices at Languages and Voices. For example, "en-US_MichaelV3Voice".
IBM keeps a log of all requests in the lite plan unless you turn it off explicitly by setting "X-Watson-Learning-Opt-Out" to true. We have set Mycroft to Opt-Out by default, so if you want to share data with IBM then you must set this to false.
Using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
To our existing configuration values we will add the following:
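The snippet is missing here. A typical entry, substituting the API key and URL from your IBM Cloud account, is:

```json
"tts": {
  "module": "watson",
  "watson": {
    "voice": "en-US_MichaelV3Voice",
    "apikey": "YOUR_API_KEY",
    "url": "YOUR_API_URL"
  }
}
```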
Note: This is a Community provided TTS plugin and is not controlled by Mycroft AI. Updates for this plugin may not have been reviewed by the Mycroft team. We strongly recommend reviewing any code you intend to install from outside Mycroft's official channels.
This TTS service requires a subscription to Microsoft Azure and the creation of a Speech resource (https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/overview#create-the-azure-resource). The free plan is more than capable of handling domestic usage (5 million characters per month, or 0.5 million with neural TTS voices).
You can choose your voice from the "Voice name" column at https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech. Neural voices sound much better, but cost more.
Create a Microsoft Azure account and get a server access token.
Using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
To our existing configuration values we will add the following:
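The snippet is missing here, and because this is a community plugin the exact schema may differ between versions. The module name and keys below are assumptions, so verify them against the plugin's own documentation:

```json
"tts": {
  "module": "azure_tts",
  "azure_tts": {
    "api_key": "YOUR_AZURE_API_KEY",
    "region": "westeurope",
    "voice": "en-US-JennyNeural"
  }
}
```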
Instructions for setting up a Mozilla TTS server are available on the project's wiki.
Using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
To our existing configuration values we will add the following:
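The snippet is missing here. A typical entry, using the default local server address, is:

```json
"tts": {
  "module": "mozilla",
  "mozilla": {
    "url": "http://0.0.0.0:5002"
  }
}
```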
By default the `url` is set to the localhost: `http://0.0.0.0:5002`. So if you are running the server on the same machine as your Mycroft instance, only the `module` attribute needs to be set. This can also be done with a single command:
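That single command was not preserved here; assuming the Configuration Manager's `set` subcommand, it would be:

```shell
mycroft-config set tts.module mozilla
```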
Coqui TTS is an actively maintained fork of the Mozilla TTS project. A Coqui TTS server can be run locally without internet connection.
Pretrained TTS models are available based on open voice datasets (eg. LJSpeech, LibriTTS, Thorsten-DE, Mai, ...). The Coqui release page shows a complete list of available TTS models.
Coqui TTS is based on Python 3, so it is recommended to set up a new virtual environment (venv) for the TTS server. Then, within that environment, install the TTS server.
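The commands are missing here. A sketch using the standard `venv` module (the environment path is illustrative):

```shell
# Create and activate a dedicated virtual environment
python3 -m venv ~/coqui-tts-venv
source ~/coqui-tts-venv/bin/activate

# Install the Coqui TTS package, which provides tts and tts-server
pip install TTS
```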
Installation of the `TTS` Python package does not yet work with Python 3.10. See the Coqui TTS issues page for more information.
To run the server we need to know two things:
Whether we have a CUDA enabled GPU. Synthesizing voice is significantly faster when run on a CUDA enabled GPU compared to a CPU.
Which TTS model to use.
Running `tts --list_models` within the venv shows the TTS models available in the current release.
Example output:
Within the venv we can now start the TTS server by running:
Example commands:
English: `tts-server --use_cuda=true --model_name tts_models/en/ek1/tacotron2`
German: `tts-server --use_cuda=true --model_name tts_models/de/thorsten/tacotron2-DCA`
By default a Coqui TTS server uses the best vocoder for the selected TTS model. However, you can override the default using the `--vocoder_name` parameter when starting your server.
Once the TTS server is running you can test it by opening http://localhost:5002 in your browser and synthesizing a test sentence.
After your TTS server setup is finished you can configure Mycroft to use it with the same configuration as Mozilla TTS.
Note: This is a Community provided TTS plugin and is not controlled by Mycroft AI. The code in this plugin has not been reviewed by the Mycroft team. We strongly recommend reviewing any code you intend to install from outside Mycroft's official channels.
Lifelike human digital voices from ResponsiveVoice.org.
Using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
See the plugin's Github repository for suggested configurations:
https://github.com/OpenVoiceOS/ovos-tts-plugin-responsivevoice#configuration
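As a starting point before consulting the repository above, a minimal entry might look like the following; the module name is an assumption and should be verified against the plugin's README:

```json
"tts": {
  "module": "ovos-tts-plugin-responsivevoice"
}
```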
A common high-level interface to speech synthesis from Free(B)Soft.
Install the `speech-dispatcher` package using your system's package manager. For example: `sudo apt-get install speech-dispatcher`.
Using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
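The configuration snippet itself was not preserved. Assuming the module name `spdsay`, a minimal entry is:

```json
"tts": {
  "module": "spdsay"
}
```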
Speech services from Yandex, one of the largest cloud platforms in Russia.
Register an account at Yandex.
Create billing account: https://cloud.yandex.com/docs/billing/quickstart/#create_billing_account
You can activate a free trial period in the console.
Create first "folder" in cloud.
Create a service account for your Mycroft instance with the editor role: https://cloud.yandex.com/docs/iam/operations/sa/create
Create API key for service account: https://cloud.yandex.com/docs/iam/operations/api-key/create
Using the Configuration Manager, we can edit the `mycroft.conf` file by running `mycroft-config edit user`.
To our existing configuration values we will add the following:
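The snippet is missing here. A sketch, substituting your service account's API key (the `lang` and `voice` values are illustrative):

```json
"tts": {
  "module": "yandex",
  "yandex": {
    "lang": "en-US",
    "voice": "oksana",
    "api_key": "YOUR_API_KEY"
  }
}
```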