Audio Service

The audio service handles playback and queueing of tracks.

The mycroft-core distribution of Mycroft includes a Playback Skill that controls playback once it has begun. Your Skill therefore only needs to start playback; pausing, resuming, and stopping can then be handled through the Playback Skill.

How to set up the Audio Service

First, import the AudioService class.

from mycroft.skills.audioservice import AudioService

Then in the initialize() method of your Skill, instantiate an AudioService object:

    def initialize(self):
        self.audio_service = AudioService(self.bus)

        # Other initialization code
        [...]

Starting playback

Once the AudioService instance is created, you can start playback by calling the play() method with a track URI:

        self.audio_service.play('file:///path/to/my/track.mp3')

or with a list of tracks:

        self.audio_service.play(['file:///path/to/my/track.mp3', 'http://tracks-online.com/my/track.mp3'])

The play() method takes an optional second argument used to further process the user's Utterance. Currently this can only be used to select a backend (that is, where the audio should be sent), but in the future it will be able to handle requests like:

Hey Mycroft, play Hello Nasty by the Beastie Boys at half volume. We don't want to wake the neighbours

To use this feature, the Utterance received from the intent service must be passed along:

    def play_playlist_handler(self, message):
        # PLAYLIST here stands for a list of track URIs
        self.audio_service.play(PLAYLIST, message.data['utterance'])
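Under the hood, the AudioService class does not play audio itself; it forwards requests to the audio service over the message bus. The sketch below is a simplified stand-in (FakeBus, Message, and SimpleAudioService are illustrative, not the real mycroft-core classes, and the mycroft.audio.service.* message type names should be checked against your core version) showing how play/pause/resume/stop calls map to bus messages:

```python
# Hedged sketch: how AudioService-style playback control maps to
# message-bus messages. FakeBus stands in for the real Mycroft
# MessageBus client.

class FakeBus:
    """Minimal stand-in for the Mycroft MessageBus client."""
    def __init__(self):
        self.emitted = []

    def emit(self, message):
        self.emitted.append(message)


class Message:
    """Simplified version of a bus Message: a type plus a data dict."""
    def __init__(self, msg_type, data=None):
        self.msg_type = msg_type
        self.data = data or {}


class SimpleAudioService:
    """Sketch of the playback-control surface of AudioService."""
    def __init__(self, bus):
        self.bus = bus

    def play(self, tracks, utterance=''):
        # Accept a single URI or a list of URIs, as the docs describe
        if isinstance(tracks, str):
            tracks = [tracks]
        self.bus.emit(Message('mycroft.audio.service.play',
                              {'tracks': tracks, 'utterance': utterance}))

    def pause(self):
        self.bus.emit(Message('mycroft.audio.service.pause'))

    def resume(self):
        self.bus.emit(Message('mycroft.audio.service.resume'))

    def stop(self):
        self.bus.emit(Message('mycroft.audio.service.stop'))


bus = FakeBus()
audio = SimpleAudioService(bus)
audio.play('file:///path/to/my/track.mp3')
audio.pause()
audio.resume()
audio.stop()
print([m.msg_type for m in bus.emitted])
```

Because every control action is just a bus message, the Playback Skill (or any other component listening on the bus) can manage playback that your Skill started.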

More technical information

The backends

The default backend is still mpg123 for mp3 files; it is very limited, but it is the most generally available across multiple platforms.

Also included in this release are:

  • VLC (a very general purpose media player)

  • mopidy (a common audio server in the Raspberry Pi community)

  • chromecast (experimental)

These haven't been extensively tested on the Mark 1 yet.
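When an Utterance is passed to play(), backend selection amounts to checking whether the user named one of the configured backends; otherwise the default is used. The helper below is an illustrative sketch of that matching logic (the function name and backend list are assumptions, not the actual mycroft-core implementation):

```python
# Hedged sketch of utterance-based backend selection: pick the backend
# whose name appears in the user's utterance, else fall back to the
# default. Backend names mirror the list above.

def select_backend(utterance, backends, default='mpg123'):
    """Return the backend mentioned in the utterance, or the default."""
    utterance = utterance.lower()
    for name in backends:
        if name in utterance:
            return name
    return default


backends = ['vlc', 'mopidy', 'chromecast']
print(select_backend('play my jazz playlist on vlc', backends))  # vlc
print(select_backend('play my jazz playlist', backends))         # mpg123
```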

PulseAudio features

The audio service hooks into the PulseAudio controls and can mute sound streams outside Mycroft's control. This is deactivated by default, but can be enabled by editing the configuration found in mycroft/configuration/mycroft.conf:

  "play_wav_cmdline": "paplay %1 --stream-name=mycroft-voice",
  "Audio": {
    "pulseaudio": "mute"
  }



Last updated 4 years ago


See the Audioservice Plugins documentation for information about configuring supported URIs.

More information on AudioService methods can be found in the Mycroft Technical Documentation.