Conversational Context

A Skill can add context to the Intent Parser to create more natural interaction with Mycroft.

NOTE: Conversational context is currently only available with the Adapt Intent Parser, and is not yet available for Padatious intents.

User: How tall is John Cleese?
Mycroft: John Cleese is 196 centimeters
User: Where's he from?
Mycroft: He's from England

Context is added manually by the Skill creator using either the self.set_context() method or the @adds_context() decorator.

Consider the following intent handlers:

    @intent_handler(IntentBuilder().require('PythonPerson').require('Length'))
    def handle_length(self, message):
        # length_dict is assumed to map each Monty Python member to their height
        python = message.data.get('PythonPerson')
        self.speak('{} is {} cm tall'.format(python, length_dict[python]))

    @intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
    def handle_from(self, message):
        # from_dict is assumed to map each Monty Python member to where they are from
        python = message.data.get('PythonPerson')
        self.speak('{} is from {}'.format(python, from_dict[python]))

To interact with the above handlers, the user would need to say:

User: How tall is John Cleese?
Mycroft: John Cleese is 196 centimeters
User: Where is John Cleese from?
Mycroft: He's from England

To allow a more natural response, the handlers can be updated to tell Mycroft which PythonPerson we're talking about, using the self.set_context() method:

    @intent_handler(IntentBuilder().require('PythonPerson').require('Length'))
    def handle_length(self, message):
        # PythonPerson can be any of the Monty Python members
        python = message.data.get('PythonPerson')
        self.speak('{} is {} cm tall'.format(python, length_dict[python]))
        self.set_context('PythonPerson', python)

    @intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
    def handle_from(self, message):
        # PythonPerson can be any of the Monty Python members
        python = message.data.get('PythonPerson')
        self.speak('He is from {}'.format(from_dict[python]))
        self.set_context('PythonPerson', python)

When either of these methods is called, the PythonPerson keyword is added to Mycroft's context. This means that if a later utterance matches Length but no PythonPerson is heard, Mycroft will assume the most recently mentioned PythonPerson. The interaction can now become the one described at the top of the page.

User: How tall is John Cleese?

Mycroft detects the Length keyword and the PythonPerson keyword

Mycroft: 196 centimeters

John Cleese is added to the current context

User: Where's he from?

Mycroft detects the WhereFrom keyword but no PythonPerson keyword. The Context Manager is activated and returns the latest entry for PythonPerson, which is John Cleese.

Mycroft: He's from England
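
A Skill can also drop a keyword from the active context once it is no longer relevant. The following is only a minimal sketch, assuming a hypothetical ChangeTopic vocabulary; the self.remove_context() method is the counterpart to self.set_context():

    @intent_handler(IntentBuilder().require('ChangeTopic'))
    def handle_change_topic(self, message):
        # Hypothetical handler: forget which Python member we were
        # talking about so later questions don't resolve to them.
        self.remove_context('PythonPerson')
        self.speak('OK, who would you like to talk about now?')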

The context isn't limited to the keywords provided by the current Skill. For example:

    @intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
    def handle_from(self, message):
        # PythonPerson can be any of the Monty Python members
        python = message.data.get('PythonPerson')
        self.speak('He is from {}'.format(from_dict[python]))
        self.set_context('PythonPerson', python)
        self.set_context('Location', from_dict[python])

This enables conversations with other Skills as well:

User: Where is John Cleese from?
Mycroft: He's from England
User: What's the weather like over there?
Mycroft: Raining and 14 degrees...
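
For the weather Skill to take advantage of this, it needs an intent that requires the Location keyword. The following is only a minimal sketch of what such a handler might look like; the ContextWeatherIntent, the WeatherKeyword vocabulary and the handler itself are hypothetical and not part of any existing Skill:

    # In a separate weather Skill. Because the intent requires 'Location',
    # it can match when the Location context keyword was set by another
    # Skill, even though the user never said the place name themselves.
    @intent_handler(IntentBuilder('ContextWeatherIntent')
                    .require('WeatherKeyword')
                    .require('Location'))
    def handle_context_weather(self, message):
        location = message.data.get('Location')
        self.speak('Fetching the weather for {}'.format(location))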

Using context to enable Intents

Context can be used to create "bubbles" of available intent handlers, ensuring that certain Intents can't be triggered unless some previous stage of the conversation has occurred. For example:

User: Hey Mycroft, bring me some Tea
Mycroft: Of course, would you like Milk with that?
User: No
Mycroft: How about some Honey?
User: All right then
Mycroft: Here you go, here's your Tea with Honey
This conversation can be implemented using the @adds_context() and @removes_context() decorators:

from adapt.intent import IntentBuilder
from mycroft import MycroftSkill, intent_handler
from mycroft.skills.context import adds_context, removes_context

class TeaSkill(MycroftSkill):
    @intent_handler(IntentBuilder('TeaIntent').require("TeaKeyword"))
    @adds_context('MilkContext')
    def handle_tea_intent(self, message):
        self.milk = False
        self.speak('Of course, would you like Milk with that?',
                   expect_response=True)

    @intent_handler(IntentBuilder('NoMilkIntent').require("NoKeyword").
                                  require('MilkContext').build())
    @removes_context('MilkContext')
    @adds_context('HoneyContext')
    def handle_no_milk_intent(self, message):
        self.speak('All right, any Honey?', expect_response=True)

    @intent_handler(IntentBuilder('YesMilkIntent').require("YesKeyword").
                                  require('MilkContext').build())
    @removes_context('MilkContext')
    @adds_context('HoneyContext')
    def handle_yes_milk_intent(self, message):
        self.milk = True
        self.speak('What about Honey?', expect_response=True)

    @intent_handler(IntentBuilder('NoHoneyIntent').require("NoKeyword").
                                  require('HoneyContext').build())
    @removes_context('HoneyContext')
    def handle_no_honey_intent(self, message):
        if self.milk:
            self.speak("Here's your Tea with a dash of Milk")
        else:
            self.speak("Here's your Tea, straight up")

    @intent_handler(IntentBuilder('YesHoneyIntent').require("YesKeyword").
                                require('HoneyContext').build())
    @removes_context('HoneyContext')
    def handle_yes_honey_intent(self, message):
        if self.milk:
            self.speak("Here's your Tea with Milk and Honey")
        else:
            self.speak("Here's your Tea with Honey")

When the Skill starts up, only the TeaIntent is available. Once it has been triggered and MilkContext has been added, the YesMilkIntent and NoMilkIntent become available, since the MilkContext is set. When a yes or no is received, the MilkContext is removed and can no longer be matched. In its place the HoneyContext is added, making the YesHoneyIntent and NoHoneyIntent available.

You can find an example Tea Skill using conversational context on Github.

As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree.
