Purpose

This document outlines the Speech EG’s commitment proposal for CES Demo 2019. It also highlights the dependencies on the Audio High Level, Application Framework, and Native App Integration teams required to deliver a successful demo that demonstrates our vision of a multi-agent architecture.

Background

The Speech EG presented the Voice Services architecture v1.2 at the AGL F2F meeting on September 11th, 2018. The architecture was actively reviewed and consensus was reached in many areas. The latest version, v1.3, incorporates all the comments.


Use cases

Supported

  • Multiple voice agents may be installed, but only one active voice agent runs on the system at a time and is triggered through the Tap-to-Talk button.

  • The Amazon team will deliver the Alexa voice agent.

  • The active voice agent will be selected through the Settings application menu on the reference AGL platform.

Out of Scope

Wake word detection will not be supported. As of today, AGL Audio 4a does not yet support a persistent audio input buffer that can be shared between multiple consumers, in this case the wake word module and the high-level voice service. We discussed the audio design needed to support wake word use cases in detail, but there is no timeline yet for this support to be baked into the AGL Audio framework.
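
For illustration, here is a minimal sketch of the capability that is missing: a single captured audio stream held in a shared buffer that several consumers (a wake word module and the high-level voice service) read independently, each at its own pace. The `SharedInputBuffer` class and its methods are hypothetical and are not part of AGL Audio 4a.

```cpp
// Hypothetical sketch of a persistent input buffer with multiple independent readers.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

class SharedInputBuffer {
public:
    // Capture side appends frames as they arrive from the microphone.
    void append(const std::vector<short>& frame) {
        samples_.insert(samples_.end(), frame.begin(), frame.end());
    }

    // Each consumer keeps its own cursor and reads without disturbing the others.
    std::vector<short> read(std::size_t& cursor, std::size_t count) {
        std::size_t end = std::min(cursor + count, samples_.size());
        std::vector<short> out(samples_.begin() + cursor, samples_.begin() + end);
        cursor = end;
        return out;
    }

private:
    std::vector<short> samples_;
};

int main() {
    SharedInputBuffer mic;
    mic.append(std::vector<short>(160));  // 10 ms of captured audio (illustrative)

    std::size_t wakeWordCursor = 0, vshlCursor = 0;
    std::cout << "wake word read " << mic.read(wakeWordCursor, 160).size() << " samples\n";
    std::cout << "VSHL read "      << mic.read(vshlCursor, 160).size()     << " samples\n";
}
```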


Major Tasks

| # | Component | Ownership |
|---|-----------|-----------|
| 1 | Voice Service High Level (VSHL) Development | Amazon, to deliver a first draft that can be open sourced and submitted to the AGL repository |
| 2 | Alexa Voice Agent Development | Amazon |
| 3 | HTML5 Test Application Development to test VSHL | Amazon |
| 4 | Native App Development and Integration with Voice Service High Level | Linux Foundation / IOT.BZH |
| 5 | Audio Input/Output Support | Linux Foundation / IOT.BZH |
| 6 | Application Framework Support | Linux Foundation / IOT.BZH |


External Dependencies

  • Applications should be able to launch themselves when they receive intents from the Voice Interaction Manager.

  • Audio High Level needs to create 4 audio roles for Alexa to do audio output.

  • Audio High Level needs to create 1 audio role for the High Level Voice Service to do audio input.

  • A Speech Chrome Application needs to be implemented to display the different dialog states (IDLE, LISTENING, THINKING, SPEAKING) of the voice agent (a minimal state-handling sketch follows this list).

  • A Template Runtime Application is needed to show the templates that are delivered as responses by each voice agent. If we can't standardize the language for these templates, then as a workaround Amazon will implement an Alexa UI Template Runtime Application that can render Alexa templates for CES Demo 2019.
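
To make the Speech Chrome dependency concrete, here is a minimal, hypothetical sketch of how the four dialog states could map to on-screen hints. The enum and the UI strings are illustrative assumptions, not an AGL or Alexa API.

```cpp
// Hypothetical mapping from voice agent dialog states to Speech Chrome UI hints.
#include <iostream>
#include <string>

enum class DialogState { Idle, Listening, Thinking, Speaking };

// Returns the hint the Speech Chrome could display for a given state.
std::string chromeHintFor(DialogState state) {
    switch (state) {
        case DialogState::Idle:      return "";              // nothing to show
        case DialogState::Listening: return "Listening...";  // user may speak
        case DialogState::Thinking:  return "Thinking...";   // request in flight
        case DialogState::Speaking:  return "Speaking...";   // TTS is playing
    }
    return "";
}

int main() {
    for (auto s : {DialogState::Idle, DialogState::Listening,
                   DialogState::Thinking, DialogState::Speaking}) {
        std::cout << static_cast<int>(s) << " -> \"" << chromeHintFor(s) << "\"\n";
    }
}
```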


Proposed Work Flows

General - At System Start Up,

  • Speech Chrome Application subscribes to OnDialogEvent = (IDLE, LISTENING, SPEAKING, THINKING) with Voice Service High Level.

  • Voice Service High Level subscribes to OnDialogEvent = (IDLE, LISTENING, SPEAKING, THINKING) with all the voice agents.

  • The Navigation Application on the system will subscribe to Navigation messages on the Voice Interaction Manager.

  • The Template Run-time Application will subscribe to Template Run-time messages on the Voice Interaction Manager.
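
The start-up subscriptions above amount to a publish/subscribe fan-out. The sketch below models it with a small, hypothetical in-process dispatcher standing in for the Voice Interaction Manager; the topic names and callback signature are assumptions for illustration only.

```cpp
// Hypothetical in-process dispatcher modelling the subscription fan-out at start-up.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

class InteractionManager {
public:
    using Handler = std::function<void(const std::string& payload)>;

    // Consumers (Speech Chrome, Navigation app, Template Runtime app) register here.
    void subscribe(const std::string& topic, Handler handler) {
        handlers_[topic].push_back(std::move(handler));
    }

    // A voice agent (or VSHL on its behalf) publishes; every subscriber is notified.
    void publish(const std::string& topic, const std::string& payload) {
        for (auto& h : handlers_[topic]) h(payload);
    }

private:
    std::map<std::string, std::vector<Handler>> handlers_;
};

int main() {
    InteractionManager vim;

    // System start-up: consumers register before any speech interaction.
    vim.subscribe("OnDialogEvent", [](const std::string& s) {
        std::cout << "[SpeechChrome] dialog state: " << s << "\n";
    });
    vim.subscribe("Navigation", [](const std::string& s) {
        std::cout << "[NavigationApp] " << s << "\n";
    });
    vim.subscribe("TemplateRuntime", [](const std::string& s) {
        std::cout << "[TemplateRuntimeApp] " << s << "\n";
    });

    // Later, an agent publishes and the subscribed consumer receives the message.
    vim.publish("OnDialogEvent", "LISTENING");
}
```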


General - Before the user starts speaking,

  • User selects Alexa from Settings.

  • User presses the Tap-to-Talk button.

  • Voice Service High Level is in the IDLE state and listens for the Tap-to-Talk signal.

  • Voice Service High Level will automatically signal the Alexa Voice Agent.

  • After a few milliseconds, the Alexa Voice Agent will publish OnDialogEvent = LISTENING.

  • Voice Service High Level will receive it and propagate the same event, OnDialogEvent = LISTENING, to the Speech Chrome App.

  • Speech Chrome App receives the event and displays a UI to indicate that the user can start speaking.

  • At this point, the user is ready to start speaking.
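
The Tap-to-Talk hand-off above can be summarized in code. The sketch below is a hypothetical model of the routing: VSHL forwards the button press to the active agent, the agent raises LISTENING, and VSHL propagates the same event to the Speech Chrome. All class and method names are illustrative, not a real AGL or Alexa interface.

```cpp
// Hypothetical model of the Tap-to-Talk routing through VSHL.
#include <functional>
#include <iostream>
#include <string>

struct SpeechChrome {
    void onDialogEvent(const std::string& state) {
        std::cout << "SpeechChrome UI shows: " << state << "\n";
    }
};

struct AlexaVoiceAgent {
    std::function<void(const std::string&)> onDialogEvent;  // wired up by VSHL
    void startListening() {
        // After a short delay the agent would open the microphone, then report:
        onDialogEvent("LISTENING");
    }
};

struct VoiceServiceHighLevel {
    AlexaVoiceAgent* activeAgent = nullptr;
    SpeechChrome* chrome = nullptr;

    void onTapToTalk() {
        if (!activeAgent) return;        // no agent selected in Settings
        activeAgent->startListening();   // signal the active agent
    }
    void onAgentDialogEvent(const std::string& state) {
        if (chrome) chrome->onDialogEvent(state);  // propagate unchanged
    }
};

int main() {
    SpeechChrome chrome;
    AlexaVoiceAgent alexa;
    VoiceServiceHighLevel vshl;

    vshl.activeAgent = &alexa;
    vshl.chrome = &chrome;
    alexa.onDialogEvent = [&](const std::string& s) { vshl.onAgentDialogEvent(s); };

    vshl.onTapToTalk();  // prints "SpeechChrome UI shows: LISTENING"
}
```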


Domain Specific Flows (Alexa Commitments)

Navigation

  • User starts speaking and says, “Alexa, navigate me to the nearest Starbucks” or “Navigate me to the nearest Starbucks.”

  • Alexa Voice Agent will call the Voice Interaction Manager’s Navigation::Publish API to publish a navigation message with the geo-code of the destination.

  • The Navigation App will receive the message and launch itself or ask Homescreen to launch it.
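
As a rough illustration of this flow, the sketch below models what Navigation::Publish would ultimately deliver to the subscribing Navigation App: a message carrying the destination geo-code. The payload shape and the handler are assumptions for illustration only.

```cpp
// Hypothetical navigation payload and subscriber-side handling.
#include <iostream>
#include <string>

struct NavigationMessage {
    double latitude;
    double longitude;
    std::string label;  // e.g. a resolved destination name (illustrative)
};

struct NavigationApp {
    bool running = false;
    void onNavigationMessage(const NavigationMessage& msg) {
        if (!running) {
            running = true;  // in AGL this would go through Homescreen / the app framework
            std::cout << "NavigationApp: launching\n";
        }
        std::cout << "NavigationApp: routing to " << msg.label << " ("
                  << msg.latitude << ", " << msg.longitude << ")\n";
    }
};

int main() {
    NavigationApp nav;
    // What Navigation::Publish would ultimately deliver to the subscriber:
    nav.onNavigationMessage({47.6205, -122.3493, "Starbucks, Seattle Center"});
}
```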

Weather

  • User starts speaking and says, “Alexa, what’s the weather?” or “What’s the weather?”

  • Alexa Voice Agent will speak the weather response via TTS.

  • Alexa Voice Agent will publish OnDialogEvent = SPEAKING so that the Speech Chrome can show the appropriate UI.

  • Alexa Voice Agent will call the Voice Interaction Manager’s TemplateRuntime::Publish API to publish a UI template.

  • The Alexa UI Template Run-time Application will receive the message, launch itself or ask Homescreen to launch it, and show the template.
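
The sketch below illustrates the last step of this flow: the Template Runtime application receiving a display-card payload. The JSON shape shown is a placeholder; the real template format is defined by the Alexa Voice Agent, not by this document.

```cpp
// Hypothetical Template Runtime handler receiving a display-card payload.
#include <iostream>
#include <string>

struct TemplateRuntimeApp {
    void onTemplate(const std::string& templateJson) {
        // A real implementation would parse the JSON and render a weather card;
        // here we only show that the app receives the payload after the agent
        // has signalled SPEAKING through OnDialogEvent.
        std::cout << "TemplateRuntimeApp: rendering template\n" << templateJson << "\n";
    }
};

int main() {
    TemplateRuntimeApp app;
    app.onTemplate(R"({"type":"WeatherTemplate","title":"Seattle","currentWeather":"54 F, light rain"})");
}
```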

Alerts

  • User starts speaking and says, “Alexa, set an alert for 1 minute from now” or “Set an alert for 1 minute from now.”

  • Alexa Voice Agent will prompt via TTS that the alert is set.

  • Alexa Voice Agent will call the Voice Interaction Manager’s Alerts::Publish API to publish the new alert state.

  • Alexa Voice Agent will play the alert audio after one minute.
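
A minimal sketch of the alert sequence follows. The alert state names and the helper function are hypothetical stand-ins for Alerts::Publish, and the one-second wait stands in for the one-minute alert used in the demo.

```cpp
// Hypothetical alert state publishing around a timer expiry.
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

// Stand-in for the Voice Interaction Manager's Alerts::Publish call.
void publishAlertState(const std::string& state) {
    std::cout << "Alerts state published: " << state << "\n";
}

int main() {
    publishAlertState("SET");                              // confirmed to the user via TTS
    std::this_thread::sleep_for(std::chrono::seconds(1));  // stands in for the one-minute timer
    publishAlertState("STARTED");                          // alert audio starts playing
    // ... user stops the alert via voice or UI ...
    publishAlertState("STOPPED");
}
```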

Online Music Playback

  • User starts speaking and says, “Alexa, play the Beatles” or “Play the Beatles.”

  • Alexa Voice Agent will create a new audio output channel to play the Beatles.
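
Before streaming, the agent would open a dedicated output channel under one of the audio roles requested from Audio High Level. The sketch below only illustrates that idea; the role name and the stream interface are assumptions, not the AGL 4a audio API.

```cpp
// Hypothetical output channel opened under an assumed "multimedia" audio role.
#include <iostream>
#include <string>
#include <vector>

struct AudioOutputChannel {
    std::string role;    // audio role this channel plays under (assumed name)
    bool open = false;   // whether the audio framework has granted the channel

    void write(const std::vector<short>& pcm) {
        if (!open) return;
        std::cout << "streaming " << pcm.size() << " samples under role '" << role << "'\n";
    }
};

int main() {
    AudioOutputChannel music;
    music.role = "multimedia";             // one of the roles requested from Audio High Level
    music.open = true;                     // in a real system, granted by the audio framework
    music.write(std::vector<short>(480));  // one 10 ms frame at 48 kHz, for illustration
}
```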

Phone Call Control (based on Alexa contacts only; not synced with local contacts)

  • User starts speaking and says, “Alexa, call Bob.”

  • Alexa Voice Agent will prompt via TTS to disambiguate the contact request.

  • Alexa Voice Agent will call the Voice Interaction Manager's Call::Publish API to publish a DIAL event.

  • The Dialer app on the AGL reference platform will pick up the event and initiate a call based on the event payload.

  • The Dialer app will call the Voice Interaction Manager's Call::Publish API to publish a CALL_ACTIVATED downstream event for the Alexa Voice Agent to update its context.
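
The sketch below models the two-way exchange described above: a DIAL event published toward the Dialer app, and a CALL_ACTIVATED downstream event published back so the agent can update its context. The event struct and handler names are illustrative assumptions.

```cpp
// Hypothetical two-way call event exchange between the agent and the Dialer app.
#include <functional>
#include <iostream>
#include <string>

struct CallEvent {
    std::string type;    // "DIAL" upstream, "CALL_ACTIVATED" downstream
    std::string callee;  // e.g. "Bob", resolved from Alexa contacts
};

struct DialerApp {
    std::function<void(const CallEvent&)> publishDownstream;  // path back to the agent
    void onCallEvent(const CallEvent& ev) {
        if (ev.type != "DIAL") return;
        std::cout << "DialerApp: dialing " << ev.callee << "\n";
        if (publishDownstream) publishDownstream(CallEvent{"CALL_ACTIVATED", ev.callee});
    }
};

struct AlexaVoiceAgent {
    void onDownstreamEvent(const CallEvent& ev) {
        std::cout << "AlexaVoiceAgent: context updated with " << ev.type << "\n";
    }
};

int main() {
    AlexaVoiceAgent alexa;
    DialerApp dialer;
    dialer.publishDownstream = [&](const CallEvent& ev) { alexa.onDownstreamEvent(ev); };

    // What Call::Publish would deliver to the Dialer after contact disambiguation:
    dialer.onCallEvent({"DIAL", "Bob"});
}
```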
