This repository has been archived by the owner on Jun 30, 2022. It is now read-only.

Darrenj/docs0502 #1242

Merged
merged 8 commits into from
May 3, 2019
62 changes: 62 additions & 0 deletions docs/advanced/assistant/underthecovers.md
@@ -0,0 +1,62 @@
## Architecture

An architecture diagram of the Virtual Assistant created through the template is shown below, along with a detailed explanation.

![Virtual Assistant Architecture](/docs/media/virtualassistant-architecture.jpg)

## Client Integration
End users can make use of the Virtual Assistant through the supported [Azure Bot Service Channels](https://docs.microsoft.com/en-us/azure/bot-service/bot-service-manage-channels?view=azure-bot-service-4.0), including WebChat, or through the Direct Line API, which provides the ability to integrate your assistant directly into a device, mobile app or any other client experience.

Device integration requires creation of a lightweight host app which runs on the device. We have successfully built native applications across multiple embedded platforms, as well as HTML5 applications.

The host app is responsible for the following capabilities; these can of course be extended depending on the device's capabilities. A minimal host-app sketch follows the list.
- Opening and closing the microphone, as indicated through the InputHint on messages returned by the Assistant
- Audio playback of responses created by the Text-to-Speech service
- Rendering of Adaptive Cards on the device through a broad range of renderers supplied with the Adaptive Cards SDK
- Processing events received from the Assistant, often to perform on device operations (e.g. change navigation destination)
- Accessing the on-device secret store to store and retrieve a token for communication with the assistant
- Integration with the Unified Speech SDK where on-device speech capabilities are required
- Interfacing with the Direct Line REST API or SDKs
- Authenticating the end user of the device and providing a unique userId to the Assistant. Microsoft has capabilities to help with this if needed.
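
As an illustration, the sketch below shows a minimal host-app message loop built on the `botframework-directlinejs` client. The `playTts`, `openMicrophone` and `handleDeviceEvent` helpers are hypothetical stand-ins for platform-specific implementations.

```typescript
import { DirectLine, Activity } from 'botframework-directlinejs';

// Hypothetical device-specific helpers; real implementations depend on the platform.
declare function playTts(ssml: string): void;                           // Text-to-Speech playback
declare function openMicrophone(): void;                                // open the device microphone
declare function handleDeviceEvent(name: string, value: unknown): void; // e.g. change navigation destination

const directLine = new DirectLine({ secret: process.env.DIRECT_LINE_SECRET as string });

directLine.activity$.subscribe((activity: Activity) => {
  if (activity.type === 'message') {
    // Play the spoken response, falling back to the display text.
    playTts(activity.speak || activity.text || '');
    // Open the microphone only when the Assistant expects a reply.
    if (activity.inputHint === 'expectingInput') {
      openMicrophone();
    }
  } else if (activity.type === 'event') {
    // Events typically drive on-device operations.
    handleDeviceEvent(activity.name, activity.value);
  }
});
```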

## Assistant Middleware

The Assistant makes use of a number of Middleware Components to process incoming messages; a registration sketch follows the list:
- Telemetry Middleware leverages Application Insights to store telemetry for incoming messages, LUIS evaluation and QnA activities. Power BI can then use this data to surface conversational insights.
- Event Processing Middleware processes events sent by the device
- Content Moderator Middleware is an optional component that uses the Content Moderator Cognitive Service to detect inappropriate / PII content
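
A minimal sketch of registering such middleware on a Bot Framework adapter, assuming the `botbuilder` and `botbuilder-applicationinsights` packages; the instrumentation key is a placeholder:

```typescript
import { BotFrameworkAdapter, TelemetryLoggerMiddleware } from 'botbuilder';
import { ApplicationInsightsTelemetryClient } from 'botbuilder-applicationinsights';

const adapter = new BotFrameworkAdapter({
  appId: process.env.MicrosoftAppId,
  appPassword: process.env.MicrosoftAppPassword
});

// Telemetry middleware: logs incoming/outgoing activities to Application Insights.
const telemetryClient = new ApplicationInsightsTelemetryClient('<instrumentation-key>');
adapter.use(new TelemetryLoggerMiddleware(telemetryClient, /* logPersonalInformation */ false));
```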

## Advanced Conversational Analytics
The Assistant is configured to collect telemetry into Application Insights. This data can be imported into a Power BI dashboard to view [advanced conversational analytics](https://aka.ms/botPowerBiTemplate).

## Dispatcher

The Dispatcher is trained across a variety of natural language data sources to provide a unified, NLU-powered dispatch capability. LUIS models from the Assistant, each configured Skill and questions from QnA Maker are all ingested as part of the dispatcher training process. This training process can also produce evaluation reports to identify confusion and overlap.

This training process creates a Dispatcher LUIS model which is then used by the Assistant to identify the component that should handle a given utterance. When a dialog is active, the Dispatcher model is only used to identify top-level intents, such as Cancel, for interruption scenarios.
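
A sketch of this routing step, assuming the `botbuilder-ai` package; the `l_Calendar` and `q_FAQ` intent names are illustrative of how dispatch prefixes its LUIS and QnA Maker sources:

```typescript
import { TurnContext } from 'botbuilder';
import { LuisRecognizer } from 'botbuilder-ai';

async function routeTurn(dispatchRecognizer: LuisRecognizer, context: TurnContext): Promise<void> {
  const result = await dispatchRecognizer.recognize(context);
  const intent = LuisRecognizer.topIntent(result);

  switch (intent) {
    case 'l_Calendar':
      // Hand the turn over to the Calendar Skill.
      break;
    case 'q_FAQ':
      // Answer from the QnA Maker knowledge base.
      break;
    default:
      // Fall through to the Assistant's own dialogs.
      break;
  }
}
```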

## Dialogs

Dialogs represent conversational topics that the Assistant can handle. The template provides `MainDialog` and `CancelDialog` dialogs, along with example `EscalateDialog` and `OnboardingDialog` dialogs.
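
A minimal sketch of how such a topic dialog is typically structured with `botbuilder-dialogs`; the dialog id and response text are illustrative:

```typescript
import { ComponentDialog, WaterfallDialog, WaterfallStepContext, DialogTurnResult } from 'botbuilder-dialogs';

class EscalateDialog extends ComponentDialog {
  constructor() {
    super('EscalateDialog');
    // A single-step waterfall that hands the user off to a human agent.
    this.addDialog(new WaterfallDialog('escalate', [this.sendEscalationMessage.bind(this)]));
  }

  private async sendEscalationMessage(step: WaterfallStepContext): Promise<DialogTurnResult> {
    await step.context.sendActivity('Our support team is available on 1-800-555-0100.');
    return step.endDialog();
  }
}
```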

## Integration

The Assistant and Skills can then make use of any APIs or data sources in the same way as any web page or service. This enables your Assistant to make use of existing capabilities and data sources as part of conversation flow.

## Authentication

The Assistant and associated Skills often need access to end-user authentication tokens in order to perform operations on behalf of the user. OAuth authentication providers are supported by the Azure Bot Service, providing the ability for you to configure providers such as Active Directory (for Office 365), Facebook or your own.

Authentication connections are created on the Azure Bot Service and the Assistant makes use of these to initiate an authentication request (generating an OAuth signin card) or retrieve a token from the Azure Bot Service provided secure token store.
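
A sketch of how a dialog requests a token through this mechanism using `OAuthPrompt` from `botbuilder-dialogs`; the `Outlook` connection name is an illustrative Authentication connection configured on the Azure Bot Service:

```typescript
import { OAuthPrompt } from 'botbuilder-dialogs';

const oauthPrompt = new OAuthPrompt('OAuthPrompt', {
  connectionName: 'Outlook',           // Authentication connection name on the Azure Bot Service
  title: 'Sign in',
  text: 'Please sign in to continue.', // rendered as an OAuth sign-in card
  timeout: 300000                      // milliseconds to wait for sign-in to complete
});

// Added to a dialog set and begun from a parent dialog, the prompt returns a
// cached token from the secure token store if one exists; otherwise it
// surfaces the sign-in card to the user.
```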

Skills can request Authentication tokens for a given user when they are activated. This request is passed as an event to the Assistant, which then uses the specific Authentication connection to surface an authentication request to the user if a token isn't found in the secure store. More detail on this is available [here](/docs/reference/skills/skilltokenflow.md).

## Linked Accounts
Linked Accounts is a supporting web application that demonstrates how a user can link their Assistant to their digital properties (e.g. Office 365, Google, etc.) on a companion device (mobile phone or website). This would be done as part of the on-boarding process and avoids authentication prompts during voice scenarios.

This integrates with the Authentication capability detailed above and provides a mechanism for a user to unlink all accounts, which can be used as part of a device *forget me* feature.

## Edge Enablement
Many assistant scenarios require cloud connectivity to access downstream APIs or data sources (e.g. Office 365, navigation data, music services, etc.). There is, however, a class of assistant scenarios, especially those running on devices that may have periods of poor connectivity, where pushing Speech, Language Processing and Dialog management onto the Edge (device) is needed.

We have a number of options to address this depending on platform and are working with initial customers to deliver this capability.
61 changes: 48 additions & 13 deletions docs/advanced/skills/addingskills.md
@@ -1,23 +1,51 @@
# ![Conversational AI Solutions](/docs/media/conversationalai_solutions_header.png)

## Pre-requisites
- TBC Skill CLI
- [Node.js](https://nodejs.org/) version 10.8 or higher
- Install the [Azure Command Line Tools (CLI)](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-windows?view=azure-cli-latest)


## Installation
To install using npm:
```bash
npm install -g botdispatch botskills
```
This will install `botdispatch` and `botskills` into your global path.
To uninstall using npm:
```bash
npm uninstall -g botdispatch botskills
```
> Your Virtual Assistant must have been deployed using the [deployment tutorial](/docs/tutorials/assistantandskilldeploymentsteps.md) before using the `botskills` tool as it relies on the Dispatch models being available and a deployed Bot for authentication connection information.

## Skill Deployment

See the [Skills Overview](/docs/README.md#skills) section for details on the Skills provided as part of the Virtual Assistant Solution Accelerator. Follow the deployment instructions required for each skill you wish to use and then return to this section to add these skills to your Virtual Assistant.

## Skill CLI

The Skill CLI automates all of the key steps required to add a Skill to your project (a manifest-retrieval sketch for step 1 follows the list):

1. Retrieve the Skill Manifest from the remote Skill through the `/api/skill/manifest` endpoint.
2. Identify which Language Models are used by the Skill and resolve the triggering utterances, either through local LU file resolution or through inline trigger utterances if requested.
3. Add a new dispatch target using the `dispatch` tool with the trigger utterances retrieved in the previous step.
4. Refresh the dispatch LUIS model with the new utterances.
5. In the case of Active Directory Authentication Providers, an authentication connection will be added to your Bot automatically and the associated Scopes added to the Azure AD application that backs your deployed Assistant.
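
For illustration of step 1, a minimal sketch of retrieving a manifest, assuming Node 18+ for the global `fetch`; the manifest fields shown are a simplified, illustrative subset of the real schema:

```typescript
// Simplified, illustrative subset of a Skill manifest.
interface SkillManifest {
  id: string;
  name: string;
  endpoint: string;
}

async function getSkillManifest(skillHost: string): Promise<SkillManifest> {
  const response = await fetch(`${skillHost}/api/skill/manifest`);
  if (!response.ok) {
    throw new Error(`Manifest request failed with status ${response.status}`);
  }
  return await response.json() as SkillManifest;
}

// Usage: getSkillManifest('https://YOUR_SKILL.azurewebsites.net');
```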

## Adding Skills to your Virtual Assistant

Run the following command to add each Skill to your Virtual Assistant. This assumes you are running the CLI within the project directory and have created your Bot through the template and therefore have a `skills.json` file present.

The `--luisFolder` parameter can be used to point the Skill CLI at the source LU files for trigger utterances. For Skills provided within this repo these can be found in the `Deployment\Resources\LU` folder of each Skill. The CLI will automatically traverse locale folder hierarchies.

```bash
botskills connect --botName YOUR_BOT_NAME --remoteManifest "http://<YOUR_SKILL_MANIFEST>.azurewebsites.net/api/skill/manifest" --luisFolder [path] --cs
```

See the [Skill CLI documentation](/lib/typescript/botskills/docs/connect-disconnect.md) for detailed usage guidance.

## Manual Authentication Connection Configuration

If a Skill requires Authentication connections to Office/Office 365, in most cases the CLI will automatically add this configuration to your Bot and the associated Azure AD Application.

If your Azure AD application allows users outside of your tenant to access the Application, this auto-provisioning isn't possible; the CLI may warn that it wasn't able to configure Scopes and will list the Scopes you should add manually. An example of this is shown below:
```
Could not configure scopes automatically. You must configure the following scopes in the Azure Portal to use this skill: User.Read, User.ReadBasic.All, Calendars.ReadWrite, People.Read
```
@@ -30,12 +58,19 @@ In this situation for Microsoft Graph based skills follow the instructions below

For Skills that require other Authentication connection configuration, please follow the skill-specific configuration information.

## Removing a Skill from your Virtual Assistant

To disconnect a skill from your Virtual Assistant, use the following command, passing the id of the Skill as per the manifest (e.g. `calendarSkill`).

```bash
botskills disconnect --skillId SKILL_ID
```

## Updating an existing Skill to reflect changes to Actions or LUIS model

> A `botskills refresh` command will be added shortly. In the meantime, run the above disconnect command and then connect the skill again.

Binary file added docs/media/skillarchitecturedispatchexample.png
Binary file modified docs/media/vabotintrocard.png
Binary file modified docs/media/virtualassistant-SkillFlow.png
7 changes: 4 additions & 3 deletions docs/readme.md
@@ -45,16 +45,17 @@ How-to guides on achieving more complex scenarios.

|Name|Description|Link|
|-------------|-----------|----|
|Under the covers|Detailed documentation covering what the template provides and how it works| [View](/docs/advanced/assistant/underthecovers.md)
|Enhancing your Assistant with additional Skills|Adding the out of the box Skills to your Virtual Assistant|[View](/docs/advanced/skills/addingskills.md)
|Migration from Enterprise Template|Guidance on how to move from an Enterprise Template based Bot to the new Template|[C#](/docs/advanced/assistant/csharp/ettovamigration.md)
|Migration from the old Virtual Assistant solution|Guidance on how to move from the original Virtual Assistant solution to the new Template|[C#](/docs/advanced/assistant/csharp/oldvatovamigration.md)
|Proactive Messaging|Adding proactive experiences to your assistant|[View](/docs/advanced/assistant/csharp/proactivemessaging.md)
|Linked Accounts|Enable users to link 3rd party accounts (e.g. o365) to their assistant|[View](/docs/advanced/assistant/linkedaccounts.md)
|Stitching together Bots into one conversational experience|Create one central Bot which hands-off to child bots, a common enterprise scenario.|[View](/docs/advanced/assistant/parentchildbotpattern.md)
|Configuring Deployment|How to customise the provided ARM template for different deployment scenarios.|[View](/docs/advanced/assistant/customisingdeployment.md)
|Adding Authentication to your assistant |How to add Authentication support to your Assistant| [C#](/docs/advanced/assistant/csharp/addauthentication.md), [TS](/docs/advanced/assistant/typescript/addauthentication.md)
|Adding KeyVault |How to add KeyVault support| :construction_worker_woman:

## Skills

@@ -79,10 +80,10 @@ Reference documentation providing more insight into key concepts across the Virtual Assistant.

|Name|Description|Link|
|-------------|-----------|----|
|Project Structure|Walkthrough of your Virtual Assistant project|[View](/docs/reference/assistant/projectstructure.md)
|Responses|Your Virtual Assistant can respond in a variety of ways depending on the scenario and the users active device or conversation canvas|[View](/docs/reference/assistant/responses.md)
|Handling events|Events enable custom apps or device experiences to pass device or contextual user information to an assistant behind the scenes.|[View](/docs/reference/assistant/events.md)|
|Dispatcher|Detailed exploration of the important role the Dispatcher plays in the Virtual Assistant template|[View](/docs/reference/assistant/dispatcher.md)|
|Speech Enablement|Ensure your Virtual Assistant and Experiences work well in Speech scenarios|[View](/docs/reference/assistant/speechenablement.md)
|Deployment script approach|Walkthrough of the deployment script approach used in the Virtual Assistant|[View](/docs/reference/assistant/deploymentscriptapproach.md)
