Interacting with Cortana in UWP


Cortana is Microsoft’s voice-interactive personal assistant, similar to Google Now and Siri. As a side bit of trivia, Cortana was named after an artificial intelligence character in the Halo game franchise, and both are voiced by the same person, Jen Taylor.

You can provide information to Cortana and have Cortana open your app, just as with Siri or Google Now, though as of this writing Cortana Actions is in Developer Preview. However, enrollment is open if you want to get started right now.

As a developer you have two ways to interact with Cortana, a Cortana Action or a Voice Command.


Cortana Actions

A Cortana Action is a specific action your app can take. The user can view it via Cortana, or Cortana can call the action directly. There are two types of actions: predefined actions and customized actions.

Cortana gathers Insights about the user and passes this information to the Action. At least one Insight must be mapped to an Action. In the example given in the documentation, an Action such as “Order Food” might be triggered when the user’s location is the office, it is lunchtime, and there is a meeting scheduled during lunch.

Your Action is launched via a deep link. Check out Registering A Uri Scheme for a walkthrough on setting up deep linking compatible with Cortana.

Since Cortana Actions is under developer preview at the moment, I have yet to personally try out and test this feature, so there are no code samples to show here.

Voice Commands

Voice commands let you register phrases that Cortana understands to launch a specific task or function in your app. I will now go through the AdventureWorks example from Microsoft, a travel app; the link to the entire code sample is down below.

Create Phrases

First we want to create an XML file (a Voice Command Definition, or VCD) with the phrases we want our app to respond to. For example, AdventureWorksCommands.xml:

<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
 <CommandSet xml:lang="en-us" Name="AdventureWorksCommandSet_en-us">
 <AppName> Adventure Works </AppName>
 <Example> Show trip to London </Example>

 <Command Name="showTripToDestination">
 <Example> Show trip to London </Example>
 <ListenFor RequireAppName="BeforeOrAfterPhrase"> show [my] trip to {destination} </ListenFor>
 <ListenFor RequireAppName="ExplicitlySpecified"> show [my] {builtin:AppName} trip to {destination} </ListenFor>
 <Feedback> Showing trip to {destination} </Feedback>
 <Navigate />
 </Command>

 <!-- {destination} must be backed by a phrase list; the items can also be updated at runtime -->
 <PhraseList Label="destination">
 <Item> London </Item>
 </PhraseList>
 </CommandSet>
</VoiceCommands>

Register Phrase List

Next we want to register that phrase list when the app is launched so Cortana knows about it. In your App.xaml.cs, in OnLaunched after Window.Current.Activate, load your phrase list:

StorageFile vcdStorageFile = await Package.Current.InstalledLocation.GetFileAsync(@"AdventureWorksCommands.xml");

await Windows.ApplicationModel.VoiceCommands.VoiceCommandDefinitionManager.InstallCommandDefinitionsFromStorageFileAsync(vcdStorageFile);
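Installing the definitions can fail, for instance if the VCD file has a syntax error or is missing from the package. Since voice command registration is a nice-to-have rather than a critical path, it may be worth guarding the calls above; a sketch:

try
{
     StorageFile vcdStorageFile = await Package.Current.InstalledLocation.GetFileAsync(@"AdventureWorksCommands.xml");
     await Windows.ApplicationModel.VoiceCommands.VoiceCommandDefinitionManager.InstallCommandDefinitionsFromStorageFileAsync(vcdStorageFile);
}
catch (Exception ex)
{
     // Log and continue; the app still works without voice commands.
     System.Diagnostics.Debug.WriteLine("VCD installation failed: " + ex.Message);
}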

Create BackgroundTask Service

Next you need to create a background task to handle the Cortana requests. It’s best to put this in a separate Windows Runtime Component project and reference it from your main UWP project.

public sealed class AdventureWorksVoiceCommandService : IBackgroundTask
{
     // This will be called when Cortana passes a query to the app.
     // Have a look at the code sample to see how you can handle it.
     public async void Run(IBackgroundTaskInstance taskInstance) { }
}
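Inside Run, the full sample takes a deferral, connects back to Cortana through a VoiceCommandServiceConnection, and reports a result. A trimmed sketch of that flow (the command name and message text here are illustrative, not the sample’s exact values):

// Uses Windows.ApplicationModel.AppService and Windows.ApplicationModel.VoiceCommands.
public async void Run(IBackgroundTaskInstance taskInstance)
{
     // Keep the task alive while awaiting; complete the deferral when done.
     BackgroundTaskDeferral deferral = taskInstance.GetDeferral();

     var triggerDetails = taskInstance.TriggerDetails as AppServiceTriggerDetails;
     if (triggerDetails != null && triggerDetails.Name == "AdventureWorksVoiceCommandService")
     {
          var connection = VoiceCommandServiceConnection.FromAppServiceTriggerDetails(triggerDetails);

          // Which <Command> from the VCD file triggered us?
          VoiceCommand voiceCommand = await connection.GetVoiceCommandAsync();

          // Build a spoken/displayed response and hand it back to Cortana.
          var userMessage = new VoiceCommandUserMessage
          {
               DisplayMessage = "Here are your trips.",
               SpokenMessage = "Here are your trips."
          };
          await connection.ReportSuccessAsync(VoiceCommandResponse.CreateResponse(userMessage));
     }

     deferral.Complete();
}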

Set Declarations

In your UWP project you need to add two declarations: App Service and Personal Assistant Launch. The Personal Assistant Launch declaration doesn’t need any details in it, but the App Service needs to reference the background task you created above.
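In Package.appxmanifest those two declarations end up as extensions roughly like the following (the EntryPoint and Name values assume the background task class above lives in a project and namespace called AdventureWorksVoiceCommandService):

<Extensions>
   <uap:Extension Category="windows.appService" EntryPoint="AdventureWorksVoiceCommandService.AdventureWorksVoiceCommandService">
      <uap:AppService Name="AdventureWorksVoiceCommandService" />
   </uap:Extension>
   <uap:Extension Category="windows.personalAssistantLaunch" />
</Extensions>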


Handle Cortana Requests

Finally we want to handle Cortana requests. We can do this in App.xaml.cs in the OnActivated method.

protected override void OnActivated(IActivatedEventArgs args)
{
    // IMPORTANT: Read to see where each launch goes.
    // If the app was launched via a Voice Command, this corresponds to the "show trip to <location>" command.
    // Protocol activation occurs when a tile is clicked within Cortana (via the background task).
    if (args.Kind == ActivationKind.VoiceCommand)
    {
         var commandArgs = args as VoiceCommandActivatedEventArgs;

         Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = commandArgs.Result;

         // Get the name of the voice command and the text spoken. See AdventureWorksCommands.xml for
         // the <Command> tags this can be filled with.
         string voiceCommandName = speechRecognitionResult.RulePath[0];
         string textSpoken = speechRecognitionResult.Text;

         // The commandMode is either "voice" or "text", and it indicates how the voice command
         // was entered by the user.
         // Apps should respect "text" mode by providing feedback in silent form.
         string commandMode = this.SemanticInterpretation("commandMode", speechRecognitionResult);

         switch (voiceCommandName)
         {
              case "showTripToDestination":
                    // Access the value of the {destination} phrase in the voice command.
                    string destination = this.SemanticInterpretation("destination", speechRecognitionResult);
                    // Do navigation and pass parameters as required.
                    break;
         }
    }
    else if (args.Kind == ActivationKind.Protocol)
    {
        var commandArgs = args as ProtocolActivatedEventArgs;
        var decoder = new Windows.Foundation.WwwFormUrlDecoder(commandArgs.Uri.Query);
        var destination = decoder.GetFirstValueByName("LaunchContext");

        // Do navigation and pass parameters as required.
    }
}
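Note that SemanticInterpretation is not a framework API but a small helper from the sample that pulls a value out of the recognition result. It looks roughly like this:

// Requires using System.Linq; and using Windows.Media.SpeechRecognition;
private string SemanticInterpretation(string interpretationKey, SpeechRecognitionResult speechRecognitionResult)
{
     // Each semantic property holds a list of candidate values; take the first.
     return speechRecognitionResult.SemanticInterpretation.Properties[interpretationKey].FirstOrDefault();
}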

Learn More

If you want to find out more you can have a look at the Cortana Getting Started guide and at the Code Samples for Cortana Voice Command.