Understanding User Satisfaction with Intelligent Assistants

Julia Kiseleva1,* Kyle Williams2,* Ahmed Hassan Awadallah3 Aidan C. Crook3 Imed Zitouni3 Tasos Anastasakos3

1Eindhoven University of Technology, [email protected]
2Pennsylvania State University, [email protected]
3Microsoft, {hassanam, aidan.crook, izitouni, tasos.anastasakos}@microsoft.com

ABSTRACT

Voice-controlled intelligent personal assistants, such as Cortana, Google Now, Siri and Alexa, are increasingly becoming a part of users' daily lives, especially on mobile devices. They allow for a radical change in information access, not only through voice control and touch gestures but also through longer sessions and dialogues that preserve context, which necessitates evaluating their effectiveness at the task or session level. However, in order to understand which types of user interactions reflect different degrees of user satisfaction, we need explicit judgements. In this paper, we describe a user study that was designed to measure user satisfaction over a range of typical scenarios of use: controlling a device, web search, and structured search dialog.

[Figure: Example user dialogs with Cortana. (A) Task is "Finding a hotel in Chicago": Q1: how is the weather in Chicago; Q2: how is it this weekend; Q3: find me hotels in Chicago; Q4: which one of these is the cheapest; Q5: which one of these has at least 4 stars; Q6: find me directions from the Chicago airport to number one. (B) Task is "Finding a pharmacy": Q1: find me a pharmacy nearby; Q2: which of these is highly rated; Q3: show more information about number 2; Q4: how long will it take me to get there.]