“Alexa, what is going on with the impeachment?” Evaluating smart speakers for news quality

Henry K. Dambanemuya
Northwestern University
[email protected]

Nicholas Diakopoulos
Northwestern University
[email protected]
ABSTRACT
Smart speakers are becoming ubiquitous in daily life. The widespread and increasing use of smart speakers for news and information in society presents new questions related to the quality, source diversity and credibility, and reliability of algorithmic intermediaries for news consumption. While user adoption rates soar, audit instruments for assessing information quality in smart speakers are lagging. As an initial effort, we present a conceptual framework and data-driven approach for evaluating smart speakers for information quality. We demonstrate the application of our framework on the Amazon Alexa voice assistant and identify key information provenance and source credibility problems as well as systematic differences in the quality of responses about hard and soft news. Our study has broad implications for news media and society, content production, and information quality assessment.

KEYWORDS
algorithmic accountability, smart speakers, information quality, audit framework

1 INTRODUCTION
By mid-2019, approximately 53 million Americans already owned voice assistants [11], commonly known as smart speakers. Of these people, 74% rely on voice assistants to answer general questions while 42% often seek news information. The widespread and increasing use of smart speakers for news and information in society presents new questions related to the quality, source diversity, and comprehensiveness of information conveyed by these devices. At the same time, studies of their patterns of use, reliability, information quality, and potential biases are still lagging behind. A few recent studies examine potential security risks and privacy concerns [3–5], effects of uncertainty and capability expectations on users’ intrinsic motivations [6], as well as user interactions [8, 9, 13] and bias against the specific language or accent of different groups of people [7]. While other algorithmic intermediaries for news information, such as Google Search [14] and Apple News [1], have recently been audited, similar audits examining the sources and quality of information curated by smart speaker devices are still lacking.

As an initial effort, in this work we propose a framework for evaluating voice assistants for information quality. Information quality is especially important for voice assistants because typically only one voice query result is provided instead of a selection of results to choose from. Our aim in this work is not to characterise the state of smart speaker information quality per se, since it is constantly evolving, but rather to provide a framework for auditing information quality on voice assistants that can be used over time to characterise and track changes. We demonstrate our framework using the Amazon Alexa voice assistant on the Echo Plus device.

Our key contributions are four-fold. First, we provide a data-driven approach for evaluating information quality in voice assistants. Our approach relies on crowd-sourced question phrasings that people commonly use to elicit answers to general questions or seek news information from voice assistants. Second, we address the complexities of evaluating information quality in voice assistants due to inherent errors of the speech synthesis and transcription algorithms that voice assistants depend on to listen, process, and respond to users’ queries. We do this by asserting the boundaries of our evaluation framework through a combination of experimental and crowd-based evaluation methods. Third, we demonstrate the application of our framework on the Amazon Alexa voice assistant and report the findings. Finally, we identify key information provenance and source credibility problems of voice assistants as well as systematic differences in the quality of voice assistant responses to hard and soft news. This helps to demonstrate a significant technological gap in the ability of voice assistants to provide timely and reliable news information about important societal and political issues.

2 DATA-DRIVEN EXPERIMENT DESIGN
Our user queries are composed of a query topic and a query phrasing. To generate query topics, we fetched the top 20 US trending topics from Google daily search trends for each day of the study. We used Amazon Mechanical Turk (AMT) to crowd-source query phrasings based on 144 query topics collected over a one-week period. From these query phrasings, we used n-gram and word tree visualisation [15] methods to identify the most frequent common query phrasings below:

• What happened {during / to / in / on / with / at the} <query topic>?
• What is going on {during / with / at / in / on} <query topic>?
• Can you tell me about {the} <query topic>?
• What is new {with / on} <query topic>?

We then combined the query topics and query phrasings with the appropriate preposition and, for each query topic, we generated four user queries based on the query phrasings above. For example, if one of the Google daily trending topics was Ilhan Omar, the four queries for that topic would be: (i) “Alexa, what happened to Ilhan Omar?” (ii) “Alexa, what is going on with Ilhan Omar?” (iii) “Alexa, can you tell me about Ilhan Omar?” and (iv) “Alexa, what is new with Ilhan Omar?” These user queries therefore allow us to investigate whether the way that a question about the same topic is phrased affects the type of response provided by voice assistants.
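As a concrete illustration of this query-generation step, the following minimal sketch combines one topic with the four phrasings above. The function and constant names are our own, and fixing a single preposition per template is a simplifying assumption; the study selected the appropriate preposition for each topic.

```python
# Minimal sketch of the query-generation step (illustrative only).
# Each template mirrors one of the four crowd-sourced phrasings; the
# preposition is fixed here for simplicity, whereas the study chose
# the appropriate preposition for each query topic.
PHRASINGS = [
    "what happened to {topic}",
    "what is going on with {topic}",
    "can you tell me about {topic}",
    "what is new with {topic}",
]

def generate_queries(topic: str) -> list[str]:
    """Return the four Alexa user queries for one trending topic."""
    return [f"Alexa, {template.format(topic=topic)}?" for template in PHRASINGS]

for query in generate_queries("Ilhan Omar"):
    print(query)
# Alexa, what happened to Ilhan Omar?
# Alexa, what is going on with Ilhan Omar?
# Alexa, can you tell me about Ilhan Omar?
# Alexa, what is new with Ilhan Omar?
```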
Speech Synthesis and Transcription Audit
We conducted a small experiment to assert the boundaries of our audit method by evaluating Amazon’s speech synthesis and transcription capabilities. We began by synthesising the query topics to speech using Amazon Polly, followed by transcribing the synthesised speech back to text using Amazon Transcribe. A comparison of the transcribed text to the original query topics using a verbatim query topic match shows that 77.1% of the query topics were correctly transcribed. 75% of the incorrectly transcribed query topics were the result of a combination of slang, nicknames, and rhyming words, such as Yung Miami (transcribed as Young Miami; born Caresha Romeka Brownlee), Yeezy (Easy; born Kanye Omari West), Bugha (Bugger; born Kyle Giersdorf), Lori Harvey (Laurie Harvey), and Dustin May (Just in May).
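The round trip described above can be sketched with the AWS SDK for Python (boto3). This is a minimal sketch under stated assumptions: the S3 bucket name, the Polly voice, the polling interval, and the lower-casing in the verbatim comparison are our own illustrative choices, not details reported in the paper.

```python
# Sketch of the Polly -> Transcribe round-trip audit, assuming an
# existing S3 bucket (AUDIT_BUCKET is a placeholder) and default AWS
# credentials. The paper describes this pipeline at a high level; the
# job-polling details here are our own illustration.
import json
import time
import urllib.request

import boto3

AUDIT_BUCKET = "smart-speaker-audit"  # hypothetical bucket name

polly = boto3.client("polly")
s3 = boto3.client("s3")
transcribe = boto3.client("transcribe")

def round_trip(topic: str, job_name: str) -> str:
    """Synthesise a query topic to speech and transcribe it back to text."""
    # 1. Text -> speech with Amazon Polly ("Joanna" voice is an assumption).
    speech = polly.synthesize_speech(Text=topic, OutputFormat="mp3", VoiceId="Joanna")
    key = f"{job_name}.mp3"
    s3.put_object(Bucket=AUDIT_BUCKET, Key=key, Body=speech["AudioStream"].read())

    # 2. Speech -> text with Amazon Transcribe (an asynchronous job).
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": f"s3://{AUDIT_BUCKET}/{key}"},
        MediaFormat="mp3",
        LanguageCode="en-US",
    )
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        if job["TranscriptionJob"]["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
            break
        time.sleep(5)

    # 3. Fetch the transcript JSON and extract the recognised text.
    uri = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
    with urllib.request.urlopen(uri) as response:
        result = json.load(response)
    return result["results"]["transcripts"][0]["transcript"]

def verbatim_match(topic: str, transcript: str) -> bool:
    """Verbatim comparison; normalising case is our assumption."""
    return topic.strip().lower() == transcript.strip().lower()
```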
We conducted another AMT survey to investigate the source of the transcription errors. In this survey, we played audio clips of the voice-synthesised query topics to crowd workers and asked them to classify the pronunciation accuracy of the voice-synthesised text on an ordinal scale of 1 to 3 (1 = Completely Incorrect; 2 = Mostly Correct; 3 = Completely Correct). On a scale of 1 to 5 (least to most confident or difficult), we further asked the crowd workers to rank how confident they were in their classification response, how difficult the query topic was to pronounce, and how confident they were that they could correctly pronounce the query topic. We observed a significant difference in pronunciation accuracy means between valid (µ = 2.85, σ = 0.378) and invalid (µ = 2.63, σ = 0.614) query topic transcriptions (t(250) = 4.65, p < 0.001). We also observed that invalid (µ = 2.01, σ = 1.19) query topic transcriptions had higher pronunciation difficulty compared to valid (µ = 1.44, σ = 0.755) query topic transcriptions, i.e. the more difficult a query topic was to pronounce, the more likely it was to be incorrectly pronounced and hence incorrectly transcribed (t(254) = −6.14, p < 0.001). Compared to query topics that were correctly transcribed (µ = 4.87, σ = 0.370), the crowd workers had lower classification confidence (t(224) = 5.85, p < 0.001) in response to incorrectly transcribed query topics (µ = 4.52, σ = 0.819). Finally, compared to query topics that were correctly transcribed (µ = 4.81, σ = 0.477), we further observed that the crowd workers had lower pronunciation confidence (t(244) = 5.02, p < 0.001) in their ability to pronounce incorrectly transcribed query topics (µ = 4.49, σ = 0.825). While this finding underscores the technical challenges in speech synthesis and its use in auditing smart speakers, it also indicates the potential limitations of crowd-sourcing real voice utterances for query topics that are difficult to pronounce.
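Each comparison above is a two-sample t-test. The sketch below shows the form of one such test (pronunciation difficulty for valid versus invalid transcriptions) using placeholder ratings; the study’s underlying response data are not reproduced here.

```python
# Sketch of the two-sample comparison used above, e.g. pronunciation
# difficulty for valid vs. invalid transcriptions. The rating arrays
# are placeholders, not the study's data.
import numpy as np
from scipy import stats

# 1-5 difficulty ratings, grouped by whether the topic's round-trip
# transcription matched verbatim (valid) or not (invalid).
valid_difficulty = np.array([1, 1, 2, 1, 2, 1, 1, 2])    # placeholder
invalid_difficulty = np.array([2, 3, 1, 4, 2, 3, 2, 1])  # placeholder

t_stat, p_value = stats.ttest_ind(valid_difficulty, invalid_difficulty)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# The paper reports t(254) = -6.14, p < 0.001 for this comparison,
# where 254 is the degrees of freedom (n1 + n2 - 2 for a pooled test).
```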

3 EVALUATION FRAMEWORK
For each day of our study, we queried the 20 daily trending topics from Google for that day, between 6:00pm and 9:00pm CST, using all four query phrasings above. We conducted our study over a two-week period from October 28 to November 11, 2019. The resulting data set consists of 1,112 queries and responses. To automate the data collection process, we used the Amazon Web Services (AWS) technology stack. We used Amazon Polly to synthesise the text of the user queries to speech. Because there is no API for querying Alexa directly, we then used the synthesised speech to query a physical voice-controlled digital assistant device. We used Amazon Transcribe to transcribe the audio responses. We recorded both the queries and responses in a database for later analysis.

Our query and response data covered 7 entity categories (Table 1). These categories covered a variety of issues ranging from sport and entertainment to business and politics, and coincided with popular holidays such as Diwali and Halloween as well as prominent events such as the 2019 Rugby World Cup and the US impeachment probe. The wide range of topics enables us to evaluate information quality within each entity category and between hard and soft news. We rely on Reinemann et al.’s [12] definition of hard news as politically relevant information with broad societal impacts, and soft news as information such as the personal lives of celebrities, sports, or entertainment that has no widespread political impact.

We begin by evaluating the response rate measured by the
