Improving Automatic Speech Recognition for Mobile Learning of Mathematics Through Incremental Parsing


Marina ISAAC (a), Eckhard PFLUEGEL (a), Gordon HUNTER (a), James DENHOLM-PRICE (a), Dilaksha ATTANAYAKE (a) and Guillaume COTER (b)

(a) Kingston University, London, U.K.
(b) Ecole des Mines de Douai, Douai, France

Abstract. Knowledge of at least elementary mathematics is essential for success in many areas of study and a large number of careers. However, many people find this subject difficult, partly due to its specialised language and notation. These problems tend to be even more marked for people with disabilities, including blindness or impaired vision, and people with limited motor control or use of their hands. Learning activities are increasingly carried out in mobile computing environments, replacing the roles of textbooks or handwritten lecture notes. In the near future we expect the modality of use of mobile devices to shift from predominantly output to both input and output. The generally poor performance and usability of text input on these devices motivates the quest for alternative input technologies. One viable alternative input modality is automatic speech recognition, which has matured significantly over the last decade. We discuss tools that allow the creation and modification of mathematical content using speech, which could be of particular benefit to the groups of disabled people mentioned above. We review previously proposed systems, including our own software for use on desktop PCs, and conclude that there is currently a lack of tools implementing these requirements to a satisfactory level.
We address some technical challenges that need to be solved, including the problem of defining and parsing suitable languages for spoken mathematics, and focus on the parsing of structured mathematical input, highlighting the importance of incrementality in the process. We believe our incremental parsing algorithm for expressions in operator precedence grammar languages will improve upon previous algorithms for such tasks in terms of performance.

Keywords. Learning mathematics, mobile learning, disabled students, Automatic Speech Recognition, parsing

Introduction

Learning is increasingly enhanced by technology, and the surge in use of mobile devices such as tablet PCs and smartphones offers new learning opportunities [1]. Students benefiting from mobile learning technology include, for example, online (distance) learners, people working or studying "on the move", and sometimes disadvantaged users such as people with physical disabilities or other special needs [2].

Until relatively recently, automatic speech recognition was only available to a small group of highly specialised scientists. Even around twenty-five years ago, using what was then considered state-of-the-art speech recognition software required highly expensive equipment. Today, even hand-held devices such as iOS or Android smartphones, tablet PCs and game consoles such as the Xbox and Nintendo Wii are often equipped with some form of speech recognition software. The reasons behind this rapid evolution are the tremendous progress made in speech recognition technology, the advances in available memory and processing power of personal computers, and the fact that complex and computationally intensive tasks can now be carried out with the help of powerful cloud-based applications. At present, speech recognition is used in a large number of application domains.
As a consequence, speech input plays an increasingly important role in making computer tasks accessible to users who wish or need to rely on input modalities other than the conventional keyboard and mouse. However, despite the great importance of Mathematics within the curriculum, both at school level and in higher education, for specialists and non-specialists alike (including students of Science, Engineering, Economics and Business subjects), the development of technology for making Mathematics more accessible, both to disabled and other users, lags a long way behind its use in more conventionally text-based subject disciplines.

One of the main motivations behind our work is to make speech-driven systems as suitable as possible for users who typically might wish to engage in hands-free computing, and in particular to facilitate mobile learning of Mathematics. This is realised through our TalkMaths project [3], which has delivered several prototype systems and interfaces, as described in Section 2, the latest under development being a mobile app. The main contribution of this paper, presented as part of a progress report on recent activities within the TalkMaths project, is a high-level description of an efficient implementation of incremental processing of speech input, commonly referred to as incremental parsing. Our algorithm, an improvement of an existing algorithm by Heeman, consumes less battery power and provides a more responsive interface, and is hence suitable for a mobile version of the existing TalkMaths system.

This paper is organised as follows: in Section 1, after briefly reviewing existing commercial and freely available automatic speech recognition tools, we discuss past work on speech-driven editors for mathematics and the challenge of defining spoken mathematics. In Section 2, we present the current architecture of the TalkMaths system and the parsing technique it uses. Section 3 describes the main contribution of this paper.
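The general idea behind incremental parsing of spoken expressions, updating a partial parse as each recognised token arrives rather than reparsing the whole utterance, can be illustrated with a small stack-based operator precedence parser in the style of the classic shunting-yard method. This is only a minimal sketch under an assumed precedence table; the class and method names are hypothetical and it is not the TalkMaths or Heeman algorithm.

```python
# Illustrative sketch: a parser that consumes one token per call,
# so a speech front end can feed recognised words as they arrive
# and always hold a valid partial parse. Assumed, simplified
# precedence table; not the TalkMaths implementation.

PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

class IncrementalParser:
    def __init__(self):
        self.operands = []  # stack of numbers / partial ASTs
        self.ops = []       # stack of pending operators

    def _reduce(self):
        # Combine the two most recent operands under the top operator.
        op = self.ops.pop()
        right = self.operands.pop()
        left = self.operands.pop()
        self.operands.append((op, left, right))

    def feed(self, token):
        # Called once per recognised token; work done per token is
        # bounded by the pending-operator stack, not the input length.
        if token in PRECEDENCE:
            while self.ops and PRECEDENCE[self.ops[-1]] >= PRECEDENCE[token]:
                self._reduce()
            self.ops.append(token)
        else:
            self.operands.append(float(token))

    def result(self):
        # Reduce any remaining operators and return the AST so far.
        while self.ops:
            self._reduce()
        return self.operands[-1]

p = IncrementalParser()
for tok in "3 + 4 * 2".split():
    p.feed(tok)
print(p.result())  # ('+', 3.0, ('*', 4.0, 2.0))
```

Because each `feed` call leaves a consistent stack, an editor interface can render the partial expression after every dictated word instead of waiting for the complete utterance, which is the responsiveness property the incremental approach targets.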
We conclude our findings and propose future developments in Section 4.

1. Interfacing Mathematics using Speech Technology

In this section, we discuss issues that arise when creating mathematical content using speech as input. We review the state of the art of automatic speech recognition technology and discuss approaches to formalised spoken mathematics, including, in more detail, the language used in the TalkMaths system.

1.1. Review of Speech Recognition Technologies

Automatic speech recognition (ASR) is the process of converting human spoken words to human- or computer-readable text [4]. Most ASR systems are speaker dependent, i.e. they need to be trained by a particular user in order to provide the best and most accurate recognition of that user's speech. There are several commercial ASR packages available on the current market, the most widely available ones being sold by Nuance and Microsoft. With claims of 99% word accuracy under good conditions, Nuance Dragon NaturallySpeaking (DNS) [5] is the market leader. Originally developed for Windows, recent versions of DNS also run on Mac OS X. Microsoft has included Windows Speech Recognition in Windows 7 onwards, and its speech recognition solution is likely to be further improved in future versions of its operating systems. At present, it remains unclear which of these two main players will eventually dominate the market.

A free alternative to the commercial products is Sphinx, an open-source toolkit for speech recognition [6] developed at Carnegie Mellon University. With Sphinx, additional data such as prosodic and intonation information can be captured. Currently, its recognition accuracy is inferior to that of the commercial ASRs. Increasingly, vendors are adding speech support to their software. For example, Google has enabled speech recognition in its Chrome web browser, specifically designed for speech-enabled applications such as "Google Voice Search" [7].
In particular, mobile devices and gaming consoles are widely speech-enabled, with speech-controlled assistants being offered by Google and Apple. Complementary to ASR technology is speech synthesis (or text-to-speech, TTS) technology, whereby a computer system tries to create a comprehensible set of sounds approximating the way a person would read a particular word, or sequence of words, aloud. However, detailed discussion of TTS technology is beyond the scope of this paper, and we refer the reader to [8] for further details.

1.2. Previous Work and Overview of Existing Systems

We are aware of a number of systems translating spoken mathematical input to different output formats and displaying the structure of the mathematical expressions. These work together with ASR systems installed on the user's machine. MathTalk [9] is a commercially available system that implements speech macros for use within the Scientific Notebook environment. Its functionality, even when compared with the other academic prototype systems mentioned below, is quite limited. Fateman has undertaken work leading to Math Speak & Write [20], a multimodal approach combining spoken input with handwriting recognition. Unfortunately, the ambitious aim of simultaneous multimodal input was not achieved. Another system is CamMath, described in [10], which needs the same support environment as MathTalk but seems to offer a better-developed command language. Previous approaches to allowing spoken input of mathematics include the research prototype systems of Bernareggi & Brigatti [11] (which only works in Italian) and Hanakovič and Nagy [12] (restricted to use with the Opera web browser because it relies on X+V (XML + Voice) technology).

While various mobile apps exist that parse mathematical input, entered either via an on-screen keyboard or by drawing symbols on the screen, none offer dictation as an input modality.
Although speech-to-text input is widely available, its functionality conforms to standard ASR offerings, and does not handle specialised languages such as those needed