TAUS Speech-to-Speech Translation Technology Report
March 2017

Authors: Mark Seligman, Alex Waibel, and Andrew Joscelyne
Contributor: Anne Stoker

COPYRIGHT © TAUS 2017
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system of any nature, or transmitted or made available in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of TAUS. TAUS will pursue copyright infringements.

In spite of careful preparation and editing, this publication may contain errors and imperfections. Authors, editors, and TAUS do not accept responsibility for the consequences that may result thereof.

Design: Anne-Maj van der Meer
Published by TAUS BV, De Rijp, The Netherlands
For further information, please email [email protected]

Table of Contents
Introduction
  A Note on Terminology
Past
  Orientation: Speech Translation Issues
Present
  Interviews
Future
  Development of Current Trends
  New Technology: The Neural Paradigm
References
Appendix: Survey Results
About the Authors

Introduction

The dream of automatic speech-to-speech translation (S2ST), like that of automated translation in general, goes back to the origins of computing in the 1950s. Portable speech translation devices have been variously imagined as Star Trek's "universal translator" to negotiate extraterrestrial tongues, Douglas Adams' Babel Fish in the Hitchhiker's Guide to the Galaxy, and more. Over the past few decades, the concept has become an influential meme and a widely desired solution – not far behind the video phone (it's here!) and the flying car (any minute now).

Back on planet Earth, real-world S2ST applications have been tested locally over the past decade to help medical staff talk with other-language patients; to support military personnel in various theaters of war; to support humanitarian missions; and in general-purpose consumer products. A prominent recent project aims to build S2ST devices to enable cross-language communications at the 2020 Olympics in Tokyo, with many more projects and use cases in the offing. Automated speech translation has arrived: the tech's entry into widespread use has begun, and enterprises, app developers, and government agencies are alive to its potential.

More broadly, the recent spread of technologies for real-time communication – "smart" devices enabling on-the-spot exchanges using voice or text via smartphones – has helped promote the vision of natural communication on a globally connected planet: the ability to speak to someone (or to a robot/chatbot) in your language and be immediately understood in a foreign language. For many commentators and technology users, inspired by new models of deep learning, cognitive computing, and big data – and despite the inevitable doubts about translation quality – it seems only a question of time until S2ST becomes a trusted, and even required, communication support technology.

In view of this general interest in instant automatic speech translation services, TAUS believes that developers, enterprises, and the language technology supply community now need:
• a clear picture of the technological state-of-play in S2ST
• information on the history of this technology program
• an informed overview of the drivers and enablers in the field
• the near-term predictions of major and minor players concerning solutions and services, along with their assessments of weaknesses and threats

Accordingly, this TAUS report on S2ST provides an up-to-date account of the field's technologies, approaches, companies, projects, and target use cases. The report is part of an ongoing series (including the TAUS Translation Technology Landscape Report (2013 and 2016) and the TAUS Translation Data Landscape Report (2015)) providing state-of-the-art surveys of the relevant technologies, players, underlying vision, and market strengths and weaknesses. It doesn't predict market size or specific economic benefits, but does survey experimental business models.

Chapters follow on the Past, Present, and Future of speech-to-speech translation. The chapter on the Present contains interviews with 13 representative participants in the developing scene. An Appendix displays the results of a survey of potential users concerning anticipated uses of the technology.

A Note on Terminology

So far, there's no standardized way of talking about automatic speech-to-speech translation. Candidate terms include "speech translation" and "spoken (language) translation (SLT)," but these don't underscore the automaticity or underlying digital technology. "Automatic interpretation" (as inspired by human interpreting, e.g. in conferences) hasn't caught on, possibly because "interpretation" has other distracting meanings in English. We'll use S2ST here for maximum clarity, but for variety will alternate with all of the above terms when the meaning is clear.

Past

This chapter of the TAUS report on S2ST recaps the history of the technology. Later chapters will survey the present and look toward the future.

The field of speech – as opposed to text – translation has an extensive history which deserves to be better known and understood. Text translation is already quite difficult, in view of the ambiguities of language; but attempts to automatically translate spoken rather than written language add the considerable difficulties of converting the spoken word into text (or into a semantic or other internal representation). Beyond the need to distinguish different meanings, systems also risk additional errors and ambiguity concerning what was actually said – due to noise, domain context, disfluency (errors, repetitions, false starts, etc.), dialog effects, and many more sources of uncertainty.

They must not only determine the appropriate meaning of "bank" – whether "financial institution," "river bank," or other; they also run the risk of misrecognizing the word itself, in the face of sloppy speech, absence of word boundaries, noise, and intrinsic acoustic confusability. "Did you go to the bank?" becomes /dɪd͡ʒəgowdəðəbæŋk/, and each segment may be misheard in various ways: /bæŋk/ → "bang"; /gowdəðə/ → "goat at a"; and so on. This extra layer of uncertainty can lead to utter confusion: when a misrecognized segment (e.g. "Far East" → "forest") is translated into another language (becoming e.g. Spanish: "selva"), only consternation can result, since the confused translation bears neither semantic nor acoustic resemblance to the correct one.

Orientation: Speech Translation Issues

As orientation and preparation for our historical survey of the speech translation field, it will be helpful to review the issues confronting any speech translation system. We'll start by considering several dimensions of design choice, and then give separate attention to matters of human interface and multimodality.

Dimensions of Design Choice

Because of its dual difficulties – those of speech recognition and machine translation – the field has progressed in stages. At each stage, attempts have been made to reduce the complexity of the task along several dimensions: range (supported linguistic flexibility, supported topic or domain); speaking style (read vs. conversational); pacing (consecutive vs. simultaneous); speed and latency (real-time vs. delayed systems); microphone handling; architecture (embedded vs. server-based systems); sourcing (choice among providers of components); and more. Each system has necessarily accepted certain restrictions and limitations in order to improve performance and achieve practical deployment.

Range (supported linguistic flexibility, supported topic or domain)

Restricted syntax, voice phrasebooks: The most straightforward restriction is to severely limit the range of sentences that can be accepted, thereby restricting the allowable syntax (grammar). A voice-based phrase book, for example, can accept only specific sentences (and perhaps near variants). This limitation does simplify recognition and translation by reducing the number of possible choices (in the jargon, the perplexity). Speech recognition need only pick one of the legal words or sentences, and translation requires no more than a table lookup or best-match operation. However, while these constraints improve performance and hence ease deployment, deviations from the allowable sentences will quickly lead to failure (though fuzzy matching can raise flexibility a bit). Thus voice-activated phrasebooks are effective in simple tasks like command-and-control,

[…]

than those limited to phrasebooks, as they include varied expressions; greater disfluency and more hesitations ("I, er, I uhm, I would like to, er ... can I please, er …. Can I make a reservation, please?"); and generally less careful speech.

Open-domain speech: In open-domain systems we remove the domain restriction by permitting any topic of discussion. This freedom is important in applications like translation of broadcast news, lectures, speeches, seminars, and wide-ranging telephone calls. Developers of these applications confront unrestricted, and thus much larger, vocabularies and concept sets. (Consider, for example, special terms in academic lectures or speeches.) Moreover, open-domain use cases must often handle long monologues or continuous streams of speech, in which we don't know the beginnings and endings of sentences.

Speaking style (read vs. conversational speech)

Among open-domain systems, another dimension
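The phrasebook restriction described above reduces translation to a table lookup, with fuzzy matching as the escape hatch for near variants. A minimal Python sketch of that idea follows; the three-entry English–Spanish phrase table and the 0.6 similarity cutoff are illustrative assumptions, not details from the report:

```python
import difflib

# Hypothetical English -> Spanish phrase table: translation is a simple lookup.
PHRASE_TABLE = {
    "where is the station": "¿dónde está la estación?",
    "i would like a receipt": "quisiera un recibo",
    "can i make a reservation": "¿puedo hacer una reserva?",
}

def translate_phrase(recognized: str, cutoff: float = 0.6):
    """Translate a recognized utterance by exact or fuzzy best-match lookup.

    Returns None when the utterance deviates too far from the allowable
    sentences -- the failure mode the report describes.
    """
    key = recognized.lower().strip(" ?.!")
    if key in PHRASE_TABLE:  # exact table lookup
        return PHRASE_TABLE[key]
    # Fuzzy matching "raises flexibility a bit": accept the closest legal
    # sentence, if any is similar enough.
    close = difflib.get_close_matches(key, list(PHRASE_TABLE), n=1, cutoff=cutoff)
    return PHRASE_TABLE[close[0]] if close else None

print(translate_phrase("Can I make a reservation?"))   # exact match
print(translate_phrase("could i make a reservation"))  # near variant, fuzzy match
print(translate_phrase("tell me about neural nets"))   # out of range -> None
```

The sharp cutoff makes the trade-off concrete: anything inside the table (or close to it) translates perfectly, and anything outside it fails outright, which is exactly why such systems suit narrow tasks like command-and-control.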