Initial Considerations in Building a Speech-to-Speech Translation System for the Slovenian-English Language Pair

J. Žganec Gros1, A. Mihelič1, M. Žganec1, F. Mihelič2, S. Dobrišek2, J. Žibert2, Š. Vintar2, T. Korošec2, T. Erjavec3, M. Romih4

1Alpineon R&D, Ulica Iga Grudna 15, SI-1000 Ljubljana, Slovenia
2University of Ljubljana, SI-1000 Ljubljana, Slovenia
3Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana, Slovenia
4Amebis, Bakovnik 3, SI-1241 Kamnik, Slovenia
E-mail: [email protected]

Abstract. The paper presents the design concept of the VoiceTRAN Communicator, which integrates speech recognition, machine translation and text-to-speech synthesis using the DARPA Galaxy architecture. The aim of the project is to build a robust multimodal speech-to-speech translation communicator able to translate simple domain-specific sentences in the Slovenian-English language pair. The project represents a joint collaboration between several Slovenian research organizations active in human language technologies. We provide an overview of the task and describe the system architecture and the individual servers. Further, we describe the language resources that will be used and developed within the project. We conclude the paper with plans for the evaluation of the VoiceTRAN Communicator.

1. Introduction

Automatic speech-to-speech (STS) translation systems aim to facilitate communication among people who speak different languages (Lavie et al., 1997), (Wahlster, 2000), (Lavie et al., 2002). Their goal is to generate a speech signal in the target language that conveys the linguistic information contained in the speech signal in the source language.

There are, however, major open research issues that challenge the deployment of natural and unconstrained speech-to-speech translation systems, even for very restricted application domains, because state-of-the-art automatic speech recognition and machine translation systems are far from perfect. Additionally, in comparison to written text, conversational spoken messages are often conveyed with imperfect syntax and casual spontaneous speech. In practice, demonstration STS systems are therefore typically implemented by imposing strong constraints on the application domain and on the type and structure of possible utterances, i.e. on both the range and the scope of the user input allowed at any point of the interaction. Consequently, this compromises the flexibility and naturalness of using the system.

The VoiceTRAN Communicator is being built within a national Slovenian research project involving five partners: Alpineon, the University of Ljubljana (Faculty of Electrical Engineering, Faculty of Arts and Faculty of Social Studies), the Jožef Stefan Institute, and Amebis as a subcontractor. The project is co-funded by the Slovenian Ministry of Defense. The aim of the project is to build a robust multimodal speech-to-speech translation communicator, similar to Phraselator (Sarich, 2001) or Speechalator (Waibel, 2003), able to translate simple sentences in the Slovenian-English language pair.

The application domain is limited to common application scenarios that occur in peace-keeping operations on foreign missions, when the users of the system have to communicate with the local population. More complex phrases can be entered via the keyboard using a graphical user interface.

EAMT 2005 Conference Proceedings 288

2. System architecture

The VoiceTRAN Communicator uses the DARPA Galaxy Communicator architecture (Seneff et al., 1998). The Galaxy Communicator open-source architecture was chosen to provide inter-module communication support, as its plug-and-play approach allows interoperability of commercial and research software components. It was specially designed for the development of voice-driven user interfaces on a multimodal platform. It is a distributed, message-based, hub-and-spoke infrastructure optimized for constructing spoken dialogue systems.

There are two ways of porting a module into the Galaxy architecture: the first is to alter its code so that it can be incorporated into the Galaxy architecture directly; the second is to create a wrapper, or capsule, for the existing module, so that the capsule behaves as a Galaxy server. We have opted for the second option, since we want to be able to test commercial modules as well. Minimal changes to the existing modules were required, mainly regarding input/output processing.

A particular session is initiated by the user either through interaction with the graphical user interface (typed input) or through the microphone. The VoiceTRAN Communicator servers capture spoken or typed input from the user and return the servers' responses with synthetic speech, graphics, and text. The server modules are described in more detail in the following subsections.

Figure 1. VoiceTRAN system architecture

The VoiceTRAN Communicator is composed of a number of servers that interact with each other through the Hub, as shown in Figure 1.
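As a rough illustration of this hub-and-spoke design, the sketch below models servers that exchange frames (key-value structures) through a central hub according to routing rules, loosely mirroring how a Hub script dispatches messages. All names, the rule format and the stub behavior are invented for illustration; this is not the actual Galaxy Communicator API.

```python
# Minimal hub-and-spoke sketch (illustrative only; not the Galaxy Communicator API).
# Each "server" is a function that consumes a frame (a dict of keys and values)
# and emits a new frame; the hub routes frames by matching a key against its rules.

class Hub:
    def __init__(self):
        self.rules = []                      # (trigger_key, destination_server) pairs
    def add_rule(self, key, server):
        self.rules.append((key, server))
    def route(self, frame):
        # Keep dispatching until no rule matches the most recently emitted frame.
        while True:
            for key, server in self.rules:
                if key in frame:
                    frame = server(frame)
                    break
            else:
                return frame

def recognizer(frame):
    # Map "audio" to an N-best list of text hypotheses (stubbed).
    return {"nbest": [frame["audio"].lower()]}

def translator(frame):
    # Translate the best hypothesis via a toy lexicon (stubbed).
    lexicon = {"dober dan": "good day"}
    return {"target": lexicon.get(frame["nbest"][0], frame["nbest"][0])}

def synthesizer(frame):
    # "Synthesize" by marking the string as spoken output.
    return {"speech": "<speech>" + frame["target"] + "</speech>"}

hub = Hub()
hub.add_rule("audio", recognizer)
hub.add_rule("nbest", translator)
hub.add_rule("target", synthesizer)

result = hub.route({"audio": "Dober dan"})
print(result["speech"])  # <speech>good day</speech>
```

The wrapper ("capsule") strategy mentioned above corresponds to the stub functions here: an existing module only needs a thin adapter that reads and writes frames, leaving the module's internals untouched.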
The Hub is used as a centralized message router through which the servers communicate with one another. Frames containing keys and values are emitted by each server; they are routed by the Hub and received by a secondary server based on rules defined in the Hub script.

The VoiceTRAN Communicator consists of the Hub and five servers: the audio server, the graphical user interface, the speech recognizer, the machine translator and the speech synthesizer:

Audio Server: Receives speech signals from the microphone and sends them to the recognizer. Sends synthesized speech to the speakers.

Graphic User Interface: Receives input text from the keyboard. Displays recognized source-language sentences and translated target-language sentences. Provides user controls for handling the application.

Speech Recognizer: Takes the signals from the Audio Server and maps audio samples into text strings. Produces an N-best sentence hypothesis list.

Machine Translator: Receives N-best postprocessed sentence hypotheses from the Speech Recognition Server and translates them from the source language into the target language. Produces a scored, disambiguated sentence hypothesis list.

Speech Synthesizer: Receives rich, disambiguated word strings from the Machine Translation Server, then converts the input word strings into speech and prepares them for the Audio Server.

2.1. Audio server

The audio server connects to the microphone input and speaker output terminals on the host computer; it records user input and plays prompts or synthesized speech. Input speech captured by the audio server is automatically recorded to files for later system training.

2.2. Speech Recognizer

The speech recognition server receives the input audio stream from the audio server and provides at its output a word graph and a ranked list of candidate sentences: the N-best hypotheses list, which can include part-of-speech information generated by the language model.

The speech recognition server used in VoiceTRAN is based on the Hidden Markov Model Recognizer developed by the University of Ljubljana (Dobrišek, 2001). It will be upgraded to perform large-vocabulary speaker-(in)dependent speech recognition on a wider application domain. A back-off class-based trigram language model will be used. Given the limited amount of training data, the model parameters will be carefully chosen in order to achieve maximum performance.

Later in the project we want to test other speech recognition approaches, with an emphasis on robustness, processing time, footprint and memory requirements.

Since the final goal of the project is a stand-alone speech communicator used by a specific user, the speech recognizer can additionally be trained and adapted to the individual user in order to achieve higher recognition accuracy, at least in one language.

A common speech recognizer output typically carries no information on sentence boundaries, punctuation or capitalization. Therefore, additional postprocessing that restores punctuation and capitalization will be performed on the N-best hypotheses list before it is passed to the machine translator.

2.3. Machine Translator

A postprocessing algorithm inserts basic punctuation and capitalization information before passing the target sentence to the speech synthesizer. The output string can also convey lexical stress information in order to reduce disambiguation effort during text-to-speech synthesis.

A multi-engine approach will be used in the early phase of the project; it makes it possible to exploit the strengths and weaknesses of different MT technologies and to choose the most appropriate engine, or combination of engines, for the given task. Four different translation engines will be applied in the system: we will combine TM (translation memory), SMT (statistical machine translation), EBMT (example-based machine translation) and RBMT (rule-based machine translation) methods. A simple approach to selecting the best translation from all the outputs will be applied.

A bilingual, aligned, domain-specific corpus will be used to build the TM and to train the EBMT and SMT phrase translation models. For SMT, an interlingua approach similar to the one described in (Lavie et al., 2002) will be investigated, and the promising directions pointed out in (Ney, 2004) will be pursued.
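One way such a simple selection step could work is sketched below: each engine returns a candidate translation with a confidence score, and the highest-scoring candidate wins. The engine stubs, the scores and the example phrase are invented for illustration; the paper does not specify the actual engines' interfaces or the selection criterion.

```python
# Illustrative multi-engine selection sketch: each MT engine returns a candidate
# translation with a confidence score, and the top-scoring candidate is kept.
# The engines below are stubs standing in for the TM, SMT, EBMT and RBMT
# components; all data here is hypothetical.

def tm_engine(source):
    # Translation memory: an exact match gives high confidence, otherwise nothing.
    memory = {"kje je bolnisnica": ("where is the hospital", 0.95)}
    return memory.get(source, (None, 0.0))

def rbmt_engine(source):
    # Rule-based stub: always produces some output, with moderate confidence.
    return ("where is the hospital located", 0.6)

def select_best(source, engines):
    candidates = [engine(source) for engine in engines]
    # Discard engines that produced no output, then pick the highest score.
    candidates = [c for c in candidates if c[0] is not None]
    return max(candidates, key=lambda c: c[1])

best, score = select_best("kje je bolnisnica", [tm_engine, rbmt_engine])
print(best)  # where is the hospital
```

Under this scheme an exact translation-memory hit naturally dominates, while the rule-based engine provides a fallback for unseen input, which matches the motivation for combining engines with complementary strengths.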
The Presis translation system will be used as our baseline system (Romih