Perceptual Quality Dimensions of Text-To-Speech Systems


INTERSPEECH 2011, 28-31 August 2011, Florence, Italy
Copyright © 2011 ISCA. DOI: 10.21437/Interspeech.2011-570

Perceptual Quality Dimensions of Text-to-Speech Systems

Florian Hinterleitner (1), Sebastian Möller (1), Christoph Norrenbrock (2), Ulrich Heute (2)
(1) Quality and Usability Lab, Deutsche Telekom Laboratories, TU Berlin, Germany
(2) Digital Signal Processing and System Theory, CAU Kiel, Germany
{florian.hinterleitner, sebastian.moeller}@telekom.de, {cno, uh}@tf.uni-kiel.de

Abstract

The aim of this paper is to analyze the perceptual quality dimensions of state-of-the-art text-to-speech (TTS) systems. Therefore, several pretests were conducted to determine a suitable set of attribute scales. The resulting 16 scales were used in a semantic differential on a diverse database containing 16 different TTS systems. A subsequent multidimensional analysis (Principal Axis Factor analysis with Promax rotation) resulted in three underlying quality dimensions. They were labeled naturalness, disturbances, and temporal distortions. A mapping of these factors onto the perceived overall quality revealed that naturalness contributes the most to the quality of TTS signals.

Index Terms: speech synthesis, quality dimensions, multidimensional analysis

1. Introduction

Naturalness has always been the major weakness of TTS systems. However, improvements over the past years have shown a notable increase in quality, which allows them to be used for a number of applications, e.g. short message services, information systems, or smart-home assistants. Still, modern TTS systems suffer from diverse quality constraints, ranging from concatenation artefacts to difficulties in word and sentence intonation. With the rise of new applications, further improvements will be necessary. Thus, methods to efficiently assess different quality dimensions are an important tool.

Depending on the quality aspect to be assessed, different kinds of listening tests can be carried out: articulation and intelligibility tests assess whether the synthetic speech signal is able to carry information on a segmental or supra-segmental level [1]; comprehension tests measure if human listeners can comprehend the content provided via the presented TTS signals [2]; and overall quality tests, as recommended in ITU-T P.85 [3], capture different quality aspects of the signal, e.g. naturalness, listening effort and overall impression. Though doubts have been cast on the test protocol [4][5], the method described in ITU-T Rec. P.85 is still the most common way to evaluate TTS systems.

However, to evaluate the entire perceptive space of test subjects, a multidimensional analysis has to be performed. Different studies have been carried out to determine the underlying quality dimensions. In [6], a pilot study with multidimensional scaling (MDS) on TTS data generated by the Festival synthesizer led to a three-dimensional space. Since only stimuli of one unit-selection synthesizer were presented in that test, the results cannot be generalized. Kraft and Portele [7] evaluated five German TTS systems in a series of tests and came up with two dimensions representing prosodic and segmental attributes. Given that their study was carried out in 1995, distortions from modern TTS systems, e.g. unit-selection and HMM-based synthesizers, could not be evaluated.

The aim of our research is to assess the inherent quality dimensions of several state-of-the-art TTS systems. This will ensure a deeper insight into how test subjects perceive modern TTS quality. Our study follows the approach presented in [8], which analyzed perceptual quality dimensions of modern telephone connections via different multidimensional analysis techniques. The pros and cons of these methods are discussed in Section 2. Section 3 presents the TTS database and the series of tests that were conducted. An evaluation via factor analysis of the collected data is performed in Section 4. The resulting quality dimensions are discussed in Section 5. Finally, Section 6 summarizes the main results and gives a perspective on future work.

2. Multidimensional analysis

To reveal a mapping of the perceptual space of human listeners, different analysis methods can be used. MDS [9] uses paired-comparison tests to create a stimulus space which is then reduced in dimensionality. The drawback of this approach is the constraint on a small set of stimuli. Moreover, no hints are given for the interpretation of the resulting perceptual space. Therefore we opted for a semantic differential (SD). It uses predefined attribute scales to measure the auditory impression of the listeners. This guarantees a direct relation between the used attribute scales and the derived quality dimensions, and thus an easier interpretation. On the downside, due to the given set of scales, this approach cannot guarantee that all relevant perceptual dimensions are actually solicited from the test participants.

To reduce the influence of the test designers to a minimum, a suitable set of scales has to be developed through several pretests. In pretest 1, attributes describing the auditory impression of the listeners are collected. These terms are converted into scales and presented in a second pretest. An analysis of the second pretest data leads to a final selection of scales which are presented in the final SD experiment. On the basis of these attribute ratings, orthogonal factors can be derived with the help of a factor analysis. The realisation of this test will be described in the following section, and the results of the factor analysis are discussed in Section 4.

3. Experimental setup

This section gives an overview of the database of speech synthesizers collected for the listening tests. Moreover, it describes the approach used to gain a relevant set of attribute scales that describe the perceptual space of TTS systems in a more-or-less complete way.

3.1. Test database

10 German sentences from the EUROM.1 corpus [10] were chosen as source material. Since place names, proper names and words from foreign languages often use special pronunciation rules and thus cause trouble for speech synthesizers, the selected sentences did not contain any of these. To avoid user fatigue but still guarantee a valid impression of the occurring distortions, the sentences were shortened to a length of about 10 s each.

To capture a broad variety of distortions, we generated synthetic speech files from 14/15 different TTS systems for female/male speakers, for some of them with up to 6 different voices. Thus, data from 35/28 different configurations (female/male) could be produced. Besides the synthetic speech files, the database also contains stimuli from 4/4 amateur (female/male) and 4/4 professional (female/male) natural speakers. All speech files were downsampled to 16 kHz and level-normalized to -26 dBov using the speech-level meter [11].

The database contains speech material synthesized by the following systems: Acapela Infovox3, AT&T Natural Voice, atip Proser, BOSS, Cepstral Voices, Cereproc CereVoice, DRESS, Loquendo, MARY bits, MARY hmm-bits, MARY MBROLA, NextUp Talker, NextUp TextAloud3, Nuance RealSpeak, SVOX, and SyRUB.

Table 1: Synthesizers and voice configurations used in the main test.

Abbr.   Provider      Synthesizer                 Female   Male
BOS     RFW Bonn      BOSS                        1        -
DRE     TU Dresden    DRESS                       1        1
BIT     MARY          bits                        1        1
HMM     MARY          hmm-bits                    1        1
MBR     MARY          MBROLA                      1        1
SYR     RU Bochum     SyRUB                       -        1
CS1                   Commercial synthesizer 1    1        1
CS2                   Commercial synthesizer 2    2        1
CS3                   Commercial synthesizer 3    1        1
CS4                   Commercial synthesizer 4    1        1
CS5                   Commercial synthesizer 5    1        1
CS6                   Commercial synthesizer 6    1        1
CS7                   Commercial synthesizer 7    1        1
CS8                   Commercial synthesizer 8    1        1
CS9                   Commercial synthesizer 9    1        1
CS10                  Commercial synthesizer 10   -        1

3.2. Pretest 1

The objective of pretest 1 was to collect a broad basis of attributes describing auditory features of synthetic speech. Therefore, audio files from 12/13 different TTS systems with female/male voices plus 2 different natural speakers per gender were presented. 12 (4 female, 8 male) expert listeners from Deutsche Telekom Laboratories in Berlin took part in the test. The stimuli were presented in a quiet conference-room environment via headphones (AKG K601) in randomized order. Two sessions were conducted, one with female and one with male voices, with a break of 5 min in between. Every TTS system was covered with 2 stimuli. The listeners were instructed to write down nouns, adjectives and antonym pairs describing their auditory impression. Furthermore, they were asked to give an intensity rating for each attribute on a scale ranging from 1 to 10.

The listening test resulted in 2179 collected terms, out of which 296 unique descriptions were found. These attributes were condensed into 44 scales. Attribute scales that mainly rate features concerning individual voice character and accent, and those that rate the same perceptual features, were omitted.

3.3. Pretest 2

To narrow down the number of attribute scales, we omitted unnatural melody vs. natural melody, which correlated highly (R > 0.60) with the other scales that rate naturalness and thus measure similar features. Moreover, scales that were used rather rarely were dropped.

In order to gain a first impression of the perceptual space, a Principal Component Analysis (PCA) with Varimax rotation was performed on the remaining scales (= items) and 3 factors were extracted. Subsequently, all items with high loadings on multiple factors and items with communalities < 0.45 were discarded. This led to the following 16 attribute scales: artificial vs. natural, bumpy vs. not bumpy, clinking vs. not clinking, distorted vs. undistorted, fast vs. slow, hissing vs. not hissing, interrupted vs. continuous, noisy vs. not noisy, raspy vs. not raspy, several voices vs. one voice, tense vs. calm, undisturbed vs. disturbed, unintelligible vs. intelligible, unnatural accentuation vs. natural accentuation, unnatural rhythm vs. natural rhythm, unpleasant vs. pleasant (translations from German wordings).

3.4. Main test

For the main test a set of 15 different synthesizer configurations
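The corpus pre-processing in Section 3.1 (level normalization of all stimuli to -26 dBov) can be sketched as follows. This is a simplified illustration, not the paper's exact tool: plain RMS over the whole file stands in for the ITU-T P.56 active speech level, and 0 dBov is taken as the RMS of a full-scale square wave (1.0 for float signals).

```python
import numpy as np

def normalize_to_dbov(signal, target_dbov=-26.0):
    """Scale a float signal (full scale = +/-1.0) so its RMS level sits
    at target_dbov. Plain RMS is used instead of the P.56 active speech
    level, so this only approximates the speech-level meter in [11]."""
    rms = np.sqrt(np.mean(signal ** 2))
    current_dbov = 20.0 * np.log10(rms)   # 0 dBov = full-scale square wave
    gain = 10.0 ** ((target_dbov - current_dbov) / 20.0)
    return signal * gain

# Example: a quiet 440 Hz tone sampled at 16 kHz, brought up to -26 dBov.
tone = 0.05 * np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
normalized = normalize_to_dbov(tone)
```

The gain is constant over the whole file, so only the overall level changes; the waveform shape (and any distortions in it) is preserved.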
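The first pruning step in pretest 2 drops a scale when it correlates highly (R > 0.60) with scales that measure the same construct. A minimal sketch of such a redundancy check; the rating matrix and scale names below are made up for the demonstration:

```python
import numpy as np

def redundant_scales(ratings, names, threshold=0.60):
    """Return pairs of scale names whose absolute Pearson correlation
    exceeds threshold. ratings: (n_stimuli, n_scales) matrix of mean
    attribute ratings, one column per scale."""
    corr = np.corrcoef(ratings, rowvar=False)
    pairs = []
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[1]):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j]))
    return pairs

# Synthetic ratings: "melody" nearly duplicates "natural", "fast" is independent.
rng = np.random.default_rng(0)
base = rng.normal(size=50)
ratings = np.column_stack([base,
                           base + 0.05 * rng.normal(size=50),
                           rng.normal(size=50)])
flagged = redundant_scales(ratings, ["natural", "melody", "fast"])
```

From each flagged pair, one scale would then be dropped by hand, as the paper does for unnatural melody vs. natural melody.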
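The item screening described for pretest 2 (PCA on the remaining scales, Varimax rotation, 3 extracted factors, items with communality < 0.45 discarded) can be sketched in plain NumPy. The Varimax routine is the standard textbook algorithm, and the rating data are synthetic; this is an illustration of the procedure, not the paper's implementation:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of an (items x factors) loading matrix."""
    p, k = loadings.shape
    rot = np.eye(k)
    crit = 0.0
    for _ in range(n_iter):
        lr = loadings @ rot
        u, s, vt = np.linalg.svd(
            loadings.T @ (lr ** 3 - lr @ np.diag((lr ** 2).sum(axis=0)) / p))
        rot = u @ vt
        crit_new = s.sum()
        if crit_new < crit * (1 + tol):
            break
        crit = crit_new
    return loadings @ rot

def pca_screen(ratings, n_factors=3, min_communality=0.45):
    """PCA on the item correlation matrix; returns Varimax-rotated loadings
    and the indices of items whose communality falls below the threshold."""
    corr = np.corrcoef(ratings, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    rotated = varimax(loadings)
    communalities = (rotated ** 2).sum(axis=1)
    weak = np.where(communalities < min_communality)[0]
    return rotated, weak

# Synthetic ratings: three item pairs each driven by one latent factor,
# plus one pure-noise item that should fail the communality criterion.
rng = np.random.default_rng(1)
f = rng.normal(size=(200, 3))
e = rng.normal(size=(200, 7))
data = np.column_stack([0.9 * f[:, 0] + 0.3 * e[:, 0],
                        0.9 * f[:, 0] + 0.3 * e[:, 1],
                        0.9 * f[:, 1] + 0.3 * e[:, 2],
                        0.9 * f[:, 1] + 0.3 * e[:, 3],
                        0.9 * f[:, 2] + 0.3 * e[:, 4],
                        0.9 * f[:, 2] + 0.3 * e[:, 5],
                        e[:, 6]])
rotated, weak = pca_screen(data)
```

Communalities are invariant under orthogonal rotation, so the < 0.45 screening gives the same result before and after Varimax; the rotation only simplifies the loading pattern for interpretation.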
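The abstract's mapping of the three quality dimensions onto perceived overall quality amounts to regressing overall quality ratings on the factor scores; the dimension with the largest weight (naturalness, in the paper's result) dominates. A least-squares sketch on synthetic scores; the weights are invented for the demo, not the paper's values:

```python
import numpy as np

# Synthetic factor scores: columns stand for naturalness, disturbances,
# and temporal distortions; the "true" weights are assumptions for the demo.
rng = np.random.default_rng(2)
scores = rng.normal(size=(100, 3))
true_w = np.array([0.8, 0.3, 0.2])
overall = scores @ true_w + 0.1 * rng.normal(size=100)

# Fit overall quality from the factor scores (with intercept) by least squares.
X = np.column_stack([scores, np.ones(100)])
w, *_ = np.linalg.lstsq(X, overall, rcond=None)
dominant = int(np.argmax(np.abs(w[:3])))   # index of the strongest dimension
```

With roughly standardized factor scores, comparing the fitted weights directly gives the relative contribution of each dimension to overall quality.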