Cross Modal Evaluation of High Quality Emotional Speech Synthesis with the Virtual Human Toolkit
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
UN SYNTHÉTISEUR DE LA VOIX CHANTÉE BASÉ SUR MBROLA POUR LE MANDARIN (An MBROLA-Based Singing Voice Synthesizer for Mandarin), Liu Ning
Liu Ning, CICM (Centre de recherche en Informatique et Création Musicale), Université Paris VIII, MSH Paris Nord, [email protected]. Published in: Actes des Journées d'Informatique Musicale (JIM 2012), Mons, Belgium, 9-11 May 2012. HAL Id: hal-03041805, https://hal.archives-ouvertes.fr/hal-03041805, submitted on 5 Dec 2020. Abstract (translated from French; the excerpt is truncated in places): ... as a function of the melody and the lyrics from a MIDI file [7]. In this article, we present the project of developing a singing voice synthesizer for Mandarin Chinese based on MBROLA. Our objective is to develop a synthesizer that can work in real time and that is capable of ... Nevertheless, singing voice synthesis systems for the Chinese language still have technical limitations; for example, a system driven by reading a MIDI file is a deferred-time (offline) system.
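Since the synthesizer is built on MBROLA, the melody and lyrics ultimately have to be rendered in MBROLA's .pho input format, where each line carries a phoneme symbol, a duration in milliseconds, and optional (position %, pitch Hz) pairs. The Python sketch below illustrates generating such a file from note and lyric data; the phoneme symbols and the voice name "cn1" are placeholders, not the actual Mandarin inventory used in the paper.

```python
# Minimal sketch of driving MBROLA from note/lyric data. The phoneme
# symbols are illustrative SAMPA, not the paper's Mandarin voice inventory.

def midi_to_hz(note: int) -> float:
    """Convert a MIDI note number to a frequency in Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def make_pho(events) -> str:
    """events: list of (phoneme, duration_ms, midi_note) tuples.
    Returns text in MBROLA's .pho format: 'phoneme duration (pos% pitch)*'."""
    lines = ["_ 100"]  # leading silence
    for pho, dur, note in events:
        hz = midi_to_hz(note)
        # hold a flat pitch across the note: targets at 0% and 100%
        lines.append(f"{pho} {dur} 0 {hz:.0f} 100 {hz:.0f}")
    lines.append("_ 100")  # trailing silence
    return "\n".join(lines)

# "ni hao" on two notes, as a toy example
score = [("n", 80, 67), ("i", 320, 67), ("x", 80, 64), ("a", 200, 64), ("u", 120, 64)]
with open("song.pho", "w") as f:
    f.write(make_pho(score))
# Render with e.g.: mbrola cn1 song.pho song.wav
# ("cn1" is a placeholder; actual Mandarin voice names vary)
```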
eSpeak: Speech Synthesis
Software Requirements Specification for eSpeak: Speech Synthesis, Version 1.48.15. Prepared by Dimitrios Koufounakis, January 10, 2018. Based on the SRS template copyright © 2002 by Karl E. Wiegers; permission is granted to use, modify, and distribute this document. The table of contents covers: 1. Introduction (1.1 Purpose, 1.2 Document Conventions, 1.3 Intended Audience and Reading Suggestions, 1.4 Project Scope, 1.5 References) and 2. Overall Description.
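eSpeak itself is scriptable from the command line, which is a convenient way to exercise the behavior such a specification covers. A minimal sketch, assuming the espeak binary is installed and on the PATH (some distributions ship it as espeak-ng):

```python
# Invoke the eSpeak command-line tool from Python using its standard
# voice (-v), speed (-s), pitch (-p), and WAV-output (-w) options.
import subprocess

def speak(text: str, voice: str = "en", wpm: int = 150,
          pitch: int = 50, wav_out: str | None = None) -> None:
    """Render `text` with eSpeak; write a WAV file if `wav_out` is given."""
    cmd = ["espeak", "-v", voice, "-s", str(wpm), "-p", str(pitch)]
    if wav_out:
        cmd += ["-w", wav_out]  # write audio to file instead of the speakers
    cmd.append(text)
    subprocess.run(cmd, check=True)

speak("Hello from eSpeak.", voice="en", wpm=160)
speak("Saved to disk.", wav_out="out.wav")
```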
Fully Generated Scripted Dialogue for Embodied Agents
Kees van Deemter 1, Brigitte Krenn 2, Paul Piwek 3, Martin Klesen 4, Marc Schröder 5, and Stefan Baumann 6. Abstract: This paper presents the NECA approach to the generation of dialogues between Embodied Conversational Agents (ECAs). This approach consists of the automated construction of an abstract script for an entire dialogue (cast in terms of dialogue acts), which is incrementally enhanced by a series of modules and finally "performed" by means of text, speech and body language, by a cast of ECAs. The approach makes it possible to automatically produce a large variety of highly expressive dialogues, some of whose essential properties are under the control of a user. The paper discusses the advantages and disadvantages of NECA's approach to Fully Generated Scripted Dialogue (FGSD), and explains the main techniques used in the two demonstrators that were built. The paper can be read as a survey of issues and techniques in the construction of ECAs, focussing on the generation of behaviour (i.e., focussing on information presentation) rather than on interpretation. Keywords: Embodied Conversational Agents, Fully Generated Scripted Dialogue, Multimodal Interfaces, Emotion Modelling, Affective Reasoning, Natural Language Generation, Speech Synthesis, Body Language. 1. Introduction: A number of scientific disciplines have started, in the last decade or so, to join forces to build Embodied Conversational Agents (ECAs): software agents with a human-like synthetic voice and 1 University of Aberdeen, Scotland, UK (corresponding author, email [email protected]) 2 Austrian Research Center for Artificial Intelligence (OEFAI), University of Vienna, Austria 3 Centre for Research in Computing, The Open University, UK 4 German Research Center for Artificial Intelligence (DFKI), Saarbruecken, Germany 5 German Research Center for Artificial Intelligence (DFKI), Saarbruecken, Germany 6 University of Koeln, Germany.
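As a concrete (and purely illustrative) picture of that architecture, the sketch below passes a script of dialogue acts through a chain of enrichment modules; all module and field names are invented for the example and are not NECA's actual components.

```python
# Illustrative sketch (not NECA's code) of the paper's pipeline idea:
# an abstract script of dialogue acts is passed through modules that
# incrementally add text, speech, and body-language annotations.
from dataclasses import dataclass, field

@dataclass
class DialogueAct:
    speaker: str
    act: str                      # e.g. "greet", "inform", "agree"
    content: str
    annotations: dict = field(default_factory=dict)

def text_generator(script):       # adds surface text for each act
    for d in script:
        d.annotations["text"] = f"{d.speaker} {d.act}s: {d.content}"
    return script

def speech_annotator(script):     # adds placeholder prosody marks
    for d in script:
        d.annotations["prosody"] = "neutral"
    return script

def gesture_annotator(script):    # adds placeholder body-language cues
    for d in script:
        d.annotations["gesture"] = "nod" if d.act == "agree" else "none"
    return script

script = [DialogueAct("Agent1", "greet", "hello"),
          DialogueAct("Agent2", "inform", "the car is cheap")]
for module in (text_generator, speech_annotator, gesture_annotator):
    script = module(script)
for d in script:
    print(d.annotations)
```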
A University-Based Smart and Context Aware Solution for People with Disabilities (USCAS-PWD)
Computers (MDPI journal), article: A University-Based Smart and Context Aware Solution for People with Disabilities (USCAS-PWD). Ghassan Kbar 1,*, Mustufa Haider Abidi 2, Syed Hammad Mian 2, Ahmad A. Al-Daraiseh 3 and Wathiq Mansoor 4. 1 Riyadh Techno Valley, King Saud University, P.O. Box 3966, Riyadh 12373-8383, Saudi Arabia; 2 FARCAMT CHAIR, Advanced Manufacturing Institute, King Saud University, Riyadh 11421, Saudi Arabia, [email protected] (M.H.A.), [email protected] (S.H.M.); 3 College of Computer and Information Sciences, King Saud University, Riyadh 11421, Saudi Arabia, [email protected]; 4 Department of Electrical Engineering, University of Dubai, Dubai 14143, United Arab Emirates, [email protected]. * Correspondence: [email protected] or [email protected]; Tel.: +966-114693055. Academic Editor: Subhas Mukhopadhyay. Received: 23 May 2016; Accepted: 22 July 2016; Published: 29 July 2016. Abstract: (1) Background: A disabled student or employee at a university faces a large number of obstacles in carrying out his/her ordinary duties. An interactive smart search and communication application can support people on the university campus and Science Park in a number of ways; primarily, it can strengthen their professional network and establish a responsive eco-system. The objective of this research work is therefore to design and implement a unified, flexible, and adaptable interface. This interface supports an intensive search and communication tool across the university and would benefit everybody on campus, especially People with Disabilities (PWDs). (2) Methods: Three main contributions are presented: (A) Assistive Technology (AT) software design and implementation (based on user- and technology-centered design); (B) a wireless sensor network employed to track and determine users' locations; and (C) a novel event-behavior algorithm and movement-direction algorithm used to monitor and predict users' behavior and intervene with them and their caregivers when required.
Voicesetting: Voice Authoring UIs for Improved Expressivity in Augmentative Communication
Alexander J. Fiannaca, Ann Paradiso, Jon Campbell, Meredith Ringel Morris. Microsoft Research, Redmond, WA, USA. {alfianna, annpar, joncamp, merrie}@microsoft.com. Abstract: Alternative and augmentative communication (AAC) systems used by people with speech disabilities rely on text-to-speech (TTS) engines for synthesizing speech. Advances in TTS systems allowing for the rendering of speech with a range of emotions have yet to be incorporated into AAC systems, leaving AAC users with speech that is mostly devoid of emotion and expressivity. In this work, we describe voicesetting as the process of authoring the speech properties of text. We present the design and evaluation of two voicesetting user interfaces: the Expressive Keyboard, designed for rapid addition of expressivity to speech, and the Voicesetting Editor, designed for more careful crafting of the way text should be spoken. ... prosodic features such as the rate of speech, the pitch of the voice, and the cadence/pacing of words. The fact that these advances have yet to be incorporated into SGDs in a meaningful way is a major issue for AAC. Recent work such as that of Higginbotham [9], Kane et al. [11], and Pullin et al. [22] has described the importance of this issue and the need to develop better AAC systems capable of more expressive speech, but to date there are no research or commercially available AAC devices that provide advanced expressive speech capabilities, with a majority only allowing basic modification of speech parameters (overall rate and volume) that cannot be varied on-the-fly (as utterances are constructed and played), while not leveraging the capabilities ...
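The speech properties voicesetting targets (rate, pitch, pacing) are commonly handed to TTS engines as SSML prosody markup. The sketch below shows the kind of output a voicesetting interface could emit; it is an illustration, not the paper's actual Expressive Keyboard or Voicesetting Editor.

```python
# Hedged illustration: expressing per-phrase rate/pitch choices as SSML,
# the standard markup for prosody control in many TTS engines.
from xml.sax.saxutils import escape

def prosody(text: str, rate: str = "medium", pitch: str = "medium") -> str:
    """Wrap a phrase in an SSML <prosody> element."""
    return f'<prosody rate="{rate}" pitch="{pitch}">{escape(text)}</prosody>'

phrases = [
    ("I am SO happy for you!", "fast", "high"),  # excited delivery
    ("I really don't know.", "slow", "low"),     # hesitant delivery
]
body = "".join(prosody(t, r, p) for t, r, p in phrases)
ssml = f'<speak version="1.0" xml:lang="en-US">{body}</speak>'
print(ssml)  # hand this document to an SSML-capable TTS engine
```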
Design and Implementation of Text to Speech Conversion for Visually Impaired People
Itunuoluwa Isewon* (corresponding author), Jelili Oyelade, Olufunke Oladipupo. Department of Computer and Information Sciences, Covenant University, PMB 1023, Ota, Nigeria. International Journal of Applied Information Systems (IJAIS), ISSN 2249-0868, Foundation of Computer Science FCS, New York, USA, Volume 7, No. 2, April 2014, www.ijais.org. Abstract: A text-to-speech synthesizer is an application that converts text into spoken word, by analyzing and processing the text using Natural Language Processing (NLP) and then using Digital Signal Processing (DSP) technology to convert this processed text into a synthesized speech representation. Here, we developed a useful text-to-speech synthesizer in the form of a simple application that converts inputted text into synthesized speech and reads it out to the user, which can then be saved as an .mp3 file. The development of a text-to-speech synthesizer will be of great help to people with visual impairment and make going through large volumes of text easier. (Figure 1: A simple but general functional diagram of a TTS system. [2]) Keywords: Text-to-speech synthesis, Natural Language Processing, Digital Signal Processing. 2. Overview of Speech Synthesis: Speech synthesis can be described as the artificial production of human speech [3]. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware.
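The NLP-then-DSP pipeline the abstract describes is what off-the-shelf TTS engines encapsulate. As a minimal illustration (not the authors' application, which was built as a standalone GUI), the following sketch uses the pyttsx3 library to read a text file aloud or save the audio; note that the format written by save_to_file depends on the platform's speech backend, so it may be WAV or AIFF rather than MP3.

```python
# Minimal read-aloud-and-save sketch using pyttsx3 (pip install pyttsx3).
# Illustrative only; the audio format written depends on the OS backend.
import pyttsx3

def read_text_aloud(text: str, out_file: str | None = None) -> None:
    """Speak `text`, or render it to `out_file` if a path is given."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # words per minute
    if out_file:
        engine.save_to_file(text, out_file)
    else:
        engine.say(text)
    engine.runAndWait()  # blocks until speaking/rendering finishes

with open("document.txt", encoding="utf-8") as f:
    read_text_aloud(f.read(), out_file="document.wav")
```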
Feasibility Study on a Text-To-Speech Synthesizer for Embedded Systems
2006:113 CIV, Master's Thesis: Feasibility Study on a Text-To-Speech Synthesizer for Embedded Systems. Linnea Hammarstedt, Luleå University of Technology, MSc Programmes in Engineering, Electrical Engineering, Department of Computer Science and Electrical Engineering, Division of Signal Processing. ISSN 1402-1617, ISRN LTU-EX--06/113--SE. Preface: This is a master's degree project commissioned by and performed at Teleca Systems GmbH in Nürnberg at the department of Speech Technology. Teleca is an IT services company focused on developing and integrating advanced software and information technology solutions. Today Teleca possesses a speech recognition system including a grapheme-to-phoneme module, i.e., an algorithm converting text into phonetic notation. Their future objective is to develop a Text-To-Speech system including this module. The purpose of this work, from Teleca's point of view, is to investigate a possible solution for converting phonetic notation into speech suitable for an embedded implementation platform. I would like to thank Dr. Andreas Kiessling at Teleca for his support and patient discussions during this work, and Dr. Stefan Dobler, the head of department, for giving me the possibility to experience this interesting field of speech technology. Finally, I wish to thank all the other personnel of the department for their consistently practical support. Abstract: A system converting textual information into speech is usually denoted as a TTS (Text-To-Speech) system. The design of this system varies depending on its purpose and platform requirements. In this thesis a TTS synthesizer designed for an embedded system operating on an arbitrary vocabulary has been evaluated and partially implemented in Matlab, constituting a base for further development.
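The grapheme-to-phoneme module mentioned in the preface can be pictured as a pronunciation-dictionary lookup with a letter-to-sound fallback for unknown words. The following toy sketch illustrates that structure only; it is not Teleca's module, and the phoneme symbols are ad hoc.

```python
# Toy grapheme-to-phoneme conversion: dictionary lookup with a naive
# letter-to-sound fallback for out-of-vocabulary words.
LEXICON = {
    "speech": ["S", "P", "IY", "CH"],
    "hello":  ["HH", "AH", "L", "OW"],
}
LETTER_TO_SOUND = {"a": "AH", "e": "EH", "i": "IH", "o": "OW", "u": "UH"}

def g2p(word: str) -> list[str]:
    word = word.lower()
    if word in LEXICON:  # dictionary hit
        return LEXICON[word]
    # fallback: map vowels via the table, pass consonants through uppercased
    return [LETTER_TO_SOUND.get(ch, ch.upper()) for ch in word if ch.isalpha()]

print(g2p("hello"))  # ['HH', 'AH', 'L', 'OW']
print(g2p("tts"))    # ['T', 'T', 'S']  (fallback path)
```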
Assisting the Speech Impaired People Using Text-to-Speech Synthesis
International Journal of Emerging Engineering Research and Technology, Volume 3, Issue 8, August 2015, pp. 214-224, ISSN 2349-4395 (Print) & ISSN 2349-4409 (Online). Ledisi G. Kabari and Ledum F. Atu, Department of Computer Science, Rivers State Polytechnic, Bori, Nigeria. Abstract: Many people have some sort of disability which impairs their ability to communicate, so that, for all their intelligence, they are not able to participate in conferences or meeting proceedings, which in turn inhibits their contributions to the development of the nation. Work on alternative and augmentative communication (AAC) devices attempts to address this need. This paper designs a system that can be used to read out an input to its user. The system was developed using Visual Basic 6.0 for the user interface, and was interfaced with the Lernout & Hauspie TruVoice Text-to-Speech Engine (American English) and Microsoft Agent. The system allows users to input the text to be read out, and also allows them to open text documents in Rich Text Format (.rtf) and plain text format (.txt) that have been saved on disk, for reading. Keywords: Text-To-Speech (TTS), Digital Signal Processing (DSP), Natural Language Processing (NLP), Alternative and Augmentative Communication (AAC), Rich Text Format (RTF), Text File Format (TFF). Introduction: The era is now here where the minds and reasoning of all are needed for the development of a nation. Thus people who have some sort of disability which impairs their ability to communicate cannot be left behind.
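The file-reading path the paper describes (open a .txt or .rtf document and speak it) can be sketched with modern off-the-shelf pieces; this is an illustration rather than the authors' VB6/Microsoft Agent stack, and it assumes the third-party striprtf and pyttsx3 packages.

```python
# Load a .txt or .rtf document, reduce it to plain text, and speak it.
# Illustrative only: uses striprtf (pip install striprtf) and pyttsx3
# instead of the VB6 + L&H TruVoice stack the authors used.
from pathlib import Path
from striprtf.striprtf import rtf_to_text
import pyttsx3

def load_document(path: str) -> str:
    """Return the plain-text content of a .txt or .rtf file."""
    raw = Path(path).read_text(encoding="utf-8", errors="replace")
    return rtf_to_text(raw) if path.lower().endswith(".rtf") else raw

engine = pyttsx3.init()
engine.say(load_document("speech_notes.rtf"))
engine.runAndWait()
```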
Multi-Lingual Screen Reader and Processing of Font-Data in Indian Languages
MULTI-LINGUAL SCREEN READER AND PROCESSING OF FONT-DATA IN INDIAN LANGUAGES. A thesis submitted by A. Anand Arokia Raj for the award of the degree of Master of Science (by Research) in Computer Science & Engineering. Language Technologies Research Center, International Institute of Information Technology, Hyderabad - 500 032, India, February 2008. The thesis is dedicated to the author's parents, Antoni Muthu Rayar and Muthu Mary. Thesis certificate: the thesis is certified as a bonafide record of research work carried out under the supervision of Mr. S. Kishore Prahallad, and its contents have not been submitted to any other institute or university for the award of any degree or diploma. In the acknowledgments the author thanks his adviser Mr. Kishore Prahallad for guidance, and Prof. Rajeev Sangal and Dr. Vasudeva Varma for valuable advice and inputs for improving the screen-reading software.
Microsoft Voices Download Windows 10 Use Voice Recognition in Windows 10
Use voice recognition in Windows 10. Before you set up voice recognition, make sure you have a microphone set up: select the Start button, then select Settings > Time & Language > Speech; under Microphone, select the Get started button.
Help your PC recognize your voice. You can teach Windows 10 to recognize your voice. Here's how to set it up: in the search box on the taskbar, type Windows Speech Recognition, and then select Windows Speech Recognition in the list of results. If you don't see a dialog box that says "Welcome to Speech Recognition Voice Training," then in the search box on the taskbar, type Control Panel, select Control Panel in the list of results, and then select Ease of Access > Speech Recognition > Train your computer to understand you better.
Appendix A: Supported languages and voices. The following table explains what languages and text-to-speech (TTS) voices are available in the latest version of Windows. (Table columns: Language, country, or region; Female TTS voice. The voice-name column is not preserved in this excerpt.) Languages listed: Arabic (Saudi Arabia); Cantonese (Traditional, Hong Kong SAR); Chinese (Traditional, Taiwan); Czech (Czech Republic); English (Great Britain); English (United States); Flemish (Belgian Dutch).
Add a TTS voice to your PC. To use one of these voices, add it to your PC: open Narrator Settings by pressing the Windows logo key + Ctrl + N. Under Personalize Narrator's voice, select Add more voices; this will take you to the Speech settings page. Under Manage voices, select Add voices. Select the language you would like to install voices for and select Add. The new voices will download and be ready for use in a few minutes, depending on your internet download speed.
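For programmatic access to whatever voices are installed through the steps above, Windows exposes the SAPI automation interface. A minimal sketch, assuming Windows with the pywin32 package installed:

```python
# Enumerate the installed Windows TTS voices and speak with one of them
# via the SAPI COM automation interface (Windows only; pip install pywin32).
import win32com.client

speaker = win32com.client.Dispatch("SAPI.SpVoice")
voices = speaker.GetVoices()
for i in range(voices.Count):
    # Prints descriptions such as "Microsoft Zira Desktop - English (United States)"
    print(i, voices.Item(i).GetDescription())

speaker.Voice = voices.Item(0)  # pick the first installed voice
speaker.Rate = 0                # -10 (slow) .. 10 (fast)
speaker.Speak("This voice was selected through SAPI.")
```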
A Unit Selection Text-To-Speech-And-Singing Synthesis Framework from Neutral Speech: Proof of Concept
Adding expressiveness to unit selection speech synthesis and to numerical voice production. Doctoral thesis by Marc Freixes Guerreiro, Universitat Ramon Llull. http://hdl.handle.net/10803/672066. Repository notice (condensed translation of the Catalan and Spanish TDX notices): access to the contents of this doctoral thesis and its use must respect the rights of the author. It may be used for consultation or personal study, as well as in research and teaching activities, under the terms established in art. 32 of the consolidated text of the Spanish Intellectual Property Law (RDL 1/1996); any other use requires the prior express authorization of the author. In any use of its contents, the author's name and the title of the thesis must be clearly indicated. Reproduction or other forms of for-profit exploitation are not authorized, nor is public communication from a site other than the TDX service or presentation of its contents in a frame foreign to TDX (framing). These restrictions affect the thesis contents as well as its abstracts and indexes.
Controlling Synthetic Voices via MIDI-Accordeon
PHONODEON: CONTROLLING SYNTHETIC VOICES VIA MIDI-ACCORDEON. Anastasia Georgaki (Music Department, University of Athens, Athens, Greece, [email protected]); Ioannis Zannos (Audiovisual Arts Department, Ionian University, Corfu, Greece, [email protected]); Nikolaos Valsmakis (Technological Institute of Crete, Department of Music Acoustics and Technology, Rethymnon, Greece, [email protected]). Abstract: This paper presents research on controlling a synthetic voice via a MIDI accordion. The motivation for this research was the goal of reviving "lost instruments", that is, of using existing traditional instruments "lost" to new music genres as interfaces to control new types of sound generation in novel and future genres of music. We worked with a MIDI accordion because it provides expressive control similar to the human breath through the mechanism of the bellows. The control interface of the two hands is also suited to the characteristics of the singing voice: the right hand controls the pitch of the voice as traditionally used. ... the particularities of the vocal signal, or the refinement of established sound synthesis techniques. For example, one of the techniques based on estimating the spectral envelope, while allowing the precise extraction of formants of high frequencies of female voices, poses many difficulties at the level of analysis [X. Rodet, 1992]. On the other hand, the task of automatic generation of control parameters is further complicated by the non-linearity of the vocal signal [P. Cook, 1996]. While research on the synthesis of the singing voice is well established and widely present in computer music and related fields, the aspect of controlling synthetic voices through real instruments is as yet understudied.
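As an illustration of the kind of mapping the paper describes (not the PHONODEON implementation itself), the sketch below routes right-hand MIDI notes to the voice's pitch and treats a continuous controller as the bellows signal driving breath intensity. The choice of CC 2 (the MIDI breath controller) for the bellows is an assumption, and the mido library (pip install mido) handles MIDI input.

```python
# Illustrative mapping of MIDI accordion input to voice-synthesis controls:
# note_on messages set pitch, a continuous controller stands in for bellows
# pressure. Not the PHONODEON code; CC 2 as the bellows channel is assumed.
import mido

def midi_to_hz(note: int) -> float:
    """Convert a MIDI note number to a frequency in Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def handle(msg, synth_params: dict) -> None:
    if msg.type == "note_on" and msg.velocity > 0:
        synth_params["pitch_hz"] = midi_to_hz(msg.note)  # right hand: pitch
    elif msg.type == "control_change" and msg.control == 2:
        synth_params["breath"] = msg.value / 127.0       # bellows: intensity
    print(synth_params)  # a real system would feed these to the synthesizer

params = {"pitch_hz": 0.0, "breath": 0.0}
with mido.open_input() as port:  # opens the default MIDI input port
    for message in port:         # blocking iteration over incoming messages
        handle(message, params)
```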