Personalising Synthetic Voices for Individuals with Severe Speech Impairment


Sarah M. Creer
Doctor of Philosophy
Department of Computer Science, University of Sheffield, Sheffield, UK
August 2009

Table of Contents

1 Introduction
   1.1 Population affected by speech loss and impairment
      1.1.1 Dysarthria
   1.2 Voice Output Communication Aids
   1.3 Scope of the thesis
   1.4 Thesis overview
      1.4.1 Chapter 2: VOCAs and social interaction
      1.4.2 Chapter 3: Speech synthesis methods and evaluation
      1.4.3 Chapter 4: HTS - HMM-based synthesis
      1.4.4 Chapter 5: Voice banking using HMM-based synthesis for data pre-deterioration
      1.4.5 Chapter 6: Building voices using dysarthric data
      1.4.6 Chapter 7: Conclusions and further work
2 VOCAs and social interaction
   2.1 Introduction
   2.2 Acceptability of AAC: social interaction perspective
   2.3 Acceptability of VOCAs
      2.3.1 Speed of access
      2.3.2 Aspects related to the output voice
   2.4 Dysarthric speech
      2.4.1 Types of dysarthria
      2.4.2 Acoustic description of dysarthria
      2.4.3 Theory of dysarthric speech production
   2.5 Requirements of a VOCA
   2.6 Conclusions
3 Speech synthesis methods and evaluation
   3.1 Introduction
   3.2 Evaluation
      3.2.1 Intelligibility
      3.2.2 Naturalness
      3.2.3 Similarity to target speaker
   3.3 Methods of synthesis
      3.3.1 Articulatory synthesis
      3.3.2 Parametric synthesis
      3.3.3 Concatenative synthesis
      3.3.4 Model-based synthesis
      3.3.5 Voice conversion
   3.4 Conclusions
4 HTS - HMM-based synthesis
   4.1 Introduction
   4.2 HTS overview
      4.2.1 Speaker dependent system
      4.2.2 Speaker adaptation system
   4.3 Feature vectors
   4.4 Model details: contexts and parameter sharing
   4.5 Model construction
      4.5.1 Speaker dependent models
      4.5.2 Speaker adaptation models
   4.6 Synthesis
      4.6.1 Global variance
   4.7 HMM-based synthesis and dysarthric speech
      4.7.1 Misalignment between data and labels
      4.7.2 Proposed approach
   4.8 Conclusion
5 Voice banking using HMM-based synthesis for speech data pre-deterioration
   5.1 Introduction
   5.2 Background
   5.3 Evaluation
      5.3.1 Stimuli
      5.3.2 Participants
      5.3.3 Procedure
      5.3.4 Results
      5.3.5 Discussion
   5.4 Acoustic features affecting listener judgements
      5.4.1 Introduction
      5.4.2 Multi-layer perceptrons
      5.4.3 Feature extraction
      5.4.4 Training
      5.4.5 Results: speaker-dependent MLPs
      5.4.6 Results: speaker-independent MLPs
      5.4.7 Discussion
   5.5 Conclusions
6 Building voices using dysarthric data
   6.1 Introduction
   6.2 Evaluation
      6.2.1 Method
   6.3 Results
      6.3.1 Speaker 3
      6.3.2 Speaker 4
      6.3.3 Speaker 5
   6.4 Discussion
      6.4.1 Limitations of the evaluation
      6.4.2 Capturing speaker characteristics
      6.4.3 Voice output quality
      6.4.4 Manipulation of prosodic features
      6.4.5 Speech reconstruction
      6.4.6 Speaker acceptability
   6.5 Conclusion
7 Conclusions and further work
   7.1 Introduction
   7.2 Conclusions
   7.3 Contributions
   7.4 Further work
      7.4.1 Value assessment of the application
      7.4.2 Automation of the procedure
      7.4.3 Specification of target population
   7.5 Summary
A Speech production theory
   A.1 Classical phonetics and phonology
      A.1.1 International phonetic alphabet
      A.1.2 Limitations
   A.2 Coarticulation theory
      A.2.1 Target theory
   A.3 Action theory
   A.4 Articulatory phonology
   A.5 Task dynamics
   A.6 Cognitive phonetics
B HMM-based speech synthesis: HTS further details
   B.1 Hidden Markov Models
   B.2 HMM-based synthesis features
      B.2.1 Spectral features
      B.2.2 Log F0
      B.2.3 Aperiodicity
   B.3 Context-dependent modelling for HMM-based synthesis
   B.4 Label file format
   B.5 Average voice building and speaker adaptation
      B.5.1 Adaptation from the average voice
C Test set sentences
D Protocol for data selection process
E Test set for evaluation of voices built with dysarthric data

List of Figures

3.1 Source-filter model
3.2 Basic cascade formant synthesiser
3.3 Basic parallel formant synthesiser configuration
3.4 Decreasing pitch using PSOLA
4.1 Overview of the basic HTS HMM-based speech synthesis speaker-dependent system
4.2 Overview of the HTS HMM-based synthesis speaker-independent adaptation system
4.3 The phrase "a cargo back as well" as spoken by a dysarthric speaker, labelled for use as adaptation data
4.4 Possible component feature selection from the average voice model and the participant speaker model to produce an output speaker model
5.1 Boxplot of results for the listener experiment for speaker 1
5.2 Boxplot of results for the listener experiment for speaker 2
5.3 Overall results of the listener experiment for speaker 1
5.4 Overall results of the listener experiment for speaker 2
5.5 Multi-layer perceptron
5.6 Results of the speaker-dependent MLP experiment for speaker 1
5.7 Results of the speaker-dependent MLP experiment for speaker 2
5.8 Results of the listener experiments (equal contribution from each speaker)
5.9 Results of the speaker-independent MLP experiments
6.1 Amount of data accepted by the system in the case of unedited original recorded data and data edited for intelligibility
A.1 Alternative models of speech production
B.1 Hidden Markov Model
B.2 Hidden Semi-Markov Model (HSMM)
B.3 Aperiodicity component extraction
B.4 Conversion of utterance-level orthographic transcription to phonetic and prosodic labelling
B.5 Component parts and structure of CSMAPLR

Abstract

Speech technology can help individuals with speech disorders to interact more easily. Many individuals with severe speech impairment, due to conditions such as Parkinson's disease or motor neurone disease, use voice output communication aids (VOCAs), which have synthesised or pre-recorded voice output. This voice output effectively becomes the voice of the individual and should therefore represent the user accurately. Currently available techniques for personalising synthetic speech require a large amount of input data, which is difficult for individuals with severe speech impairment to produce. These techniques also offer no solution for those individuals whose voices have already begun to show the effects of dysarthria. The thesis shows that Hidden Markov Model (HMM)-based speech synthesis is a promising approach to 'voice banking', both for individuals recorded before their condition causes deterioration of the speech and for those in whom deterioration has begun. The amount of input data required to build a personalised voice with this technique is investigated using human listener judgements. The results show that 100 sentences is the minimum required to build a voice that is significantly different from an average voice model and shows some resemblance to the target speaker; this amount depends on the speaker and the average model used. A neural network analysis trained on extracted acoustic features revealed that spectral features had the most influence in predicting human listener judgements of the similarity of synthesised speech to a target speaker, and that prediction accuracy improves significantly when other acoustic features are introduced and combined non-linearly.
These results were used to inform the reconstruction of personalised synthetic voices for speakers whose voices had begun to show the effects of their conditions. Using HMM-based synthesis, personalised synthetic voices were built from dysarthric speech that showed similarity to the target speakers without recreating the impairment in the synthesised speech output.

Acknowledgements

Many thanks must go to my supervisors, Phil Green and Stuart Cunningham, whose experience, knowledge and direction were always appreciated and gratefully received. This research was sponsored by the Engineering and Physical Sciences Research Council. This work would not have been possible without the contribution of Junichi Yamagishi, whose willingness to share his expertise and ideas has been greatly appreciated. Thanks must also go to others at CSTR who made my visit there so useful and enjoyable. Thanks to Margaret Freeman in the Department of Human Communication Sciences for the advice and assistance in finding participants. I am extremely grateful to all the participants in this study, whose time and contributions have been essential and much appreciated. Thanks to all those in the VIVOCA project and the CAST and RAT groups for their interest and involvement in this work and their general support. A thank you goes to SpandH, past and present, for all the assistance, discussions, coffee times and the willingness to tolerate the quizzes and sweepstakes. Thanks also to those on ICOSS floor 3 for the tea and the solidarity throughout all weather conditions. It is good to have the opportunity to thank my friends and family for their great and continuing support.
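The neural-network analysis described in the abstract (a multi-layer perceptron mapping acoustic features to listener similarity judgements) can be illustrated with a minimal sketch. Everything below is invented for illustration: the feature values, ratings, network size and training settings are stand-ins, not the thesis's actual data or code.

```python
import numpy as np

# Minimal sketch (not the thesis code): train a small multi-layer perceptron
# to map acoustic features extracted from synthesised/target utterance pairs
# to listener similarity ratings. Features and data below are invented.

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=8, lr=0.05, epochs=2000):
    """Fit a one-hidden-layer tanh MLP regressor by full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # hidden activations
        err = (h @ W2 + b2) - y[:, None]    # prediction error
        # backpropagate the squared-error gradient
        gW2 = h.T @ err / n; gb2 = err.mean(0)
        gh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = X.T @ gh / n; gb1 = gh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()

# Toy data: a "similarity rating" that depends non-linearly on three
# hypothetical acoustic distance features.
X = rng.uniform(-1.0, 1.0, (200, 3))
y = np.tanh(2.0 * X[:, 0]) + 0.3 * X[:, 1] * X[:, 2]

model = train_mlp(X, y)
mse = float(np.mean((model(X) - y) ** 2))   # training error after fitting
```

The non-linear hidden layer is what allows combinations of features to improve prediction over any single feature, which is the effect the abstract reports.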
Recommended publications
  • Expression Control in Singing Voice Synthesis
    Expression Control in Singing Voice Synthesis: features, approaches, evaluation, and challenges. Martí Umbert, Jordi Bonada, Masataka Goto, Tomoyasu Nakano, and Johan Sundberg. IEEE Signal Processing Magazine, November 2015. DOI: 10.1109/MSP.2015.2424572.
    In the context of singing voice synthesis, expression control manipulates a set of voice features related to a particular emotion, style, or singer. Also known as performance modeling, it has been approached from different perspectives and for different purposes, and different projects have shown a wide extent of applicability. The aim of this article is to provide an overview of approaches to expression control in singing voice synthesis. We introduce some musical applications that use singing voice synthesis techniques to justify the need for an accurate control of expression. Then, expression is defined and related to speech and instrument performance modeling. Next, we present the commonly studied set of voice parameters that can change [...] voices that are difficult to produce naturally (e.g., castrati). More examples can be found with pedagogical purposes or as tools to identify perceptually relevant voice properties [3]. These applications of the so-called music information research field may have a great impact on the way we interact with music [4]. Examples of research projects using singing voice synthesis technologies are listed in Table 1.
    Table 1: Research projects using singing voice synthesis technologies.
    Cantor - http://www.virsyn.de
    Cantor Digitalis - https://cantordigitalis.limsi.fr/
    Chanter - https://chanter.limsi.fr
  • Attuning Speech-Enabled Interfaces to User and Context for Inclusive Design: Technology, Methodology and Practice
    Univ Access Inf Soc (2009) 8:109-122. DOI 10.1007/s10209-008-0136-x. LONG PAPER.
    Attuning speech-enabled interfaces to user and context for inclusive design: technology, methodology and practice
    Mark A. Neerincx, Anita H. M. Cremers, Judith M. Kessens, David A. van Leeuwen, Khiet P. Truong. Published online: 7 August 2008. © The Author(s) 2008.
    Abstract: This paper presents a methodology to apply speech technology for compensating sensory, motor, cognitive and affective usage difficulties. It distinguishes (1) an analysis of accessibility and technological issues for the identification of context-dependent user needs and corresponding opportunities to include speech in multimodal user interfaces, and (2) an iterative generate-and-test process to refine the interface prototype and its design rationale. Best practices show that such inclusion of speech technology, although still imperfect in itself, can enhance both the functional and affective information and communication technology experiences of specific user groups, such as persons with reading difficulties, hearing-impaired, intellectually disabled, children and older adults.
    1 Introduction: Speech technology seems to provide new opportunities to improve the accessibility of electronic services and software applications, by offering compensation for limitations of specific user groups. These limitations can be quite diverse and originate from specific sensory, physical or cognitive disabilities, such as difficulties to see icons, to control a mouse or to read text. Such limitations have both functional and emotional aspects that should be addressed in the design of user interfaces (cf. [49]). Speech technology can be an 'enabler' for understanding both the content and 'tone' in user expressions, and for producing the right...
  • Voice Synthesizer Application Android
    Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations, such as phonetic transcriptions, as speech. Synthesized speech can be created by concatenating fragments of recorded speech that are stored in a database. Systems differ in the size of the stored speech units: a system that stores phones or diphones provides the greatest output range, but may lack clarity. For specific domains, storing whole words or sentences allows high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other characteristics of the human voice to create a fully synthetic voice output. The quality of a speech synthesizer is judged by its similarity to the human voice and its ability to be understood clearly. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written words on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s. A typical example of TTS in use is an automatic announcement system in Sweden, where a synthetic voice announces the arriving train.
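The concatenation idea in the excerpt above (joining stored recorded units into an utterance) can be sketched as follows. This is an illustrative toy, not any real TTS system: the "diphone" inventory is faked with sine tones, and the unit names and crossfade length are arbitrary choices.

```python
import numpy as np

# Sketch of concatenative synthesis: stored "units" are short waveforms
# indexed by name; output is built by joining the units for a target
# sequence, with a short linear crossfade at each join to soften artefacts.
# The unit names and waveforms below are synthetic stand-ins.

SR = 16000  # sample rate in Hz

def tone(freq, dur=0.1):
    """Stand-in for a recorded unit: a sine tone of the given duration."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2.0 * np.pi * freq * t)

unit_db = {"d-a": tone(220), "a-t": tone(330), "t-a": tone(440)}

def concatenate(units, db, xfade=0.01):
    """Join the named units with a linear crossfade of `xfade` seconds."""
    n_x = int(SR * xfade)
    out = db[units[0]].copy()
    for name in units[1:]:
        nxt = db[name]
        ramp = np.linspace(0.0, 1.0, n_x)
        # fade the tail of what we have into the head of the next unit
        out[-n_x:] = out[-n_x:] * (1.0 - ramp) + nxt[:n_x] * ramp
        out = np.concatenate([out, nxt[n_x:]])
    return out

wave = concatenate(["d-a", "a-t", "t-a"], unit_db)
```

Real systems additionally select among candidate units and smooth pitch and spectrum at the joins; the crossfade here only illustrates why join handling matters for clarity.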
  • Observations on the Dynamic Control of an Articulatory Synthesizer Using Speech Production Data
    Observations on the dynamic control of an articulatory synthesizer using speech production data. Dissertation for the degree of Doctor of Philosophy at the Faculties of Philosophy of Saarland University, submitted by Ingmar Michael Augustus Steiner of San Francisco. Saarbrücken, 2010. Dean: Prof. Dr. Erich Steiner. Reviewers: Prof. Dr. William J. Barry and Prof. Dr. Dietrich Klakow. Submitted 14.12.2009; defended 19.05.2010.
    Contents: Acknowledgments; List of Figures; List of Tables; List of Listings; List of Acronyms; Zusammenfassung und Überblick (German summary and overview); Summary and overview; 1 Speech synthesis methods (1.1 Text-to-speech: formant synthesis, concatenative synthesis, statistical parametric synthesis, articulatory synthesis, physical articulatory synthesis; 1.2 VocalTractLab in depth: vocal tract model, gestural model, acoustic model, speaker adaptation); 2 Speech production data (2.1 Full imaging: X-ray and cineradiography, magnetic resonance imaging, ultrasound tongue imaging)
  • Chapter 2 a Survey on Speech Synthesis Techniques
    Chapter 2: A Survey on Speech Synthesis Techniques*
    Producing synthetic speech segments from natural language text utterances comes with a unique set of challenges and is currently under-serviced due to the unavailability of a generic model for all available languages. This chapter presents a study of the existing speech synthesis techniques along with their major advantages and deficiencies. The classification of the standard speech synthesis techniques is presented in Figure 2.1 (speech synthesis techniques: articulatory, formant, concatenative with unit selection, domain-specific, diphone and syllable-based variants, and statistical). This chapter also discusses the current status of text-to-speech technology for Indian languages, focusing on the issues to be resolved in proposing a generic model for different Indian languages.
    2.1 Articulatory Synthesis: Articulatory synthesis models the natural human speech production process. As a speech synthesis method it is not among the best when the quality of the produced speech is the main criterion; however, it is the most suitable method for studying the speech production process [48]. To understand articulatory speech synthesis, the human speech production process must first be understood. The human speech production organs (the main articulators: the tongue, the jaw and the lips, as well as other important parts of the vocal tract) are shown in Figure 2.2 [49], along with the idealized model that is the basis of almost every acoustical model.
    *S. P. Panda, A. K. Nayak, and S. Patnaik, "Text to Speech Synthesis with an Indian Language Perspective", International Journal of Grid and Utility Computing, Inderscience, Vol. 6, No. 3/4, pp. 170-178, 2015
  • Chapter 8 Speech Synthesis
    PRELIMINARY PROOFS. Unpublished work © 2008 by Pearson Education, Inc. To be published by Pearson Prentice Hall, Pearson Education, Inc., Upper Saddle River, New Jersey. All rights reserved. Permission to use this unpublished work is granted to individuals registering through [email protected] for instructional purposes not exceeding one academic term or semester.
    Chapter 8: Speech Synthesis
    "And computers are getting smarter all the time: Scientists tell us that soon they will be able to talk to us. (By 'they' I mean 'computers': I doubt scientists will ever be able to talk to us.)" - Dave Barry
    In Vienna in 1769, Wolfgang von Kempelen built for the Empress Maria Theresa the famous Mechanical Turk, a chess-playing automaton consisting of a wooden box filled with gears, and a robot mannequin sitting behind the box who played chess by moving pieces with his mechanical arm. The Turk toured Europe and the Americas for decades, defeating Napoleon Bonaparte and even playing Charles Babbage. The Mechanical Turk might have been one of the early successes of artificial intelligence if it were not for the fact that it was, alas, a hoax, powered by a human chess player hidden inside the box. What is perhaps less well known is that von Kempelen, an extraordinarily prolific inventor, also built between 1769 and 1790 what is definitely not a hoax: the first full-sentence speech synthesizer. His device consisted of a bellows to simulate the lungs, a rubber mouthpiece and a nose aperture, a reed to simulate the vocal folds, various whistles for each of the fricatives, and a small auxiliary bellows to provide the puff of air for plosives.
  • Challenges and Perspectives on Real-Time Singing Voice Synthesis S´Intesede Voz Cantada Em Tempo Real: Desafios E Perspectivas
    Revista de Informática Teórica e Aplicada - RITA - ISSN 2175-2745, Vol. 27, No. 04 (2020), pp. 118-126. RESEARCH ARTICLE.
    Challenges and Perspectives on Real-time Singing Voice Synthesis
    Edward David Moreno Ordóñez and Leonardo Araujo Zoehler Brum, Universidade Federal de Sergipe, São Cristóvão, Sergipe, Brasil. Corresponding author: [email protected]. DOI: http://dx.doi.org/10.22456/2175-2745.107292. Received: 30/08/2020; accepted: 29/11/2020. CC BY-NC-ND 4.0 - This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
    Abstract: This paper describes the state of the art of real-time singing voice synthesis and presents its concept, applications and technical aspects. A technological mapping and a literature review are made in order to indicate the latest developments in this area. We make a brief comparative analysis among the selected works. Finally, we discuss challenges and future research problems. Keywords: real-time singing voice synthesis; sound synthesis; TTS; MIDI; computer music.
    1. Introduction: From a computational vision, the aim of singing voice synthesis [...] voice synthesis embedded systems, through the description of its concept, theoretical premises, main used techniques, latest developments and challenges for future research.
  • Robust Structured Voice Extraction for Flexible Expressive Resynthesis
    ROBUST STRUCTURED VOICE EXTRACTION FOR FLEXIBLE EXPRESSIVE RESYNTHESIS
    A dissertation submitted to the Department of Electrical Engineering and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Pamornpol Jinachitra, June 2007. © Copyright by Pamornpol Jinachitra 2007. All rights reserved.
    I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy. - Julius O. Smith, III (Principal Adviser); Robert M. Gray; Jonathan S. Abel. Approved for the University Committee on Graduate Studies.
    Abstract: Parametric representation of audio allows for a reduction in the amount of data needed to represent the sound. If chosen carefully, these parameters can capture the expressiveness of the sound, while reflecting the production mechanism of the sound source, and thus allow for an intuitive control in order to modify the original sound in a desirable way. In order to achieve the desired parametric encoding, algorithms which can robustly identify the model parameters even from noisy recordings are needed. As a result, not only do we get an expressive and flexible coding system, we can also obtain a model-based speech enhancement that reconstructs the speech embedded in noise cleanly and free of the musical noise usually associated with the filter-based approach.
  • Text to Speech Synthesis Introduction
    Text to speech synthesis. David Weenink (Phonetic Sciences, University of Amsterdam), course "Spraakherkenning en -synthese" (speech recognition and synthesis), January 2016.
    Topics: introduction; computer speech; text preprocessing; grapheme-to-phoneme conversion; morphological decomposition; lexical stress and sentence accent; duration; intonation; acoustic realisation; controlling TTS systems: XML standards for speech synthesis.
    Introduction. Uses of speech synthesis by computer: read aloud existing text (news, email, stories); communicate volatile data as speech (weather reports, query results); the computer part of interactive dialogs. The building block is a text-to-speech system that can handle standard text with a Speech Synthesis (XML) markup. The TTS system has to be able to generate acceptable speech from plain text, but can improve the quality using the markup tags.
    Computer speech: generating the sound. Speech synthesizers can be classified by the way they generate speech sounds; this determines the type, and amount, of data that have to be collected. Approaches: articulatory models; rules (formant synthesis); diphone concatenation; unit selection.
    Computer speech: articulatory models. Characteristics (/Er@/); quantitative source-filter model of vocal
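The source-filter view mentioned in the slides above (a glottal source shaped by vocal-tract resonances, as in rule-based formant synthesis) can be sketched with an impulse-train source passed through cascaded two-pole resonators. The formant frequencies and bandwidths below are rough, illustrative guesses for an /a/-like vowel, not values from the slides.

```python
import numpy as np

# Source-filter sketch of formant synthesis: an impulse train at F0 stands in
# for the glottal source, and cascaded two-pole resonators stand in for
# vocal-tract formants. Formant values are illustrative assumptions.

SR = 16000  # sample rate in Hz

def impulse_train(f0, dur):
    """Glottal source stand-in: one impulse per pitch period."""
    src = np.zeros(int(SR * dur))
    src[::int(SR / f0)] = 1.0
    return src

def resonator(x, freq, bw):
    """Two-pole resonator centred on `freq` Hz with bandwidth `bw` Hz."""
    r = np.exp(-np.pi * bw / SR)
    a1 = 2.0 * r * np.cos(2.0 * np.pi * freq / SR)
    a2 = -r * r
    y = np.zeros_like(x)
    y1 = y2 = 0.0
    for i, s in enumerate(x):
        y[i] = s + a1 * y1 + a2 * y2   # y[n] = x[n] + a1*y[n-1] + a2*y[n-2]
        y2, y1 = y1, y[i]
    return y

out = impulse_train(120, 0.05)          # 120 Hz voice source
for f, bw in [(700, 130), (1220, 70), (2600, 160)]:
    out = resonator(out, f, bw)         # cascade the formant filters
```

Changing the (freq, bw) list reshapes the spectral envelope and hence the perceived vowel, while changing f0 changes only the pitch, which is exactly the separation of source and filter the slides describe.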
  • Singing Voice Resynthesis Using Concatenative-Based Techniques
    Singing Voice Resynthesis using Concatenative-based Techniques. Nuno Miguel da Costa Santos Fonseca. Faculty of Engineering, University of Porto, Department of Informatics Engineering, December 2011.
    Dissertation submitted in partial fulfilment of the requirements for the degree of Doctor in Informatics Engineering, supervised by Professor Aníbal João de Sousa Ferreira (Department of Electrical and Computer Engineering) and Professor Ana Paula Cunha da Rocha (Department of Informatics Engineering), Faculty of Engineering, University of Porto.
    This dissertation had the kind support of FCT (Portuguese Foundation for Science and Technology, an agency of the Portuguese Ministry for Science, Technology and Higher Education) under grant SFRH/BD/30300/2006, and has been articulated with research project PTDC/SAU-BEB/104995/2008 (Assistive Real-Time Technology in Singing), whose objectives include the development of interactive technologies helping the teaching and learning of singing.
    Copyright © 2011 by Nuno Miguel C. S. Fonseca. All rights reserved. No parts of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic...
  • Speech Synthesis
    Contents: 1 Introduction (1.1 Quality of a Speech Synthesizer; 1.2 The TTS System); 2 History (2.1 Electronic Devices); 3 Synthesizer Technologies (3.1 Waveform/Spectral Coding; 3.2 Concatenative Synthesis: 3.2.1 Unit Selection Synthesis, 3.2.2 Diphone Synthesis, 3.2.3 Domain-Specific Synthesis; 3.3 Formant Synthesis; 3.4 Articulatory Synthesis; 3.5 HMM-Based Synthesis; 3.6 Sine Wave Synthesis); 4 Challenges (4.1 Text Normalization Challenges: 4.1.1 Homographs, 4.1.2 Numbers and Abbreviations; 4.2 Text-to-Phoneme Challenges; 4.3 Evaluation Challenges); 5 Speech Synthesis in Operating Systems (5.1 Atari; 5.2 Apple; 5.3 AmigaOS; 5.4 Microsoft Windows); 6 Speech Synthesis Markup Languages; 7 Applications (7.1 Contact Centers; 7.2 Assistive Technologies; 7.3 Gaming and Entertainment); 8 References
    © Specialty Answering Service. All rights reserved.
    1 Introduction: The word 'synthesis' is defined by Webster's Dictionary as 'the putting together of parts or elements so as to form a whole'. Speech synthesis generally refers to the artificial generation of the human voice, either in the form of speech or in other forms such as a song. The computer system used for speech synthesis is known as a speech synthesizer. There are several types of speech synthesizers (both hardware-based and software-based) with different underlying technologies.
  • A Score-to-Singing Voice Synthesis System for the Greek Language
    Proceedings of the International Computer Music Conference (ICMC07), Copenhagen, Denmark, 27-31 August 2007.
    A SCORE-TO-SINGING VOICE SYNTHESIS SYSTEM FOR THE GREEK LANGUAGE
    Varvara Kyritsi (Department of Informatics and Telecommunications), Anastasia Georgaki (Department of Music Studies), Georgios Kouroupetroglou (Department of Informatics and Telecommunications), University of Athens, Athens, Greece.
    Abstract: In this paper, we examine the possibility of generating Greek singing voice with the MBROLA synthesizer, by making use of an already existing diphone database. Our goal is to implement a score-to-singing synthesis system, where the score is written in a score editor and saved in the MIDI format. However, MBROLA accepts phonetic files as input rather than MIDI ones, and because of this we construct a MIDI-file to phonetic-file converter, which applies irrespective of the underlying language. The evaluation of the Greek singing voice synthesis system is achieved through a psychoacoustic experiment.
    [15], [11]. Consequently, a SVS system must include control tools for various parameters, such as phone duration, vibrato, pitch, volume etc. Since 1980, various projects have been carried out all over the world, through different optical views, languages and methodologies, focusing on the mystery of the synthetic singing voice [10], [11], [13], [14], [17], [18], [19]. The different optical views extend from the development of a proper technique for the naturalness of the sound quality to the design of the rules that determine the adjunction of phonemes or diphones into phrases and their expression. Special studies on the emotion of the...