Sintesi Vocale Concatenativa per l'Italiano tramite Modello Sinusoidale (Concatenative Speech Synthesis for Italian via a Sinusoidal Model)


UNIVERSITÀ DEGLI STUDI DI PADOVA
FACOLTÀ DI INGEGNERIA
DIPARTIMENTO DI INGEGNERIA DELL'INFORMAZIONE

TESI DI LAUREA
SINTESI VOCALE CONCATENATIVA PER L'ITALIANO TRAMITE MODELLO SINUSOIDALE

Relatore: Ch.mo Prof. Giovanni De Poli
Correlatore: Ing. Carlo Drioli
Laureando: Giacomo Sommavilla (436433/IF)
A.A. 2004/2005, 12 luglio 2005

To my parents and my brother.

Abstract

This section contains a brief summary of the thesis. The thesis project consists in the design of a frequency-domain synthesizer for a Text-To-Speech (TTS) system for the Italian language. The work was carried out at the Dialectology and Phonetics section of the National Research Council (CNR) in Padova.

0.1 What does Text-To-Speech (TTS) mean?

A Text-To-Speech (TTS) synthesizer [7] is a software application capable of reproducing speech through a grapheme-to-phoneme transcription of the sentences to be uttered.¹ The fundamental difference with respect to other systems that play back recorded human speech is that a TTS system is able to read new sentences. The text-to-speech conversion algorithm consists of two fundamental parts:

1. natural language analysis (Natural Language Processing, NLP) and
2. signal processing (Digital Signal Processing, DSP).
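The two-stage split just described can be illustrated with a minimal Python sketch. All function names here are hypothetical, and a real NLP front end does far more than letter mapping; the sketch only shows how the NLP stage hands a phoneme sequence to the DSP stage, which works on pairs of consecutive phonemes (the diphones described next):

```python
def nlp_stage(text):
    """Toy grapheme-to-phoneme step. Italian spelling is close to
    phonetic, so for simple words one letter maps to one phoneme
    (a real NLP front end also handles prosody, numbers, accents)."""
    return [ch for ch in text.lower() if ch.isalpha()]

def diphone_pairs(phonemes):
    """Pairs of consecutive phonemes, padded with silence ("_"):
    the concatenative units the DSP stage would fetch from a
    diphone database."""
    seq = ["_"] + phonemes + ["_"]
    return list(zip(seq, seq[1:]))

print(diphone_pairs(nlp_stage("ava")))
# prints [('_', 'a'), ('a', 'v'), ('v', 'a'), ('a', '_')]
```

Padding with silence at both ends is what makes the number of required diphones grow with the square of the phoneme inventory, which is why the database described below contains on the order of a thousand entries.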
In our case we worked only on the second part, and in particular on the following three stages:

Prosody matching: transforms the prosody parameters (intonation and word durations) computed by the natural-language analysis into the variables that drive the pitch-shifting and time-stretching functions;

Diphone concatenation: joins successive diphones;

Signal synthesis: performs the inverse transform of the signal from the frequency domain back to the time domain.

The basic concatenative units we used for building words are diphones, that is "acoustic segments that include the transition between two consecutive phonemes", of which we have a database² of about 1300 audio files. The number is this high because one diphone is needed for every possible transition from one phoneme to the next.

¹ An alternative definition, less suited to our case, is "a software application capable of reading any written text".

0.2 Frequency-domain synthesis

The software application works on a second database, created from the first one, which stores the frequency-domain parameters of the SMS (Spectral Modeling Synthesis [20]) analysis of each diphone in a corresponding .sdif file. The SDIF [3] (Sound Description Interchange Format) format suits our purposes because it allows lossless compression of the values computed by the SMS analysis, encapsulated in the spectral representation of consecutive audio frames. The most important of these values are three vectors that store the amplitudes, frequencies and phases of the partials. Each frame is also associated with a spectral envelope for the residual part. To develop the analysis and synthesis functions we used the CLAM [4] framework (well suited to the SMS approach), developed by the Music Technology Group (MTG) of the Universitat Pompeu Fabra in Barcelona.
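Given the per-frame SMS data just described (amplitudes, frequencies and phases of the partials), the deterministic part of one frame can be resynthesized by summing one sinusoid per partial. The sketch below is a simplified time-domain illustration with invented names; the actual synthesizer reconstructs the spectrum and applies an IFFT instead:

```python
import numpy as np

def synth_frame(amps, freqs_hz, phases, n_samples, sample_rate):
    """Additive resynthesis of one frame's sinusoidal part:
    one cosine per partial, summed sample by sample."""
    t = np.arange(n_samples) / sample_rate
    frame = np.zeros(n_samples)
    for a, f, p in zip(amps, freqs_hz, phases):
        frame += a * np.cos(2.0 * np.pi * f * t + p)
    return frame

# A 100 Hz partial with its first two harmonics, one 10 ms frame at 16 kHz:
frame = synth_frame([1.0, 0.5, 0.25], [100.0, 200.0, 300.0],
                    [0.0, 0.0, 0.0], 160, 16000)
```

The residual (stochastic) part would be generated separately, for instance by filtering white noise with the per-frame spectral envelope, and added to this deterministic part.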
In our case, the advantages of operating in the frequency domain (compared with time-domain processing), and in particular of using the SMS technique, are:

1. a more versatile and powerful signal representation for the manipulations in question (time, pitch and spectral envelope)³;
2. the possibility of exploiting the results of the SMS analysis in the concatenation of diphones;
3. the distinction between the sinusoidal (deterministic) and the residual (stochastic) part, which allows voiced and unvoiced phonemes to be treated differently.

We developed the application in "console" mode (that is, from the command line, without a graphical interface), so that its functionality can be extended thanks to the modular structure of the source code. For the same reason we also kept external libraries to a minimum; for example, we removed some inefficient dependencies on the XML format normally used within the CLAM framework.

² It is the same diphone database used by the MBROLA [8] program for Italian.
³ The computational complexity is higher, since the reconstruction of the spectrum and an IFFT are involved.

Contents

0.1 Cosa significa Text-To-Speech (TTS)
0.2 Sintesi in frequenza
Index
1 Introduction
  1.1 Thesis structure
  1.2 History of Speech Synthesis
    1.2.1 From Mechanical to Electrical Synthesis
    1.2.2 Development of Electrical Synthesizers
  1.3 A Short Introduction to Text-to-Speech Synthesis
    1.3.1 What is TTS?
    1.3.2 Why develop Automatic Reading?
    1.3.3 A general Text-To-Speech model
    1.3.4 The NLP component
    1.3.5 Prosody generation
    1.3.6 The DSP component
    1.3.7 Speech synthesis
  1.4 Characteristics of our TTS system
    1.4.1 Frequency-domain synthesis
    1.4.2 The .raw and .sdif databases
  1.5 About the realization of the software applications
2 Sinusoidal Modeling Analysis and Synthesis
  2.1 Analysis-Synthesis techniques: a brief history
    2.1.1 VOCODER and VODER
    2.1.2 Phase Vocoder
    2.1.3 PARSHL and the McAulay/Quatieri approach
    2.1.4 Harmonic plus Noise approach to speech synthesis
    2.1.5 Sinusoidal plus residual representation
  2.2 What is the SMS technique?
  2.3 SMS Analysis
    2.3.1 Sine detection
    2.3.2 Pitch detection
    2.3.3 Peaks continuation
  2.4 SMS Transformations
  2.5 SMS Synthesis
3 Diphones Database
  3.1 What is a diphone?
  3.2 The .raw diphones database
    3.2.1 An example
    3.2.2 Preprocessing
  3.3 The .sdif diphone database
    3.3.1 The analyzed SMS database
    3.3.2 What is the SDIF format?
  3.4 Transformations upon the diphones
    3.4.1 An introductory description of CLAM classes
    3.4.2 Pitch Shifting
    3.4.3 Time Stretching
  3.5 Diphones Concatenation
    3.5.1 Scheme of concatenation
4 Speech Synthesis architecture
  4.1 Preamble: compatibility with MBROLA
  4.2 Speech Synthesis structure
  4.3 Processing phonetic data
    4.3.1 .pho file parsing
    4.3.2 Phonemes data structure
    4.3.3 Diphones data structure
  4.4 Prosody Matching
    4.4.1 Calculating and storing time values
    4.4.2 Calculating and storing pitch values
  4.5 Speech Synthesis Main Loop
    4.5.1 The synthesized word "Ava"
5 Tests and Conclusions
  5.1 Cross tests with MBROLA
    5.1.1 Tests Results
  5.2 Some examples of further Spectral Transformations
  5.3 Conclusions
  5.4 Acknowledgements
A Spectral Processing
  A.1 The Fourier Transform
    A.1.1 DCT and FFT properties
  A.2 The Short-time Fourier Transform
  A.3 Analysis windows
    A.3.1 Time versus frequency resolution
    A.3.2 Zero phase windows
    A.3.3 Zero padding
    A.3.4 The Blackman-Harris window
  A.4 Inverse STFT
  A.5 Spectral Representation
B The CLAM Framework
  B.1 What is the CLAM framework?
  B.2 Dynamic Types
  B.3 About Processing in CLAM
    B.3.1 Main CLAM Processing data structures
    B.3.2 Other CLAM Processing classes
    B.3.3 Audio I/O
    B.3.4 External libraries
  B.4 SMS Analysis and Synthesis in CLAM
Bibliografia

Chapter 1

Introduction

1.1 Thesis structure

This thesis is structured in five chapters. This introduction to speech synthesis (and more in particular to text-to-speech) is the first one.
The second chapter explains the mathematical basis of our approach, sinusoidal-plus-residual analysis and synthesis: after a brief introduction to digital spectral processing, the sinusoidal-plus-residual model is presented. Then follows the theory of the Spectral Modeling Synthesis (SMS) approach.

The third chapter begins with some preliminary definitions. Subsequently we describe how we obtained the frequency-domain transformed one. The chapter ends explaining how the operations (transformations