Review of Speech Synthesis Technology


Helsinki University of Technology
Department of Electrical and Communications Engineering

Sami Lemmetty
Review of Speech Synthesis Technology

This Master's Thesis has been submitted for official examination for the degree of Master of Science in Espoo on March 30, 1999.
Supervisor of the Thesis: Professor Matti Karjalainen

HELSINKI UNIVERSITY OF TECHNOLOGY
Abstract of the Master's Thesis
Author: Sami Lemmetty
Name of the Thesis: Review of Speech Synthesis Technology
Date: March 30, 1999
Number of pages: 104
Department: Electrical and Communications Engineering
Professorship: Acoustics and Audio Signal Processing (S-89)
Supervisor: Professor Matti Karjalainen

Synthetic, or artificial, speech has developed steadily during the last decades. In particular, its intelligibility has reached an adequate level for most applications, especially those serving communication-impaired people. The intelligibility of synthetic speech may also be increased considerably with visual information. The objective of this work is to map the current state of speech synthesis technology.

Speech synthesis may be categorized as restricted (messaging) and unrestricted (text-to-speech) synthesis. The former is suitable for announcement and information systems, while the latter is needed, for example, in applications for the visually impaired. The text-to-speech procedure consists of two main phases, usually called high-level and low-level synthesis. In high-level synthesis the input text is converted into a form from which the low-level synthesizer can produce the output speech. The three basic methods for low-level synthesis are formant, concatenative, and articulatory synthesis. Formant synthesis is based on modeling the resonances of the vocal tract and has perhaps been the most commonly used method during the last decades. However, concatenative synthesis, which is based on playing prerecorded samples of natural speech, is becoming more popular. In theory, the most accurate method is articulatory synthesis, which models the human speech production system directly, but it is also the most difficult approach.

Since the quality of synthetic speech is improving steadily, the application field is also expanding rapidly. Synthetic speech may be used to read e-mail and mobile messages, in multimedia applications, or in any kind of human-machine interaction. The evaluation of synthetic speech is an important but difficult issue, because speech quality is a highly multidimensional concept. This has led to a large number of different tests and methods for evaluating different features of speech. Today, speech synthesizers of varying quality are available as several different products for all common languages, including Finnish.

Keywords: speech synthesis, synthesized speech, text-to-speech, TTS, artificial speech, speech synthesizer, audio-visual speech.
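As a rough illustration of the two-phase text-to-speech procedure summarized in the abstract, the following minimal Python sketch separates a high-level front end (text analysis producing a phoneme sequence with prosodic targets) from a low-level back end (waveform generation). All names, durations, and pitch values are illustrative assumptions, not part of any synthesizer described in the thesis.

```python
from dataclasses import dataclass
from typing import List

import numpy as np

SAMPLE_RATE = 16000  # samples per second


@dataclass
class Phoneme:
    symbol: str         # phoneme label, e.g. "a"
    duration_ms: float  # segment duration
    pitch_hz: float     # target fundamental frequency


def high_level(text: str) -> List[Phoneme]:
    """Toy 'high-level synthesis': map each letter to a pseudo-phoneme with
    flat prosody. A real front end would normalize the text, look up
    pronunciations, and predict prosody."""
    return [Phoneme(ch, 80.0, 120.0) for ch in text.lower() if ch.isalpha()]


def low_level(phonemes: List[Phoneme]) -> np.ndarray:
    """Toy 'low-level synthesis': render each phoneme as a short tone at its
    target pitch. A real back end would use formant, concatenative, or
    articulatory synthesis instead."""
    chunks = []
    for p in phonemes:
        n = int(SAMPLE_RATE * p.duration_ms / 1000.0)
        t = np.arange(n) / SAMPLE_RATE
        chunks.append(0.3 * np.sin(2 * np.pi * p.pitch_hz * t))
    return np.concatenate(chunks) if chunks else np.zeros(0)


if __name__ == "__main__":
    samples = low_level(high_level("Hello world"))
    print(len(samples), "samples generated")
```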
TEKNILLINEN KORKEAKOULU
Diplomityön tiivistelmä (Abstract of the Master's Thesis, translated from the Finnish)
Author: Sami Lemmetty
Name of the Thesis: Katsaus puhesynteesiteknologiaan (Review of Speech Synthesis Technology)
Date: March 30, 1999
Number of pages: 104
Department: Electrical and Communications Engineering
Professorship: Acoustics and Audio Signal Processing (S-89)
Supervisor: Professor Matti Karjalainen

Synthetic, i.e. artificially produced, speech has developed rather quickly during the last decades. In particular, its intelligibility has reached a level sufficient for many applications and for the needs of people with communication impairments. The intelligibility of synthetic speech can furthermore be improved considerably by adding visual information (a talking head). The purpose of this work is to map the current state of speech synthesis technology.

Speech synthesis can be divided into restricted-vocabulary and unrestricted-vocabulary synthesis. Restricted-vocabulary synthesis is well suited to various announcement and information systems, whereas applications for the visually impaired, for example, usually require unrestricted-vocabulary synthesis. Unrestricted-vocabulary synthesis can be divided into high-level and low-level synthesis. High-level synthesis takes care of text preprocessing (numbers, abbreviations, etc.) and analysis, and passes on the information needed to control the low-level synthesizer that produces the actual speech signal. There are three basic methods for producing the speech itself. The most common is formant synthesis, in which the resonances of the human vocal tract are modeled. Time-domain synthesis, based on replaying short sound samples taken from natural speech, is also becoming more common. The third alternative is to model the human speech production system directly, which is, however, technically and computationally rather demanding.

As the naturalness of synthetic speech improves, it is being used in an ever wider range of applications, such as various reading devices (e-mail, text messages, etc.), multimedia, or any kind of human-machine interaction. Because speech quality is a highly multifaceted question, evaluating it is also rather difficult and complicated. For this reason there are numerous methods for evaluating the quality and the different properties of synthetic speech. Speech synthesizers of many kinds and quality levels are currently available for all the most common languages, including Finnish.

Keywords: speech synthesis, audio-visual speech synthesis, TTS, artificial speech, synthetic speech.

PREFACE

This thesis has been carried out at Helsinki University of Technology, Laboratory of Acoustics and Audio Signal Processing. The work is part of a larger audio-visual speech synthesis project at HUT and is supported by TEKES, Nokia Research Center, Sonera, the Finnish Federation for the Visually Impaired, and Timehouse Oy. This thesis is based on the research and literature produced by many speech synthesis researchers and research groups around the world during the last decades.

First, I would like to thank my supervisor, Professor Matti Karjalainen, for his instruction and guidance during this work. Without his encouragement, feedback, and motivation this work could not have been done. I also wish to thank Dr. Tech. Unto K. Laine for his comments and discussions concerning my work; Mr. Matti Airas, Mr. Martti Rahkila, and Mr. Tero Tolonen for their support whenever Windows 95 showed some of its concealed and unpredictable features; and the people at the Acoustics Laboratory for giving me a most enjoyable and inspiring working environment. I would also like to thank all the project participants for their feedback and comments during my work.

Espoo, March 30, 1999
Sami Lemmetty

TABLE OF CONTENTS

1. INTRODUCTION
1.1 PROJECT DESCRIPTION
1.2 INTRODUCTION TO SPEECH SYNTHESIS
2. HISTORY AND DEVELOPMENT OF SPEECH SYNTHESIS
2.1 FROM MECHANICAL TO ELECTRICAL SYNTHESIS
2.2 DEVELOPMENT OF ELECTRICAL SYNTHESIZERS
2.3 HISTORY OF FINNISH SPEECH SYNTHESIS
3. PHONETICS AND THEORY OF SPEECH PRODUCTION
3.1 REPRESENTATION AND ANALYSIS OF SPEECH SIGNALS
3.2 SPEECH PRODUCTION
3.3 PHONETICS
3.3.1 English Articulatory Phonetics
3.3.2 Finnish Articulatory Phonetics
4. PROBLEMS IN SPEECH SYNTHESIS
4.1 TEXT-TO-PHONETIC CONVERSION
4.1.1 Text Preprocessing
4.1.2 Pronunciation
4.1.3 Prosody
4.2 PROBLEMS IN LOW LEVEL SYNTHESIS
4.3 LANGUAGE SPECIFIC PROBLEMS AND FEATURES
5. METHODS, TECHNIQUES, AND ALGORITHMS
5.1 ARTICULATORY SYNTHESIS
5.2 FORMANT SYNTHESIS
5.3 CONCATENATIVE SYNTHESIS
5.3.1 PSOLA Methods
5.3.2 Microphonemic Method
5.4 LINEAR PREDICTION BASED METHODS
Recommended publications
  • The Development of Accented English Synthetic Voices
    THE DEVELOPMENT OF ACCENTED ENGLISH SYNTHETIC VOICES by PROMISE TSHEPISO MALATJI DISSERTATION Submitted in fulfilment of the requirements for the degree of MASTER OF SCIENCE in COMPUTER SCIENCE in the FACULTY OF SCIENCE AND AGRICULTURE (School of Mathematical and Computer Sciences) at the UNIVERSITY OF LIMPOPO SUPERVISOR: Mr MJD Manamela CO-SUPERVISOR: Dr TI Modipa 2019 DEDICATION In memory of my grandparents, Cecilia Khumalo and Alfred Mashele, who always believed in me! ii DECLARATION I declare that THE DEVELOPMENT OF ACCENTED ENGLISH SYNTHETIC VOICES is my own work and that all the sources that I have used or quoted have been indicated and acknowledged by means of complete references and that this work has not been submitted before for any other degree at any other institution. ______________________ ___________ Signature Date iii ACKNOWLEDGEMENTS I want to recognise the following people for their individual contributions to this dissertation: • My brother, Mr B.I. Khumalo and the whole family for the unconditional love, support and understanding. • A distinct thank you to both my supervisors, Mr M.J.D. Manamela and Dr T.I. Modipa, for their guidance, motivation, and support. • The Telkom Centre of Excellence for Speech Technology for providing the resources and support to make this study a success. • My colleagues in Department of Computer Science, Messrs V.R. Baloyi and L.M. Kola, for always motivating me. • A special thank you to Mr T.J. Sefara for taking his time to participate in the study. • The six Computer Science undergraduate students who sacrificed their precious time to participate in data collection.
  • Commercial Tools in Speech Synthesis Technology
    International Journal of Research in Engineering, Science and Management, Volume-2, Issue-12, December-2019, p. 320. www.ijresm.com | ISSN (Online): 2581-5792
    Commercial Tools in Speech Synthesis Technology
    D. Nagaraju (Associate Professor, Dept. of Computer Science, Audisankara College of Engg. and Technology, Gudur, India), R. J. Ramasree (Professor, Dept. of Computer Science, Rastriya Sanskrit VidyaPeet, Tirupati, India), K. Kishore, K. Vamsi Krishna, R. Sujana (UG Students, Dept. of Computer Science, Audisankara College of Engg. and Technology, Gudur, India)
    Abstract: This is a study paper planned to a new system emotional speech system for Telugu (ESST). The main objective of this paper is to map the situation of today's speech synthesis technology and to focus on potential methods for the future. Usually literature and articles in the area are focused on a single method or single synthesizer or the very limited range of the technology. In this paper the whole speech synthesis area with as many methods, techniques, applications, and products as possible is under investigation. Unfortunately, this leads to a situation where in some cases very detailed information may not be given here, but may be found in given references.
    ... phonetic and prosodic information. These two phases are usually called as high- and low-level synthesis. The input text might be for example data from a word processor, standard ASCII from e-mail, a mobile text-message, or scanned text from a newspaper. The character string is then preprocessed and analyzed into phonetic representation which is usually a string of phonemes with some additional information for correct intonation, duration, and stress. Speech sound is finally generated with the low-level synthesizer by the information ...
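The preprocessing step described in this excerpt (expanding numbers, abbreviations, and other non-word tokens before letter-to-sound conversion) can be made concrete with a small sketch. The expansion tables and the normalize() function below are illustrative assumptions, not taken from the paper; a real front end needs language-specific rules and a pronunciation lexicon.

```python
import re

# Hypothetical expansion tables; a real front end needs language-specific
# rules and a pronunciation lexicon.
ABBREVIATIONS = {"dr.": "doctor", "mr.": "mister", "etc.": "et cetera"}
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]


def normalize(text: str) -> str:
    """Expand abbreviations and spell out digits so that the later
    letter-to-sound stage only ever sees ordinary words."""
    words = []
    for token in text.lower().split():
        if token in ABBREVIATIONS:
            words.append(ABBREVIATIONS[token])
        elif token.isdigit():
            words.extend(DIGITS[int(d)] for d in token)
        else:
            words.append(re.sub(r"[^\w']", "", token))
    return " ".join(w for w in words if w)


print(normalize("Dr. Smith pays 42 euros etc."))
# -> doctor smith pays four two euros et cetera
```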
  • Gender, Ethnicity, and Identity in Virtual Bands and Vocaloid
    Virtual Pop: Gender, Ethnicity, and Identity in Virtual Bands and Vocaloid Alicia Stark Cardiff University School of Music 2018 Presented in partial fulfilment of the requirements for the degree Doctor of Philosophy in Musicology TABLE OF CONTENTS ABSTRACT i DEDICATION iii ACKNOWLEDGEMENTS iv INTRODUCTION 7 EXISTING STUDIES OF VIRTUAL BANDS 9 RESEARCH QUESTIONS 13 METHODOLOGY 19 THESIS STRUCTURE 30 CHAPTER 1: ‘YOU’VE COME A LONG WAY, BABY:’ THE HISTORY AND TECHNOLOGIES OF VIRTUAL BANDS 36 CATEGORIES OF VIRTUAL BANDS 37 AN ANIMATED ANTHOLOGY – THE RISE IN POPULARITY OF ANIMATION 42 ALVIN AND THE CHIPMUNKS… 44 …AND THEIR SUCCESSORS 49 VIRTUAL BANDS FOR ALL AGES, AVAILABLE ON YOUR TV 54 VIRTUAL BANDS IN OTHER TYPES OF MEDIA 61 CREATING THE VOICE 69 REPRODUCING THE BODY 79 CONCLUSION 86 CHAPTER 2: ‘ALMOST UNREAL:’ TOWARDS A THEORETICAL FRAMEWORK FOR VIRTUAL BANDS 88 DEFINING REALITY AND VIRTUAL REALITY 89 APPLYING THEORIES OF ‘REALNESS’ TO VIRTUAL BANDS 98 UNDERSTANDING MULTIMEDIA 102 APPLYING THEORIES OF MULTIMEDIA TO VIRTUAL BANDS 110 THE VOICE IN VIRTUAL BANDS 114 AGENCY: TRANSFORMATION THROUGH TECHNOLOGY 120 CONCLUSION 133 CHAPTER 3: ‘INSIDE, OUTSIDE, UPSIDE DOWN:’ GENDER AND ETHNICITY IN VIRTUAL BANDS 135 GENDER 136 ETHNICITY 152 CASE STUDIES: DETHKLOK, JOSIE AND THE PUSSYCATS, STUDIO KILLERS 159 CONCLUSION 179 CHAPTER 4: ‘SPITTING OUT THE DEMONS:’ GORILLAZ’ CREATION STORY AND THE CONSTRUCTION OF AUTHENTICITY 181 ACADEMIC DISCOURSE ON GORILLAZ 187 MASCULINITY IN GORILLAZ 191 ETHNICITY IN GORILLAZ 200 GORILLAZ FANDOM 215 CONCLUSION 225
  • Models of Speech Synthesis ROLF CARLSON Department of Speech Communication and Music Acoustics, Royal Institute of Technology, S-100 44 Stockholm, Sweden
    Proc. Natl. Acad. Sci. USA, Vol. 92, pp. 9932-9937, October 1995. Colloquium Paper. This paper was presented at a colloquium entitled "Human-Machine Communication by Voice," organized by Lawrence R. Rabiner, held by the National Academy of Sciences at The Arnold and Mabel Beckman Center in Irvine, CA, February 8-9, 1993.
    Models of speech synthesis. ROLF CARLSON, Department of Speech Communication and Music Acoustics, Royal Institute of Technology, S-100 44 Stockholm, Sweden
    ABSTRACT: The term "speech synthesis" has been used for diverse technical approaches. In this paper, some of the approaches used to generate synthetic speech in a text-to-speech system are reviewed, and some of the basic motivations for choosing one method over another are discussed. It is important to keep in mind, however, that speech synthesis models are needed not just for speech generation but to help us understand how speech is created, or even how articulation can explain language structure. General issues such as the synthesis of different voices, accents, and multiple languages are discussed as special challenges facing the speech synthesis community.
    ... need large amounts of speech data. Models working close to waveform are now typically making use of increased unit sizes while still modeling prosody by rule. In the middle of the scale, "formant synthesis" is moving toward the articulatory models by looking for "higher-level parameters" or to larger prestored units. Articulatory synthesis, hampered by lack of data, still has some way to go but is yielding improved quality, due mostly to advanced analysis-synthesis techniques.
    Flexibility and Technical Dimensions
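Since this colloquium paper discusses formant synthesis, a minimal cascade-resonator sketch may help make the idea concrete. The two-pole resonator coefficients follow the standard textbook design; the sampling rate, formant frequencies, bandwidths, and function names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

FS = 16000  # sampling rate in Hz


def resonator(x, freq, bw):
    """Two-pole digital resonator, the basic building block of a cascade
    formant synthesizer; `freq` is the centre frequency and `bw` the
    bandwidth, both in Hz."""
    T = 1.0 / FS
    c = -np.exp(-2 * np.pi * bw * T)
    b = 2 * np.exp(-np.pi * bw * T) * np.cos(2 * np.pi * freq * T)
    a = 1.0 - b - c
    y = np.zeros_like(x)
    for n in range(len(x)):
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y[n] = a * x[n] + b * y1 + c * y2
    return y


def static_vowel(f0=110.0, formants=((730, 90), (1090, 110), (2440, 170)), dur=0.4):
    """Crude static vowel: an impulse train at f0 passed through a cascade of
    resonators; the default formant values loosely resemble an /a/."""
    n = int(FS * dur)
    source = np.zeros(n)
    source[::int(FS / f0)] = 1.0        # glottal impulse train
    out = source
    for freq, bw in formants:
        out = resonator(out, freq, bw)  # cascade connection
    return out / np.max(np.abs(out))


print(len(static_vowel()), "samples")
```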
  • Masterarbeit (Master's Thesis)
    Master's thesis (Masterarbeit): Creation of a speech database and of a program for its analysis, in the context of speech synthesis with spectral models. Submitted for the academic degree Master of Science to the Department of Mathematics, Natural Sciences and Computer Science of the Technische Hochschule Mittelhessen by Tobias Platen, August 2014. First examiner: Prof. Dr. Erdmuthe Meyer zu Bexten. Second examiner: Prof. Dr. Keywan Sohrabi.
    Statutory declaration: I hereby affirm that I have produced this work independently and used no literature or aids other than those indicated. The work has not previously been submitted in the same or a similar form to any other examination authority and has not been published.
    Table of contents:
    1 Introduction
    1.1 Motivation
    1.2 Goals
    1.3 Historical speech synthesizers
    1.3.1 The Speaking Machine
    1.3.2 The Vocoder and the Voder
    1.3.3 Linear Predictive Coding
    1.4 Modern speech synthesis algorithms
    1.4.1 Formant synthesis
    1.4.2 Concatenative synthesis
    2 Spectral models for speech synthesis
    2.1 Convolution, Fourier transform, and vocoder
    2.2 Phase vocoder
    2.3 Spectral Model Synthesis
    2.3.1 Harmonic Trajectories
    2.3.2 Shape Invariance
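The table of contents above points to spectral models (phase vocoder, harmonic trajectories). Purely as a loose illustration of the harmonic part of such a model, the sketch below sums a few partials over a fixed fundamental; the function name, amplitudes, and fundamental frequency are assumptions, and a real spectral model would track time-varying amplitudes, frequencies, and a noise residual.

```python
import numpy as np

FS = 16000  # sampling rate in Hz


def harmonic_synthesis(f0, amps, dur):
    """Sum harmonic partials k*f0 with the given amplitudes and apply short
    fade-in/fade-out ramps so the trajectories start and end smoothly."""
    t = np.arange(int(FS * dur)) / FS
    env = np.minimum(1.0, np.minimum(t, dur - t) / 0.05)  # 50 ms ramps
    signal = np.zeros_like(t)
    for k, a in enumerate(amps, start=1):
        signal += a * np.sin(2 * np.pi * k * f0 * t)
    return env * signal / max(1e-9, np.max(np.abs(signal)))


samples = harmonic_synthesis(f0=120.0, amps=[1.0, 0.5, 0.3, 0.2, 0.1], dur=0.5)
print(len(samples), "samples")
```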
  • UN SYNTHÉTISEUR DE LA VOIX CHANTÉE BASÉ SUR MBROLA POUR LE MANDARIN Liu Ning
    Liu Ning. UN SYNTHÉTISEUR DE LA VOIX CHANTÉE BASÉ SUR MBROLA POUR LE MANDARIN. Journées d'Informatique Musicale, 2012, Mons, France. HAL Id: hal-03041805, https://hal.archives-ouvertes.fr/hal-03041805, submitted on 5 Dec 2020.
    Actes des Journées d'Informatique Musicale (JIM 2012), Mons, Belgium, 9-11 May 2012.
    Liu Ning, CICM (Centre de recherche en Informatique et Création Musicale), Université Paris VIII, MSH Paris Nord, [email protected]
    Abstract (translated from the French): In this article we present the project of developing a singing voice synthesizer for Mandarin Chinese based on MBROLA. Our objective is to develop a synthesizer that can work in real time and that is capable of ...
    ... as a function of the melody and the lyrics taken from a MIDI file [7]. Nevertheless, singing voice synthesis for the Chinese language still has technical limitations. For example, a system driven by reading a MIDI file works in deferred time rather than in real time.
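As a rough sketch of how a melody and lyrics can drive MBROLA, the snippet below writes a .pho file in which each line carries a phoneme label, a duration in milliseconds, and (position, pitch) pairs that hold a note's pitch across the segment. The phoneme labels, note values, durations, and the voice database implied in the final comment are assumptions for illustration only; a real system, as the paper describes, would derive them from a MIDI file and the lyrics, and the usable phoneme set depends on the MBROLA voice.

```python
def midi_to_hz(note: int) -> float:
    """Convert a MIDI note number to its fundamental frequency in Hz."""
    return 440.0 * 2 ** ((note - 69) / 12.0)


def note_to_pho(phonemes, midi_note, dur_ms):
    """Emit MBROLA .pho lines for one sung syllable: phoneme label, duration
    in milliseconds, then (position %, pitch Hz) pairs holding the note's
    pitch across the whole segment."""
    pitch = round(midi_to_hz(midi_note), 1)
    share = dur_ms // max(1, len(phonemes))
    return [f"{p} {share} 0 {pitch} 100 {pitch}" for p in phonemes]


# A toy two-syllable melody; the phoneme labels and note values are made up.
score = [(["n", "i"], 67, 400), (["h", "a", "o"], 64, 600)]
lines = []
for phonemes, note, dur in score:
    lines.extend(note_to_pho(phonemes, note, dur))
lines.append("_ 200")  # trailing silence

with open("song.pho", "w") as f:
    f.write("\n".join(lines) + "\n")
# Rendering would then look something like: mbrola <mandarin-voice.db> song.pho song.wav
```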
  • eSpeak: Speech Synthesis
    Software Requirements Specification for eSpeak: Speech Synthesis, Version 1.48.15. Prepared by Dimitrios Koufounakis, January 10, 2018. Copyright © 2002 by Karl E. Wiegers. Permission is granted to use, modify, and distribute this document.
    Table of Contents
    Revision History
    1. Introduction
    1.1 Purpose
    1.2 Document Conventions
    1.3 Intended Audience and Reading Suggestions
    1.4 Project Scope
    1.5 References
    2. Overall Description
  • Fully Generated Scripted Dialogue for Embodied Agents
    Fully Generated Scripted Dialogue for Embodied Agents. Kees van Deemter 1, Brigitte Krenn 2, Paul Piwek 3, Martin Klesen 4, Marc Schröder 5, and Stefan Baumann 6
    Abstract: This paper presents the NECA approach to the generation of dialogues between Embodied Conversational Agents (ECAs). This approach consists of the automated construction of an abstract script for an entire dialogue (cast in terms of dialogue acts), which is incrementally enhanced by a series of modules and finally "performed" by means of text, speech and body language, by a cast of ECAs. The approach makes it possible to automatically produce a large variety of highly expressive dialogues, some of whose essential properties are under the control of a user. The paper discusses the advantages and disadvantages of NECA's approach to Fully Generated Scripted Dialogue (FGSD), and explains the main techniques used in the two demonstrators that were built. The paper can be read as a survey of issues and techniques in the construction of ECAs, focussing on the generation of behaviour (i.e., focussing on information presentation) rather than on interpretation.
    Keywords: Embodied Conversational Agents, Fully Generated Scripted Dialogue, Multimodal Interfaces, Emotion modelling, Affective Reasoning, Natural Language Generation, Speech Synthesis, Body Language.
    1. Introduction: A number of scientific disciplines have started, in the last decade or so, to join forces to build Embodied Conversational Agents (ECAs): software agents with a human-like synthetic voice and ... 1 University of Aberdeen, Scotland, UK (Corresponding author, email [email protected]) 2 Austrian Research Center for Artificial Intelligence (OEFAI), University of Vienna, Austria 3 Centre for Research in Computing, The Open University, UK 4 German Research Center for Artificial Intelligence (DFKI), Saarbruecken, Germany 5 German Research Center for Artificial Intelligence (DFKI), Saarbruecken, Germany 6 University of Koeln, Germany.
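The abstract above describes a dialogue script, cast in terms of dialogue acts, that is enhanced incrementally by a series of modules. The sketch below shows one way such a script could be represented and passed through stand-in modules; the class, its fields, and the module functions are illustrative assumptions, not the NECA representation.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DialogueAct:
    """One abstract act of the generated script; the fields are illustrative,
    not the NECA representation itself."""
    speaker: str
    act_type: str                 # e.g. "greet", "inform", "disagree"
    content: str                  # propositional content / topic
    annotations: Dict[str, str] = field(default_factory=dict)


def add_text(script: List[DialogueAct]) -> None:
    """Stand-in for a realization module: attach a surface sentence."""
    for act in script:
        act.annotations["text"] = f"({act.act_type}) {act.content}"


def add_emotion(script: List[DialogueAct]) -> None:
    """Stand-in for an affective-reasoning module."""
    for act in script:
        act.annotations["emotion"] = "negative" if act.act_type == "disagree" else "neutral"


script = [
    DialogueAct("Agent_A", "greet", "hello"),
    DialogueAct("Agent_B", "disagree", "the price is too high"),
]
# The abstract script is enhanced incrementally by a pipeline of modules and
# would finally be "performed" with speech and body language.
for module in (add_text, add_emotion):
    module(script)
for act in script:
    print(act.speaker, act.annotations)
```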
  • Text-To-Speech Synthesis System for Marathi Language Using Concatenation Technique
    Text-To-Speech Synthesis System for Marathi Language Using Concatenation Technique. Submitted to Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, INDIA, for the award of Doctor of Philosophy, by Mr. Sangramsing Nathsing Kayte. Supervised by: Dr. Bharti W. Gawali, Professor, Department of Computer Science and Information Technology, DR. BABASAHEB AMBEDKAR MARATHWADA UNIVERSITY, AURANGABAD. November 2018
    Abstract: A speech synthesis system is a computer-based system that should be able to read any text aloud in a particular language or in multiple languages. This is also called Text-to-Speech synthesis, or in short TTS. Communication plays an important role in everyone's life. Usually communication refers to speaking or writing or sending a message to another person. Speech is one of the most important ways for human communication. There have been a great number of efforts to incorporate speech for communication between humans and computers. Demand for technologies in speech processing, such as speech recognition, dialogue processing, natural language processing, and speech synthesis, is increasing. These technologies are useful for human-to-human applications like spoken language translation systems and for human-to-machine communication like control aids for handicapped persons. TTS is one of the key technologies in speech processing.
    A TTS system converts ordinary orthographic text into an acoustic signal, which is indistinguishable from human speech. For developing a natural human-machine interface, the TTS system can be used as a way to communicate back, like a human voice, by a computer. The TTS can be a voice for those people who cannot speak. People wishing to learn a new language can use the TTS system to learn the pronunciation.
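The dissertation's title names the concatenation technique without detailing it in this excerpt. As a minimal, hedged sketch of the core operation, the code below joins short waveform units with a linear crossfade at each boundary, using synthetic tones in place of recorded Marathi units; unit selection, prosody modification, and the actual speech database are omitted, and all names are illustrative.

```python
import numpy as np

FS = 16000  # sampling rate in Hz


def concatenate_units(units, overlap_ms=10.0):
    """Join prerecorded unit waveforms with a short linear crossfade at each
    boundary, the basic join operation behind concatenative synthesis."""
    overlap = int(FS * overlap_ms / 1000.0)
    fade = np.linspace(0.0, 1.0, overlap)
    out = units[0].copy()
    for unit in units[1:]:
        out[-overlap:] = out[-overlap:] * (1.0 - fade) + unit[:overlap] * fade
        out = np.concatenate([out, unit[overlap:]])
    return out


def tone(freq, dur=0.15):
    """Toy stand-in for a recorded speech unit."""
    t = np.arange(int(FS * dur)) / FS
    return 0.3 * np.sin(2 * np.pi * freq * t)


speech = concatenate_units([tone(200), tone(250), tone(220)])
print(len(speech), "samples")
```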
  • Wormed Voice Workshop Presentation
    Wormed Voice Workshop Presentation micro_research December 27, 2017 1 some worm poetry and songs: The WORM was for a long time desirous to speake, but the rule and order of the Court enjoyned him silence, but now strutting and swelling, and impatient, of further delay, he broke out thus... [Michael Maier] He worshipped the worm and prayed to the wormy grave. Serpent Lucifer, how do you do? Of your worms and your snakes I'd be one or two; For in this dear planet of wool and of leather `Tis pleasant to need neither shirt, sleeve, nor shoe, 2 And have arm, leg, and belly together. Then aches your head, or are you lazy? Sing, `Round your neck your belly wrap, Tail-a-top, and make your cap Any bee and daisy. Two pigs' feet, two mens' feet, and two of a hen; Devil-winged; dragon-bellied; grave- jawed, because grass Is a beard that's soon shaved, and grows seldom again worm writing the the the the,eeeronencoug,en sthistit, d.).dupi w m,tinsprsool itav f bometaisp- pav wheaigelic..)a?? orerdi mise we ich'roo bish ftroo htothuloul mespowouklain- duteavshi wn,jis, sownol hof." m,tisorora angsthyedust,es, fofald,junss ownoug brad,)fr m fr,aA?a????ck;A?stelav aly, al is.'rady'lfrdil owoncorara wns t.) sh'r, oof ofr,a? ar,a???????a? fu mo towess,eethen hrtolly-l,."tigolav ict,a???!ol, w..'m,elyelil,tstreamas..n gotaillas.tansstheatsea f mb ispot inici t.) owar.**1 wnshigigholoothtith orsir.tsotic.'m, sotamimoledug imootrdeavet..t,) sh s,tranciror."wn sieee h asinied.tiear wspilotor,) bla av.nicord,ier.dy'et.*tite m.)..*d, hrouceto hie, ig il m, bsomoug,.t.'l,t, olitel bs,.nt,.dotr tat,)aa? htotitedont,j alesil, starar,ja taie ass.nishiceroouldseal fotitoonckysil, m oitispl o anteeeaicowousomirot.
  • Design and Implementation of Text to Speech Conversion for Visually Impaired People
    International Journal of Applied Information Systems (IJAIS), ISSN: 2249-0868, Foundation of Computer Science FCS, New York, USA, Volume 7, No. 2, April 2014, www.ijais.org
    Design and Implementation of Text To Speech Conversion for Visually Impaired People
    Itunuoluwa Isewon*, Jelili Oyelade, Olufunke Oladipupo; Department of Computer and Information Sciences, Covenant University, PMB 1023, Ota, Nigeria (* Corresponding Author)
    ABSTRACT: A Text-to-speech synthesizer is an application that converts text into spoken word, by analyzing and processing the text using Natural Language Processing (NLP) and then using Digital Signal Processing (DSP) technology to convert this processed text into synthesized speech representation of the text. Here, we developed a useful text-to-speech synthesizer in the form of a simple application that converts inputted text into synthesized speech and reads it out to the user, which can then be saved as an mp3 file. The development of a text-to-speech synthesizer will be of great help to people with visual impairment and make going through large volumes of text easier.
    Keywords: Text-to-speech synthesis, Natural Language Processing, Digital Signal Processing
    Figure 1: A simple but general functional diagram of a TTS system. [2]
    2. OVERVIEW OF SPEECH SYNTHESIS: Speech synthesis can be described as artificial production of human speech [3]. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware.
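The paper describes its own NLP-plus-DSP application rather than an off-the-shelf engine. Purely as a hedged illustration of the same text-in, audio-out interface, the sketch below hands text to the separate open-source eSpeak command-line synthesizer (assumed to be installed on the system) and writes a WAV file; converting that file to mp3, as the paper mentions, would need an additional encoding step.

```python
import subprocess


def speak_to_file(text: str, wav_path: str, voice: str = "en", wpm: int = 150) -> None:
    """Hand the text to an installed eSpeak binary and write the synthesized
    speech to a WAV file."""
    subprocess.run(
        ["espeak", "-v", voice, "-s", str(wpm), "-w", wav_path, text],
        check=True,
    )


if __name__ == "__main__":
    speak_to_file("Text to speech helps visually impaired users.", "out.wav")
```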
  • Feasibility Study on a Text-To-Speech Synthesizer for Embedded Systems
    2006:113 CIV MASTER'S THESIS: Feasibility Study on a Text-To-Speech Synthesizer for Embedded Systems. Linnea Hammarstedt. Luleå University of Technology, MSc Programmes in Engineering, Electrical Engineering, Department of Computer Science and Electrical Engineering, Division of Signal Processing. 2006:113 CIV - ISSN: 1402-1617 - ISRN: LTU-EX--06/113--SE
    Preface: This is a master degree project commissioned by and performed at Teleca Systems GmbH in Nürnberg at the department of Speech Technology. Teleca is an IT services company focused on developing and integrating advanced software and information technology solutions. Today Teleca possesses a speech recognition system including a grapheme-to-phoneme module, i.e., an algorithm converting text into phonetic notation. Their future objective is to develop a Text-To-Speech system including this module. The purpose of this work from Teleca's point of view is to investigate a possible solution for converting phonetic notation into speech suitable for an embedded implementation platform. I would like to thank Dr. Andreas Kiessling at Teleca for his support and patient discussions during this work, and Dr. Stefan Dobler, the head of department, for giving me the possibility to experience this interesting field of speech technology. Finally, I wish to thank all the other personnel of the department for their consistently practical support.
    Abstract: A system converting textual information into speech is usually denoted as a TTS (Text-To-Speech) system. The design of this system varies depending on its purpose and platform requirements. In this thesis a TTS synthesizer designed for an embedded system operating on an arbitrary vocabulary has been evaluated and partially implemented in Matlab, constituting a base for further development.
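The grapheme-to-phoneme module mentioned here converts text into phonetic notation before any waveform is produced. A toy sketch of that idea follows, using an exception lexicon with a naive letter-to-sound fallback; the lexicon entries, phoneme symbols, and function name are illustrative assumptions, not Teleca's module.

```python
# Exception lexicon first, then a naive letter-to-sound fallback. The entries
# and phoneme symbols are illustrative; a real module uses large lexica and
# trained or hand-written rules.
LEXICON = {
    "speech": ["s", "p", "ii", "ch"],
    "the": ["dh", "ax"],
}
LETTER_TO_SOUND = {
    "a": "ae", "b": "b", "c": "k", "d": "d", "e": "eh", "f": "f",
    "g": "g", "h": "h", "i": "ih", "j": "jh", "k": "k", "l": "l",
    "m": "m", "n": "n", "o": "ao", "p": "p", "q": "k", "r": "r",
    "s": "s", "t": "t", "u": "ah", "v": "v", "w": "w", "x": "k",
    "y": "y", "z": "z",
}


def grapheme_to_phoneme(word):
    """Return a phoneme list for one word: lexicon lookup with a crude
    letter-by-letter fallback for unknown words."""
    word = word.lower()
    if word in LEXICON:
        return LEXICON[word]
    return [LETTER_TO_SOUND[ch] for ch in word if ch in LETTER_TO_SOUND]


for w in ("the", "speech", "embedded"):
    print(w, "->", " ".join(grapheme_to_phoneme(w)))
```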