Lecture Notes


Presentations
Work in pairs in 6-minute mini-interviews (3 minutes each). Ask questions around the topics:
• What is your previous experience of speech synthesis?
• Why did you decide to take this course?
• What do you expect to learn?
Write down the answers of your partner, present them during the presentation round, and submit them to me.
Why? To let me know more about your background and expectations, so that the course content can be adapted; to get to know each other; and to ”start you up”…

Olov Engwall, Speech synthesis, 2008

The course
This is what the course book will look like… Until then, refer to http://svr-www.eng.cam.ac.uk/~pat40/book.html
Course pages: www.speech.kth.se/courses/GSLT_SS
Lecture content (impossible to cover the entire book):
1) History, concatenative synthesis, unit selection, HMM synthesis, text issues, prosody
2) Vocal tract models, formant synthesis, evaluation
3) Term paper presentations, assignment correction
To do until next time:
1) Assignment 1: unit selection calculations
2) Term paper topic selection

Definition & Main scope
The automatic generation of synthesized sound or visual output from any phonetic string.

Synthesis approaches
• By concatenation: elementary speech units are stored in a database and then concatenated and processed to produce the speech signal.
• By rule: speech is produced by mathematical rules that describe the influence of phonemes on one another.

History

van Kempelen
Wolfgang von Kempelen’s book Mechanismus der menschlichen Sprache nebst Beschreibung einer sprechenden Maschine (1791). The essential parts:
• pressure chamber = lungs,
• a vibrating reed = vocal cords,
• a leather tube = vocal tract.

Wheatstone’s version
Charles Wheatstone’s version of von Kempelen’s speaking machine.
The machine was
• hand operated,
• could produce whole words and short phrases.
Why is it of interest to us? Parametric features!

First electronic synthesis
• Homer Dudley presented VODER (Voice Operating Demonstrator) at the World Fair in New York in 1939.
• The device was played like a musical instrument, with the voicing/noise source on a foot pedal and the signal routed through ten bandpass filters.

First formant synthesizers
1950s: PAT (Parametric Artificial Talker), Walter Lawrence
• 3 electronic formant resonators, input signal (noise)
• 6 functions to control 3 formant frequencies, voicing, amplitude, fundamental frequency, and noise amplitude.
1950s: OVE (Orator Verbis Electris) by Gunnar Fant.
From the 1950s: other synthesizers, including the first articulatory synthesis, DAVO (Dynamic Analog of the Vocal tract).
An excellent historical trip of speech synthesis: Dennis Klatt’s History of Speech Synthesis at http://www.cs.indiana.edu/rhythmsp/ASA/Contents.html

Let us take a look at OVE
• OVE I (1953)
• On your computer today, and the original next time + OVE II (1962)

OVE Instructions
http://www.speech.kth.se/courses/GSLT_SS/ove.html
1. Test how the five different source models change the output. What is the difference in the formant pattern between different sources? Look at the number of formants, the peak amplitude, the bandwidth.
2. Alter a) the frequency and b) the shape of the source signal. What happens with the formant frequencies in the two cases? Relate these changes to human speech production.
3. Change the frequency values F1–F4. Start with a neutral vowel (F1=500 Hz, F2=1500 Hz, F3=2500 Hz, F4=3500 Hz). Explain the attenuation in formant peak amplitude for higher frequencies (hint: try a rectangle source and change the shape to 99). Now move one of the formant peaks so that it is about 200 Hz from the closest peak.
What happens with the neighbouring peak? Change the bandwidth of the formants. What is the relation between the bandwidth and the formant peak amplitude?
4. Move the cursor around in the vowel space and see how the shape of the output waveform (green curve in the bottom panel) changes. If you have time, try to generate the sentences "How are you?" and "I love you!".

Formant amplitudes
[figure]

Speech analysis & manipulation

Why signal processing?
• Need to separate the source from the filter for modelling (linear predictive analysis)
• Need to model the sound source (prosody, speaker characteristics)
• Need to alter speech units in concatenative synthesis (amplitude, cepstrums)
• Need to make concatenations smooth in concatenative synthesis (PSOLA)

The source-filter theory
The signal (c–d) is the result of a linear filter (b) excited by one or several sources (a):
source (glottis) → filter (vocal tract) → radiation (lips)

Source functions
• The voiced quasi-periodic source (glottis pulses) – vowels. Parameters: on/off, fundamental frequency F0, intensity, shape.
• Frication source – fricatives
• Transient noise – plosives
• No source – voiceless occlusions
More to come on the vocal tract filter in lecture 7.
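The source-filter chain can be sketched directly in code. The following is a minimal illustration (not code from the course): an impulse train stands in for the glottal pulses and is passed through a cascade of two-pole resonators acting as formant filters. The formant frequencies follow the neutral vowel used in the OVE exercise (F1=500, F2=1500, F3=2500 Hz); the bandwidth values are my own illustrative choices.

```python
import math

def resonator(x, f, bw, fs):
    """Two-pole digital resonance at f Hz with bandwidth bw Hz (Klatt-style).
    Difference equation: y[n] = a*x[n] + b*y[n-1] + c*y[n-2]."""
    c = -math.exp(-2.0 * math.pi * bw / fs)
    b = 2.0 * math.exp(-math.pi * bw / fs) * math.cos(2.0 * math.pi * f / fs)
    a = 1.0 - b - c                      # normalize so the gain at 0 Hz is 1
    y = [0.0, 0.0]
    for s in x:
        y.append(a * s + b * y[-1] + c * y[-2])
    return y[2:]

def vowel(f0=100.0, formants=((500.0, 60.0), (1500.0, 90.0), (2500.0, 120.0)),
          fs=16000, dur=0.1):
    """Quasi-periodic source through formant resonators connected in cascade."""
    n = int(fs * dur)
    period = int(fs / f0)
    # Impulse train as a stand-in for the glottal pulses (period T0 = 1/F0):
    out = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    for f, bw in formants:   # cascade: each resonator filters the previous output
        out = resonator(out, f, bw, fs)
    return out

samples = vowel()
```

Changing f0 alters the pitch while the formant resonances, and hence the vowel quality, stay put: exactly the source-filter separation described above.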
Voice source types
• Modal – articulatory: normal, efficient; complete glottis closures. Acoustic: standard source; steep spectral slope.
• Breathy – articulatory: lack of tension; the vocal folds never close completely; slow "glottal return". Acoustic: audible aspiration; glottal pulse symmetry; higher F0 intensity.
• Whispery – articulatory: low glottal tension; triangular glottal opening; high medial compression; medium longitudinal tension. Acoustic: high aspiration levels; greater pulse asymmetry; less time in the open state.
• Creaky – articulatory: high adductive tension and medial compression; little longitudinal tension. Acoustic: very low F0; irregular F0 & amplitude.

The quasi-periodic source
Why is there a damping slope in the transfer function? The voiced source is quasi-periodic with period T0, so its spectrum is a series of harmonics at f = n·F0 = n/T0.

Simple vowel synthesis
Source → filter → waveform: a triangle source and formant filters (F1–F4) in cascade, i.e. bandpass filters specified by frequency, bandwidth and level.
So, how do we find the source from a speech signal?

Linear Prediction (LP)
A method to separate the source from the filter. It predicts the next sample as a linear combination of the past p samples:

x̃[n] = Σ_{k=1}^{p} a_k x[n−k]

• The coefficients a1…ap describe a filter that is the inverse of the transfer function.
• Minimization of the prediction error results in an all-pole filter which matches the signal spectrum.
• This inverse filter removes the formants and can hence be used to find the source.

Spectral Fourier analysis
A Fourier transform of the filter coefficients a1…ap gives the frequency response of the inverse filter.

Fourier Transforms
• Fourier transform (FT): a non-periodic signal has a continuous frequency spectrum.
• A periodic waveform can be described as a sum of harmonics: sine waves with different phases, amplitudes and frequencies, at multiples of the fundamental frequency.
• Discrete FT (DFT): the Fourier transform of a sampled signal, with N samples in both the time and frequency domain.
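The linear-prediction equation above can be solved from the signal's autocorrelation with the Levinson-Durbin recursion. Below is a minimal sketch (my own illustrative implementation, not course code): it synthesizes the impulse response of a known all-pole "vocal tract" and recovers the prediction coefficients from the signal alone.

```python
def autocorr(x, p):
    """Autocorrelation lags r[0..p] of signal x."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(p + 1)]

def lp_coeffs(x, p):
    """Prediction coefficients a_1..a_p that minimize the prediction error
    (autocorrelation method, Levinson-Durbin recursion)."""
    r = autocorr(x, p)
    a = [1.0] + [0.0] * p        # inverse filter A(z) = 1 + a[1]z^-1 + ...
    e = r[0]                     # prediction error energy
    for i in range(1, p + 1):
        k = -(r[i] + sum(a[j] * r[i - j] for j in range(1, i))) / e
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        e *= 1.0 - k * k
    return [-c for c in a[1:]]   # so that x~[n] = sum_k a_k x[n-k]

# All-pole system with known coefficients, excited by a single impulse:
true_a = [1.3, -0.64]            # x[n] = 1.3 x[n-1] - 0.64 x[n-2] + impulse
x = [1.0]
for n in range(1, 4000):
    x.append(true_a[0] * x[-1] + true_a[1] * x[-2] if n >= 2
             else true_a[0] * x[-1])

est = lp_coeffs(x, 2)            # recovers approximately [1.3, -0.64]
```

A Fourier transform of the resulting inverse filter would show the resonances it cancels, which is how LP separates the formant filter from the source.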
Fourier Transforms (continued)
• A periodic signal has a discrete spectrum.
• For a real signal, the spectrum is mirrored around half the sampling frequency.
• Fast FT (FFT): a clever algorithm to calculate the DFT. It reduces the number of multiplications from ~N² (direct DFT) to ~(N/2)·log₂N.

Windowing
• The analysis of a long speech signal is made on short frames: an analysis window of 10–50 ms (20 ms in the example).
• Truncating the signal results in artefacts (sidelobes).
• The artefacts are reduced if the signal is multiplied by a window that gives less weight to the sides.

Effect spectrum
• The FFT gives complex values: amplitude and phase for each frequency component.
• The phase is often not interesting, only the signal’s energy at different frequencies.
• The effect (power) spectrum shows the power for a short section of the signal: FFT → square → logarithm.

Cepstrum Analysis
• The dominating method for ASR, also used in HMM synthesis.
• The inverse Fourier transform of the logarithmic frequency spectrum: "spectral analysis of the spectrum".
• The coarse structure of the spectrum is described with a small number of parameters.
• The coefficients are orthogonal (uncorrelated).
• Anagrams: spectrum–cepstrum, filtering–liftering, frequency–quefrency, phase–saphe.

Cepstrum from filterbank

C_j = √(2/N) · Σ_{i=1}^{N} A_i cos(jπ(i−0.5)/N)

[Figure: spectra of /a:/ and /s/, cosine weight functions W1–W4, and the resulting cepstrum coefficients C1–C4.]

Mel Frequency Cepstral Coefficients (MFCC)
FFT → mel filter bank → cepstrum transform.
The mel filter bank is linear below 1000 Hz and logarithmic above, up to ~6000 Hz.
[Figure: mel spectrum and mel cepstrum of /a:/.]
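The DFT, its mirrored spectrum, and the effect of windowing can all be checked numerically. Below is a naive O(N²) DFT written straight from the definition (illustration only; an FFT computes the same values far faster). All signal parameters are arbitrary example values.

```python
import cmath
import math

def dft(x):
    """Naive DFT from the definition: X[k] = sum_n x[n] * exp(-i*2*pi*k*n/N)."""
    big_n = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / big_n)
                for n in range(big_n)) for k in range(big_n)]

N = 64
# Exactly 8 periods in the analysis window: a discrete spectrum, one line.
x = [math.sin(2.0 * math.pi * 8 * n / N) for n in range(N)]
mag = [abs(c) for c in dft(x)]
peak = max(range(N // 2), key=lambda k: mag[k])
# peak is bin 8 with |X[8]| = N/2; for this real signal the spectrum is
# mirrored, so bin N-8 carries the same energy.

# 8.5 periods falls between bins: truncation leaks energy into sidelobes.
x2 = [math.sin(2.0 * math.pi * 8.5 * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2.0 * math.pi * n / N) for n in range(N)]
leak_rect = abs(dft(x2)[20])                               # rectangular window
leak_hann = abs(dft([s * w for s, w in zip(x2, hann)])[20])  # Hann window
# leak_hann is far smaller: a window that de-emphasizes the frame edges
# suppresses the sidelobe artefacts.
```

This is the same reason the analysis frames in the windowing section above are tapered before the FFT.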
The Mel scale is perceptually motivated.

Nothing new under the sun…
• Peterson et al. (1958)
• Dixon and Maxey (1968)
• "Diadic Units" (Olive, 1977)

Let’s get the terms straight

Concatenative synthesis
Definition: all kinds of synthesis based on the concatenation of units, regardless of type (sound, formant trajectories, articulatory parameters) and size (diphones, triphones, syllables, longer units).
(Everyday use: concatenation of same-size sound units.)

Unit selection
Definition: all kinds of synthesis based on the concatenation of units where there are several candidates to choose from, regardless of whether the candidates have the same, fixed size or the size is variable.
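Returning to the MFCC pipeline above: the mel mapping is often written with an analytic approximation (the constants below are the common O'Shaughnessy formula, my assumption; the lecture only states linear below 1000 Hz, logarithmic above). The sketch also implements the C_j cosine transform from the cepstrum-from-filterbank slide.

```python
import math

def hz_to_mel(f):
    """Common analytic mel approximation: near-linear below ~1 kHz, log above."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def filter_centers(n_filters=20, f_max=6000.0):
    """Centre frequencies of a mel filter bank: equally spaced on the mel
    axis, hence increasingly widely spaced in Hz."""
    m_max = hz_to_mel(f_max)
    return [mel_to_hz(m_max * (i + 1) / (n_filters + 1))
            for i in range(n_filters)]

def cepstrum_from_filterbank(amps, n_coeffs=4):
    """C_j = sqrt(2/N) * sum_i A_i * cos(j*pi*(i-0.5)/N), j = 1..n_coeffs,
    where A_i are the (log) filter-bank amplitudes."""
    big_n = len(amps)
    return [math.sqrt(2.0 / big_n) *
            sum(a * math.cos(j * math.pi * (i + 0.5) / big_n)
                for i, a in enumerate(amps))
            for j in range(1, n_coeffs + 1)]

centers = filter_centers()
```

hz_to_mel(1000) ≈ 1000, anchoring the scale at the linear/log transition, and a flat filter-bank output yields all-zero cepstral coefficients: only deviations from flatness, i.e. spectral shape, survive the transform.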