4. Text-to-Speech Conversion


4. Text-to-Speech Conversion
MÁSTER OFICIAL EN LENGUA ESPAÑOLA Y LITERATURA HISPÁNICA
Application of information and communication technologies (ICT) to the Spanish language
Tecnologías del habla, Carme de-la-Mota, Universitat Autònoma de Barcelona

Course outline
- Interaction between people and computers:
  - Text-to-speech conversion (the topic covered here in most detail)
  - Speech recognition
  - Dialogue systems
- Computer-mediated interaction between people:
  - Translation of spoken languages
  - Translation between spoken language and sign language
  - Speech recognition and generation of animated faces for lip reading
- The need to create and annotate speech corpora
- Applications

Computer-person interaction (photo: teclarosrg)

Automatic speech generation
- Text-to-speech conversion (text-to-speech)
- Speech generation from concepts (concept-to-speech)
- Natural language processing (NLP)

What is text-to-speech conversion?
The written text "Gota." ("Drop.") is turned into the spoken form [ˈgota].

Goals of a text-to-speech (TTS) system
- First: intelligibility
- Second: naturalness

Outline of this unit
1. From the beginnings to the present day
2. The structure of a converter
3. Pending improvements
4. Applications
1. From the beginnings to the present day

- Von Kempelen, 1791: a mechanical simulation of the vocal apparatus; reconstruction by Wheatstone (Deutsches Museum, Munich). http://www.ling.su.se/staff/hartmut/kemplne.htm
- VODER (Voice Operating Demonstrator): Homer Dudley, New York World's Fair, 1939.
- Pattern Playback: Haskins Laboratories, 1951.
- OVE I (Orator Verbis Electris): Gunnar Fant; the first cascade formant synthesizer, 1953.
- DAVO (Dynamic Analog of the VOcal tract): George Rosen, Massachusetts Institute of Technology (MIT); the first articulatory synthesizer, 1958.
- TI Speak'n'Spell: Linear Predictive Coding (LPC), 1980.

For a survey, see Sami Lemmetty (1999) Review of Speech Synthesis Technology, Master's Thesis, Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology. http://www.acoustics.hut.fi/publications/files/theses/lemmetty_mst/index.html

Today: intelligibility is very high and naturalness is acceptable; expressiveness, among other things, still needs improvement.
2. The structure of a converter

Text-to-speech conversion is reading aloud: the text "Gota." becomes [ˈgota].

A converter has two parts:
A. Linguistic processing (the linguistic module)
B. Synthetic speech generation (the synthesis module)

The main modules of a text-to-speech converter are described in J. Llisterri, C. Carbó, M. J. Machuca, C. de-la-Mota, M. Riera, A. Ríos (2003) "El papel de la lingüística en el desarrollo de las tecnologías del habla", in M. Casas Gómez (dir.) and C. Varo Varo (ed.) VII Jornadas de Lingüística. Cádiz: Universidad de Cádiz, ISBN 84-88423-30-6, pp. 137-191. http://liceu.uab.es/~carme/Cadiz_03.pdf http://liceu.uab.es/~joaquim/publicacions/Llisterri_Carbo_Machuca_Mota_Riera_Rios_03_Linguistica_Tecnologias_Habla.pdf

A. Linguistic processing

What phonetic information must the system contain?
- Definition and description of units
- Phonetic transcription
- Syllabification
- Stress
- Intonation
- Junctures…
What lexical, syntactic, semantic and pragmatic information must it contain? And in what order should the modules operate?

1. Preprocessing, or text normalization
The text is prepared so that it can be transcribed phonetically. Example: in "Juan Carlos I", the numeral must be read <primero> ("the first"), not as the letter [i].
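The "Juan Carlos I" example can be sketched in code. The following is a minimal illustration of this kind of normalization rule, not the implementation used in any actual converter; the ordinal table and the regular expression are simplifying assumptions of mine (they ignore grammatical gender and larger numerals, among much else).

```python
import re

# Hypothetical minimal table of Spanish regnal ordinals; a real
# normalizer would cover more numerals and grammatical gender.
ORDINALS = {
    "I": "primero", "II": "segundo", "III": "tercero",
    "IV": "cuarto", "V": "quinto", "VI": "sexto",
}

def expand_regnal_numeral(text: str) -> str:
    """Rewrite 'Name + Roman numeral' as 'Name + ordinal word',
    e.g. 'Juan Carlos I' -> 'Juan Carlos primero', so the
    synthesizer does not read 'I' as the letter [i]."""
    # Longest alternatives first, so 'III' is not matched as 'I'.
    pattern = re.compile(r"([A-ZÁÉÍÓÚ][a-záéíóú]+)\s+(III|II|IV|VI|I|V)\b")
    return pattern.sub(lambda m: f"{m.group(1)} {ORDINALS[m.group(2)]}",
                       text)

print(expand_regnal_numeral("El rey Carlos III llegó."))
# -> El rey Carlos tercero llegó.
```

Even this toy rule shows why normalization needs context: a bare "I" or "V" elsewhere in the text must not be rewritten, which is why the pattern requires a preceding capitalized name.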
1. Preprocessing, or text normalization (continued)
Cases the normalizer must handle:
- Ordinal numbers
- Roman numerals
- Times of day
- Dates
- Units of measurement
- Temperatures
- Monetary expressions
- Acronyms
- Abbreviations
- E-mail and Internet addresses
- Symbols

How should cases like these be preprocessed?
% @ CCOO 1 m kW

"1 m" must become "1 metro", but "150 m" must become "150 metros": expansion requires morphological agreement.

How should the comma and the other punctuation marks be treated?
"El valor era 25, ni más ni menos." ("The value was 25, no more, no less.")
"El valor era 25,3, ni más ni menos."
In Spanish the comma is both a decimal separator and a punctuation mark, and the two uses must be told apart.

More cases: UAM, 6,50 €, 91 213 55 98, 1-12. There is more than one possible output!

Abbreviations: how should these be expanded?
c/ ed. impr. R.D. S.A. trad. v.
For Spanish, consult the DPD (Diccionario Panhispánico de Dudas), Appendix 2, "Lista de abreviaturas": http://buscon.rae.es/dpdI/
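The "1 m" versus "150 m" contrast can be made concrete with a short sketch. This is an illustrative assumption of mine, not the course's system: the unit table is tiny, and a deployed normalizer would also spell out the digits themselves ("ciento cincuenta").

```python
import re

# Illustrative unit table (assumed for this sketch): unit symbol ->
# (singular form, plural form) of the Spanish noun.
UNITS = {"m": ("metro", "metros"), "kW": ("kilovatio", "kilovatios")}

def expand_units(text: str) -> str:
    """Expand '<number> <unit>' with number-noun agreement:
    in Spanish only the exact quantity 1 takes the singular."""
    pattern = re.compile(r"\b(\d+)\s*(" + "|".join(UNITS) + r")\b")
    def repl(m):
        singular, plural = UNITS[m.group(2)]
        return m.group(1) + " " + (singular if m.group(1) == "1" else plural)
    return pattern.sub(repl, text)

print(expand_units("1 m"))    # -> 1 metro
print(expand_units("150 m"))  # -> 150 metros
```

The point of the example is that expansion is not a string substitution: the expanded noun must agree morphologically with the number it follows.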
There is more than one possible expansion for most of them. The DPD gives, for example:
- c/: calle ("street"; also c. and cl.) | cargo (also cgo.) | cuenta (also cta.)
- ed.: edición | editorial (also edit.) | editor, -ra
- impr.: imprenta (also imp.) | impreso
- R.D.: Real Decreto [Spain] (cf. R. O.) | República Dominicana
- S.A.: sociedad anónima (cf. S. L.) | su alteza
- trad.: traducción | traductor, -ra
- v.: véase (cf. vid.) | verso

How well do current systems preprocess text? Try the demos available online with a text such as: "Vamos a ver si se ha preprocesado el texto. Atención. Son las 12:45. El rey Carlos III. Son 100 Km/h. Es el 1º. Hablamos de la UGT, CCOO, el PSOE y la ONU."
- AT&T: http://www.research.att.com/~ttsweb/tts/demo.php
- Acapela ELAN TTS: http://www.acapela-group.com/text-to-speech-interactive-demo.html
- Loquendo

A caveat: what exactly is the goal of preprocessing, and how far should it cover items foreign to the culture in which the converter is normally used (UGT, CCOO, PSOE)?

2. Linguistic analysis
What degree of detail
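Returning to the abbreviation problem: since the DPD assigns several readings to most abbreviations, a preprocessor has to store all candidates and choose one in context. A toy sketch of that data structure, assumed for illustration only (the candidate lists below echo the DPD entries quoted earlier; the pick-first baseline is my own simplification):

```python
# Candidate expansions per abbreviation, after the DPD list.
ABBREVIATIONS = {
    "c/": ["calle", "cargo", "cuenta"],
    "ed.": ["edición", "editorial", "editor"],
    "impr.": ["imprenta", "impreso"],
    "R.D.": ["Real Decreto", "República Dominicana"],
    "S.A.": ["sociedad anónima", "su alteza"],
    "trad.": ["traducción", "traductor"],
    "v.": ["véase", "verso"],
}

def expansions(abbr: str) -> list[str]:
    """All candidate readings; unknown abbreviations are left as-is."""
    return ABBREVIATIONS.get(abbr, [abbr])

def expand_default(abbr: str) -> str:
    """Baseline: pick the first candidate. A real preprocessor would
    disambiguate using the surrounding words or a statistical model."""
    return expansions(abbr)[0]

print(expansions("R.D."))      # -> ['Real Decreto', 'República Dominicana']
print(expand_default("S.A."))  # -> sociedad anónima
```

The baseline makes the ambiguity visible: "S.A." read as "sociedad anónima" is right in a company name and wrong after a royal title, which is exactly why the slides insist that more than one output is possible.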