Spoken Language Generation and Understanding by Machine: A Problems and Applications-Oriented Overview


Spoken Language Generation and Understanding by Machine: A Problems and Applications-Oriented Overview

David R. Hill
Department of Computer Science, University of Calgary, AB, Canada
© 1979, 2015 David R. Hill

Summary

Speech offers humans a means of spontaneous, convenient and effective communication for which neither preparation nor tools are required, and which may be adapted to suit the developing requirements of a communication situation. Although machines lack the feedback, based on understanding and shared experience, that is essential to this form of communication in general, speech is so fundamental to the psychology of the human, and offers such a range of practical advantages, that it is profitable to develop and apply means of speech communication with machines under constraints that allow the provision of adequate dialog and feedback. The paper details the advantages of speech communication and outlines speech production and perception in terms relevant to understanding the rest of the paper. The hierarchy/heterarchy of processes involved in both automatic speech understanding and the machine generation of speech is examined from a variety of points of view, and the current capabilities in terms of viable applications are noted. It is concluded that there are many economically viable applications for both the machine generation of speech and the machine recognition of speech. It is also suggested that, although fundamental research problems still exist in both areas, the barriers to increased use of speech for communication with machines are mainly psychological and social, rather than technological or economic, though problems of understanding and natural dialogue design remain.

This paper is largely based on an earlier paper with the same title, originally presented at the NATO Advanced Summer Institute, held in Bonas, France, June 26 to July 7, 1979, and published in J. C. Simon (ed.) Spoken Language Generation and Understanding, D. Reidel Publishing Co., Dordrecht/Boston/London, 1980, pp 3-38 (DOI: 10.1007/978-94-009-9091-3_1).

1. Introduction: History and Background

Historically, speech has dominated the human scene as a basis for person-to-person communication, writing and printing being used chiefly for record-keeping (especially for legal and government purposes) and long-distance communication. It is only comparatively recently that written language skills have been widely acquired and practised, even in the developed western world, and it is still true that large numbers of people in the world are unable to read or write.
With the advent of television and radio, speech has started to gain ground again, even in the west, and it is a perennial complaint amongst educators that the rising generation is not at ease with the written forms of language. Even universities are finding it necessary to set tests of students’ ability to comprehend and write their native language. It is in this sense that speech may be considered a more fundamental means of human communication than other forms of linguistic expression.

Face-to-face speech allows a freer and more forceful use of language, because it is aided by contextual, situational and all manner of linguistic and para-linguistic features (voice tone, rhythm, the rise and fall of pitch, gestures, pauses and emphasis) that are simply absent from written forms. Grammatical rules of all kinds may be broken with impunity in speech to give a stream of language that would be incomprehensible in written form. Even for careful speech (as in an important meeting) it is sobering to read a verbatim transcript of the proceedings afterwards. However, everyone understands at the time and (perhaps more important) if they don’t, they can seek immediate clarification. The planning load for speech communication is less, even over a telephone, because ideas can be added to and expanded as required. A written document must anticipate and deal with all questions and obscurities that could arise for arbitrary readers, whenever they read it, and under whatever circumstances, without all the additional resources inherent in speech. This is why written documents follow such a mass of conventions and rules so precisely. That kind of redundancy and familiarity is necessary to overcome the many disadvantages with which written documents are burdened. But, in this sense, written documents can remain precise despite the test of time and circumstances.

Human beings can usually communicate far more readily, conveniently, and effectively using speech than they can by writing, provided there is adequate feedback and responses do not overload the human short-term memory. Under these conditions, the subject feels more comfortable and is less encumbered. This is what people really mean when they say that speech is more natural and convenient as a means of communication. With reduced feedback, speech must put on some of the formal trappings of written language, which explains the difference in character between a lecture, a seminar and a conversation. With increasing interaction and feedback, the constraints of language may be increasingly relaxed, short cuts taken, and the communication experience greatly improved. Books record material, but teachers and lecturers are likely to communicate what is recorded in a more effective, succinct (if different) form by speaking, depending on their familiarity with the subject(s). It is, of course, easier to close a book (and do something else) than to walk out of a well-delivered lecture or seminar. Speech also has immediacy, impact, and holds attention better.

Thus the first reason for wanting a speech interface, as opposed to any other, is based in the psychology of the human. It is natural, convenient, and indeed almost expected of any system worth communicating with.
There are, however, a number of very practical advantages:

(a) it leaves the human body free for other activities, providing an extra channel for multi-modal communication in the process;
(b) it is well-suited to providing “alert” or break-in messages, even for quite large groups, and may communicate urgency in a way only rivalled by real flames;
(c) no tool is required;
(d) it is omnidirectional, so does not require a fixed operator position;
(e) it is equally effective in the light or the dark, or, in fact, under many conditions when visual or tactile communication would be impeded, such as in smoke, or under conditions of high acceleration or vibration (though the latter may cause some interference with human speech production);
(f) it avoids the use of keyboards and displays, thereby taking up less panel space and avoiding the need for special operator training to cope with the key layouts and display interpretation;
(g) it allows simple monitoring of command/response sequences by remote personnel or recording devices;
(h) it can allow a single channel to serve more than one purpose (e.g. communication with aircraft at the same time as data entry to an Air Traffic Control computer);
(i) it allows blind people, or those with manual disabilities, to integrate into Human-Computer Interaction (HCI) systems with ease;
(j) it allows security checking on the basis of voice characteristics; and, perhaps most important,
(k) it is compatible with the widely available and inexpensive public telephone system, including mobile devices.

Against these we must set a number of disadvantages:

(a) it may be subject to interference by noise;
(b) it is not easily scanned for purposes of selecting infrequent items from some large body of information;
(c) special precautions must be taken for verification and feedback, particularly for messages initiating critical actions, for reasons previously discussed;
(d) unrestricted speech input to machines is likely to be unavailable for a long time to come (in fairness, it should be pointed out that there are severe restrictions on other forms of input, the removal of which would involve the solution of similar problems);
(e) speech input to machines seems likely always to require additional expense, compared to written input, simply because of the considerable extra processing power and memory space needed to decode the inherently ill-defined and ambiguous acoustic evidence, a step more akin to using handwriting for input in place of well-defined, machine-generated character codes. However, these costs may be tolerable, given the falling price and increasing power of the necessary hardware, just as Optical Character Recognition (OCR) is now viable.

It is concluded that we should use speech to communicate with machines. People need to interact with machine systems in the most effective way under a variety of circumstances in which speech offers solutions to problems, or communication-enhancing advantages. Especially,