Selection of Tutorials and Related Materials for Spoken Language Engineering


CHAPTER 3
SELECTION OF TUTORIALS AND RELATED MATERIALS FOR SPOKEN LANGUAGE ENGINEERING

Klaus Fellbaum, Brandenburg Technical University of Cottbus, Germany
Marian Boldea, Universitatea Politehnica Timisoara, Romania
Andrzej Drygajlo, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Mircea Giurgiu, Technical University of Cluj-Napoca, Romania
Phil Green, University of Sheffield, United Kingdom
Ruediger Hoffmann, Technische Universität Dresden, Germany
Michael McTear, University of Ulster, Northern Ireland
Bojan Petek, University of Ljubljana, Slovenia
Victoria Sanchez, University of Granada, Spain
Kyriakos Sgarbas, University of Patras, Greece

1 Introduction

This chapter summarises work by the Spoken Language Engineering (SLE) Working Group of the Socrates Thematic Network in Speech Communication Sciences. The SLE Working Group, now in its third funding year, has surveyed SLE course provision in Europe (Green et al., 1997) and has made proposals for SLE curriculum development at both undergraduate and postgraduate levels (Espain et al., 1998). The thematic network has shown that computer-based teaching aids (on-line tutorials, demonstration packages and so on) are vital to the future development of SLE education. This follows from the multidisciplinary and technical nature of SLE, which requires novel ways of presenting unfamiliar material. In recent years, such software has begun to appear, partly as a result of initiatives taken within the network, within associated projects and independently. The increasing interest in SLE courseware was demonstrated at the recent MATISSE workshop, on which much of the following review material is based (Hazan and Holland, 1999). The chapter analyses the available software resources in relation to curricular requirements and educational criteria, and makes recommendations for modules in an SLE curriculum.
In addition, we identify areas for which high-quality courseware is, to our knowledge, unavailable and identify actions to fill these remaining gaps. Following the structure of the second book (Bloothooft et al., 1998), we used the sections:

• Introduction to Speech Communication and Speech Technology
• Speech Analysis
• Natural Language Processing
• Speech Production and Perception
• Speech Coding
• Speech Synthesis
• Speech Recognition
• Spoken Dialogue Modelling
• Language Resources

Concerning the subchapters on Applications and Current Research in SLE, we did not find tutorials or other relevant teaching material. This is not very surprising, since applications in speech processing are usually a commercial matter, and a company (presenting applications) normally has no strong interest in a detailed, tutorial-like presentation. For current research in general, too little time has passed to transform the results into a didactically oriented form, so the usual presentation is in scientific articles or proceedings.

Generally speaking, we found a very heterogeneous coverage of the speech communication area, heterogeneous in both the subjects (see above) and the media (Web, CD-ROM, books etc.). We could identify an accumulation of introductory material, mainly in speech production and perception, signal processing and linguistics, but other areas (for example speech coding and synthesis) are not covered satisfactorily. As to the media, the Web presentations were very often only trial versions of a rather promotional character, offering CD-ROMs, books or download of the complete material after payment. Finally, the quality of the material and its didactic quality vary strongly.

As a general remark, it must be stated that SPEECH INPUT is still a difficult problem in the landscape of the Web. SPEECH OUTPUT, in contrast, is quite easy! There are only very few tutorials using speech input.
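However the audio is captured, a tutorial that accepts speech input ultimately receives a buffer of raw audio bytes from the browser plugin or API. As a hypothetical illustration (the helper below is our own sketch, not part of any tool discussed in this chapter), decoding a buffer of 16-bit little-endian PCM bytes into normalised samples ready for analysis looks like this:

```python
import struct

def pcm16le_to_samples(raw: bytes) -> list[float]:
    """Decode 16-bit little-endian PCM bytes into floats in [-1.0, 1.0]."""
    count = len(raw) // 2                        # two bytes per sample
    ints = struct.unpack("<%dh" % count, raw[: count * 2])
    return [s / 32768.0 for s in ints]

# Example: three samples (silence, half amplitude, negative full scale).
buf = struct.pack("<3h", 0, 16384, -32768)
print(pcm16le_to_samples(buf))  # → [0.0, 0.5, -1.0]
```

Sample rate, channel count and byte order differ between recording tools, so such a decoder must match whatever format the capture component actually delivers.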
For the moment, there are only two possibilities to perform speech input:

• The shareware SoundBite (SCRAWL company), which works only under Windows 95/98 or NT with the Netscape browser 4.04 (or higher). For more details and downloading, visit http://www.scrawl.com/store.
• The Tcl/Tk plugin, which can be used to add speech input (and output) to an existing tutorial. If someone wants to produce a new tutorial, the Tcl/Tk libraries are necessary. For more details see http://www.scriptics.com/plugin/.

A third possibility, based on Java2 tools, is in preparation, with a release announced for the end of 1999. For now, only pre-versions are available. One module is the Java Sound API (http://java.sun.com/products/java-media/sound), which offers recording and playback but no storage features. Another Java product is the Java Media Framework, which has storage capabilities and, in addition, network transmission (RTP) features. However, it too is a pre-version. For more details see http://java.sun.com/products/java-media/jmf.

The next sections present a selection of tutorials in detail.

2 Introduction to Speech Communication and Speech Technology

This section deals with introductory material. The subjects cover a wide area, from speech signal representations in the time and frequency domains, through signal processing techniques (windowing, FFT, parameter extraction etc.), to basic principles of acoustics and physiology. Although most of the tutorials are far from complete speech courses, they are very useful as appetizers and can motivate beginners to dive into the speech area.

Speech Visualisation Tutorial
http://isl.ira.uka.de/~maier/speech/vistut/
University of Karlsruhe, Interactive Systems Laboratories
Availability: free.
Requirements: WWW browser with sound replay capabilities.
Description: The tutorial covers visualisation of speech waveforms and spectrograms.
It presents the waveform and a spectrogram of the utterance "speech lab". Labels have been added to the views of the speech, marking the beginning of each phoneme (or speech sound) in the utterance.
Impression: May be used as a short introductory text on spectrograms.

comp.speech Frequently Asked Questions WWW site
http://svr-www.eng.cam.ac.uk/comp.speech/
University of Cambridge, Department of Engineering
Availability: free.
Requirements: WWW browser with sound replay capabilities.
Description: The site provides a range of information on speech technology, including speech synthesis, speech recognition, speech coding, and related material. The information is regularly posted to the comp.speech newsgroup as the "comp.speech FAQ" posting. The site is mirrored at several other WWW sites around the world (Australia, UK, Japan and USA), and the information is also available in plain text format. There are 250 comp.speech WWW pages, including over 500 hyperlinks to speech technology web sites, ftp servers, mailing lists, and newsgroups.
Impression: These web sites are a very useful tool for getting oriented in the world of speech technology. They are not suited as teaching material, but they present a collection of interesting speech themes and very many links to speech products.

Speech Analysis Tutorial
http://www.ling.lu.se/research/speechtutorial/tutorial.html
University of Cambridge, Department of Engineering
Author: Tony Robinson
Availability: free.
Requirements: WWW browser with sound replay capabilities.
Description: A very brief, very introductory tutorial on speech analysis, introducing fundamental speech signal representations (waveform, F0 contour, spectrum, spectrogram, waterfall spectrogram, phonetic transcription), suitable for a first exposure to these topics.
Impression: The tutorial covers a lot of details but gives only short explanations. Thus, it is suited to support lectures in speech signal analysis.
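The waveform-to-spectrogram computation these tutorials visualise is essentially windowing plus FFT. A minimal sketch, assuming NumPy (the frame length and hop size below are illustrative choices, not parameters taken from any of the tutorials above):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: slide a Hanning window, FFT each frame."""
    window = np.hanning(frame_len)
    frames = [
        np.abs(np.fft.rfft(signal[start:start + frame_len] * window))
        for start in range(0, len(signal) - frame_len + 1, hop)
    ]
    return np.array(frames)  # shape: (num_frames, frame_len // 2 + 1)

# A 1 kHz tone sampled at 8 kHz should peak in bin 1000 / (8000 / 256) = 32.
fs = 8000
t = np.arange(2048) / fs
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
print(spec.shape, spec[0].argmax())  # → (15, 129) 32
```

The short frame length trades frequency resolution for time resolution; the broadband/narrowband spectrograms shown in such tutorials correspond to shorter and longer windows respectively.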
Spectrogram Reading Tutorial
http://cslu.cse.ogi.edu/tutordemos/SpectrogramReading/spectrogram_reading.html
Availability: free.
Requirements: WWW browser with sound replay capabilities.
Description: A more extended introduction to speech signal representations, stressing spectrograms and transcription, with many practical exercises.

Das Lesen von Sonagrammen V0.2. Begleitendes Hypertext-Dokument zur Vorlesung (in German: "Reading Sonagrams V0.2. Companion hypertext document to the lecture")
http://www.phonetik.uni-muenchen.de/SGL/SGLHome.html
Institut für Phonetik und Sprachliche Kommunikation der Ludwig-Maximilians-Universität München
Authors: K. Machelett, H.G. Tillmann
Availability: free.
Requirements: WWW browser.
Description: The tutorial deals with spectrogram reading, following the chapters:
• Fundamentals
• The sound classes in the sonagram
• On the differentiation of sounds within the sound classes
• Reading sonagrams in practice
Impression: Companion material to a complete lecture series in spectrogram reading, with high expertise and good pictures.

Das SPRACHLABOR - eine multimediale Einführung in die Welt des Sprechens/der Phonetik (in German: "The SPEECH LAB - a multimedia introduction to the world of speaking/phonetics")
http://www.media-enterprise.de/sprachla/sprachla.htm
Availability: demo version only.
Requirements: WWW browser with sound replay capabilities.
Description: Physiology of the speech organs, acoustic fundamentals of the speech process, spectrogram reading, speech analysis.
Impression: Professional program.

Tutorien und Skripte der Universität Kiel (in German, English and Swedish; "Tutorials and lecture notes of the University of Kiel")
http://www.ipds.uni-kiel.de/links/skripte.de.html
Availability: free.
Requirements: WWW browser with sound replay capabilities.
Description: Course papers and audio demonstrations on acoustic phonetics. Also an interactive course on linguistics, speech synthesis, speech recognition.
Impression: A very useful