E-LEARNING FRAMEWORKS 29 Introduction

Total Pages: 16

File Type: PDF, Size: 1020 KB

Voice Interactive Classroom, a service-oriented software architecture to enable cross-platform multi-channel access to Internet-based learning

Víctor Manuel Álvarez García
UNIVERSITY OF OVIEDO, DEPARTMENT OF COMPUTER SCIENCE
In Partial Fulfilment of the Requirements for the Degree of EUROPEAN DOCTOR OF PHILOSOPHY

Depósito Legal: AS. 00480-2011
http://hdl.handle.net/10803/77825

WARNING. Access to the contents of this doctoral thesis and its use must respect the rights of the author. It can be used for reference or private study, as well as in research and learning activities or materials, in the terms established by article 32 of the Spanish Consolidated Copyright Act (RDL 1/1996). Express prior authorization of the author is required for any other use. In any case, when using its content, the full name of the author and the title of the thesis must be clearly indicated. Reproduction or other forms of for-profit use or public communication from outside the TDX service is not allowed. Presentation of its content in a window or frame external to TDX (framing) is not authorized either. These rights affect both the content of the thesis and its abstracts and indexes.

Provided for non-commercial research and education use. Not for reproduction, distribution or commercial use.

Víctor Manuel Álvarez García (2011), "Voice Interactive Classroom, a service-oriented software architecture to enable cross-platform multi-channel access to Internet-based learning", Ph.D. Thesis Dissertation, University of Oviedo, Spain. European Ph.D. Dissertation Defence: 17 February 2011.

Supervisors: Dr. María del Puerto Paule Ruiz, University of Oviedo, Spain; Dr. Marcus Specht, Open University of the Netherlands, The Netherlands.
Reviewers: Dr. Marie Joubert, University of Bristol, England; Dr. Lorella Giannandrea, University of Macerata, Italy.
Committee: Prof. Dr. Manuel Ortega Cantero, University of Castilla-La Mancha, Spain; Prof. Dr. Rob Koper, Open University of the Netherlands, The Netherlands; Prof. Dr. Juan Manuel Cueva Lovelle, University of Oviedo, Spain; Prof. Dr. Manuel Pérez Cota, University of Vigo, Spain; and
Dr. Juan Ramón Pérez Pérez, University of Oviedo, Spain.

Spanish title: Aula de Voz Interactiva, una arquitectura software orientada a servicios para habilitar un acceso multiplataforma y multicanal al aprendizaje basado en Internet (Universidad de Oviedo, Departamento de Informática).

Doctoral thesis deposited on 5 November 2010 for the degree of European Doctor at the University of Oviedo. Supervised by Dra. María del Puerto Paule Ruiz (Universidad de Oviedo) and Dr. Marcus Matthäus Specht (Centre for Learning Sciences and Technologies, Open Universiteit Nederland).

Abstract

Software technology is creating a ubiquitous context for human living and learning in which new modes of interaction are gradually being incorporated. At the same time, interaction with Internet-based learning systems has evolved from the traditional access through the web browser of a personal computer or laptop to more flexible access from mobile devices. In both cases, however, e-learning systems have created a context in which interaction with the user relies mainly on visual perception. This restricts the number of scenarios in which students can make use of a learning management system. Audio-based software brings a new way of interacting with the Internet. Visual access to Internet resources can be complemented with audio interfaces to adapt e-learning systems to different educational settings and learning styles, but auditory access to web resources needs broader support from software architectures. This dissertation proposes a service-oriented approach, supported by the e-learning specifications IMS Abstract Framework and the Open Knowledge Initiative, to provide a complementary audio communication channel and to enable the interoperability of audio features among the various e-learning platforms and components. In addition, the use of the W3C VoiceXML recommendations, together with the methods and techniques of adaptive educational hypermedia, makes it possible to overcome the limitations of unusual or non-standard voice-dialogue description languages and to better adapt the on-line learning process to innovative e-learning scenarios. The components of this model have been mapped to a software architecture named "Voice Interactive Classroom", which serves as a starting point for the design of research case studies that aim to demonstrate the benefits of the proposed solution. The validation of this thesis has been achieved by setting up and following an active strategy for scientific outreach; activities include planning, coordinating and supervising final-year projects, master projects and master dissertations, presenting articles and communications at scientific meetings, and publishing this research in a prestigious scientific journal.
Acknowledgements

This Ph.D. dissertation has been possible only because of the help and support of a large number of professors, colleagues, students, relatives and friends. Thank you for being part of my work and my life; this achievement is also yours. I am heartily thankful to Dr. Andrés Sampedro Nuño and Dr. María del Puerto Paule Ruiz. Without their guidance, support and encouragement over the years this thesis would not exist. My deep thanks to Dr. Juan Manuel Cueva Lovelle, Dr. Víctor Guillermo García, Dr. Jose Ramiro Varela and Dr. María Camino Rodríguez Vela,
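The abstract above proposes describing voice dialogues with the W3C VoiceXML recommendation instead of unusual or non-standard languages. As a rough, illustrative sketch of what such a dialogue looks like (not code from the thesis; the quiz question, field name and inline grammar below are invented for the example), the following Python snippet generates a minimal VoiceXML 2.0 form that prompts a learner and listens for one of a fixed set of answers:

```python
# Minimal sketch, not the thesis' implementation: build a small VoiceXML 2.0
# dialogue using only the Python standard library. The question, field name
# and inline grammar are hypothetical examples.
import xml.etree.ElementTree as ET

def build_quiz_dialogue(question: str, choices: list[str]) -> str:
    """Return a VoiceXML document that asks `question` and listens for one of `choices`."""
    vxml = ET.Element("vxml", version="2.0", xmlns="http://www.w3.org/2001/vxml")
    form = ET.SubElement(vxml, "form", id="quiz")
    field = ET.SubElement(form, "field", name="answer")

    prompt = ET.SubElement(field, "prompt")
    prompt.text = f"{question} Say one of: {', '.join(choices)}."

    # Simple inline grammar listing the accepted answers.
    grammar = ET.SubElement(field, "grammar", root="choice")
    rule = ET.SubElement(grammar, "rule", id="choice")
    one_of = ET.SubElement(rule, "one-of")
    for choice in choices:
        ET.SubElement(one_of, "item").text = choice

    # When the field is filled, read the recognized answer back to the learner.
    filled = ET.SubElement(field, "filled")
    confirm = ET.SubElement(filled, "prompt")
    confirm.text = "You said "
    ET.SubElement(confirm, "value", expr="answer")

    return ET.tostring(vxml, encoding="unicode")

if __name__ == "__main__":
    print(build_quiz_dialogue("Which chapter introduces the architecture?",
                              ["one", "two", "three"]))
```

A voice browser fetching a document like this would speak the prompt with text-to-speech and use speech recognition to fill the answer field, which is the kind of complementary audio channel the dissertation argues for.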
Recommended publications
  • The Development of Accented English Synthetic Voices, by Promise Tshepiso Malatji
    The Development of Accented English Synthetic Voices, by Promise Tshepiso Malatji. Dissertation submitted in fulfilment of the requirements for the degree of Master of Science in Computer Science in the Faculty of Science and Agriculture (School of Mathematical and Computer Sciences) at the University of Limpopo. Supervisor: Mr M.J.D. Manamela; co-supervisor: Dr T.I. Modipa, 2019. Dedication: In memory of my grandparents, Cecilia Khumalo and Alfred Mashele, who always believed in me! Declaration: I declare that The Development of Accented English Synthetic Voices is my own work, that all the sources I have used or quoted have been indicated and acknowledged by means of complete references, and that this work has not been submitted before for any other degree at any other institution. Acknowledgements: I want to recognise the following people for their individual contributions to this dissertation: my brother, Mr B.I. Khumalo, and the whole family for the unconditional love, support and understanding; a distinct thank you to both my supervisors, Mr M.J.D. Manamela and Dr T.I. Modipa, for their guidance, motivation, and support; the Telkom Centre of Excellence for Speech Technology for providing the resources and support to make this study a success; my colleagues in the Department of Computer Science, Messrs V.R. Baloyi and L.M. Kola, for always motivating me; a special thank you to Mr T.J. Sefara for taking his time to participate in the study; and the six Computer Science undergraduate students who sacrificed their precious time to participate in data collection.
  • Virtual Pop: Gender, Ethnicity, and Identity in Virtual Bands and Vocaloid
    Virtual Pop: Gender, Ethnicity, and Identity in Virtual Bands and Vocaloid. Alicia Stark, Cardiff University School of Music, 2018. Presented in partial fulfilment of the requirements for the degree Doctor of Philosophy in Musicology. Table of contents: Abstract; Dedication; Acknowledgements; Introduction (existing studies of virtual bands; research questions; methodology; thesis structure); Chapter 1, ‘You’ve Come a Long Way, Baby:’ The History and Technologies of Virtual Bands (categories of virtual bands; an animated anthology – the rise in popularity of animation; Alvin and the Chipmunks… and their successors; virtual bands for all ages, available on your TV; virtual bands in other types of media; creating the voice; reproducing the body; conclusion); Chapter 2, ‘Almost Unreal:’ Towards a Theoretical Framework for Virtual Bands (defining reality and virtual reality; applying theories of ‘realness’ to virtual bands; understanding multimedia; applying theories of multimedia to virtual bands; the voice in virtual bands; agency: transformation through technology; conclusion); Chapter 3, ‘Inside, Outside, Upside Down:’ Gender and Ethnicity in Virtual Bands (gender; ethnicity; case studies: Dethklok, Josie and the Pussycats, Studio Killers; conclusion); Chapter 4, ‘Spitting Out the Demons:’ Gorillaz’ Creation Story and the Construction of Authenticity (academic discourse on Gorillaz; masculinity in Gorillaz; ethnicity in Gorillaz; Gorillaz fandom; conclusion).
  • Masterarbeit (Master's Thesis)
    Master's thesis (Masterarbeit): Erstellung einer Sprachdatenbank sowie eines Programms zu deren Analyse im Kontext einer Sprachsynthese mit spektralen Modellen (Creation of a speech database and of a program for its analysis in the context of speech synthesis with spectral models). Submitted by Tobias Platen in August 2014 to the Department of Mathematics, Natural Sciences and Computer Science of the Technische Hochschule Mittelhessen in order to obtain the academic degree of Master of Science. Referee: Prof. Dr. Erdmuthe Meyer zu Bexten; co-referee: Prof. Dr. Keywan Sohrabi. Declaration in lieu of oath: I hereby affirm that I produced the present work independently, using only the literature and aids indicated. The work has not previously been submitted in the same or a similar form to any other examination authority, nor has it been published. Contents: 1 Introduction (1.1 Motivation; 1.2 Goals; 1.3 Historical speech syntheses: 1.3.1 The speaking machine, 1.3.2 The Vocoder and the Voder, 1.3.3 Linear Predictive Coding; 1.4 Modern speech synthesis algorithms: 1.4.1 Formant synthesis, 1.4.2 Concatenative synthesis); 2 Spectral models for speech synthesis (2.1 Convolution, Fourier transform and vocoder; 2.2 Phase vocoder; 2.3 Spectral Model Synthesis: 2.3.1 Harmonic trajectories, 2.3.2 Shape invariance…)
  • Wormed Voice Workshop Presentation
    Wormed Voice Workshop Presentation micro_research December 27, 2017 1 some worm poetry and songs: The WORM was for a long time desirous to speake, but the rule and order of the Court enjoyned him silence, but now strutting and swelling, and impatient, of further delay, he broke out thus... [Michael Maier] He worshipped the worm and prayed to the wormy grave. Serpent Lucifer, how do you do? Of your worms and your snakes I'd be one or two; For in this dear planet of wool and of leather `Tis pleasant to need neither shirt, sleeve, nor shoe, 2 And have arm, leg, and belly together. Then aches your head, or are you lazy? Sing, `Round your neck your belly wrap, Tail-a-top, and make your cap Any bee and daisy. Two pigs' feet, two mens' feet, and two of a hen; Devil-winged; dragon-bellied; grave- jawed, because grass Is a beard that's soon shaved, and grows seldom again worm writing the the the the,eeeronencoug,en sthistit, d.).dupi w m,tinsprsool itav f bometaisp- pav wheaigelic..)a?? orerdi mise we ich'roo bish ftroo htothuloul mespowouklain- duteavshi wn,jis, sownol hof." m,tisorora angsthyedust,es, fofald,junss ownoug brad,)fr m fr,aA?a????ck;A?stelav aly, al is.'rady'lfrdil owoncorara wns t.) sh'r, oof ofr,a? ar,a???????a? fu mo towess,eethen hrtolly-l,."tigolav ict,a???!ol, w..'m,elyelil,tstreamas..n gotaillas.tansstheatsea f mb ispot inici t.) owar.**1 wnshigigholoothtith orsir.tsotic.'m, sotamimoledug imootrdeavet..t,) sh s,tranciror."wn sieee h asinied.tiear wspilotor,) bla av.nicord,ier.dy'et.*tite m.)..*d, hrouceto hie, ig il m, bsomoug,.t.'l,t, olitel bs,.nt,.dotr tat,)aa? htotitedont,j alesil, starar,ja taie ass.nishiceroouldseal fotitoonckysil, m oitispl o anteeeaicowousomirot.
  • Biomimetics of Sound Production, Synthesis and Recognition
    Design and Nature V, 273. Biomimetics of sound production, synthesis and recognition. G. Rosenhouse, Swantech - Sound Wave Analysis & Technologies Ltd, Haifa, Israel. Abstract: Biomimesis of sound production, synthesis and recognition follows millennia of adaptation as it developed in nature. Technical imitation of these communication means began when people started to build speaking machines based on natural speech concepts. The first devices were mechanical, and they remained in use until the end of the 19th century. Those developments later led to modern speech and music synthesis, initially applying pure mechanics. Since the beginning of the 20th century, electronics has taken the lead, bringing independent electronic achievements in human communication abilities. As shown in the present paper, this development was pursued throughout history in order to satisfy human needs. Keywords: speaking machines, biomimetics, speech synthesis. 1 Introduction: Automation is an old attempt by people to tame machines for the needs of human beings. As far as we know, it began mainly around AD 70 (with Heron). In linguistics, the history of speaking machines began in the 2nd century, when people tried to build speaking heads. In practice, however, the process was initiated in the 18th century with Wolfgang von Kempelen, who invented a speaking mechanism that simulated the speech system of human beings. It opened the way to inventions of devices for producing artificial vowels and consonants. This development was an important step towards modern speech recognition
  • Siren Songs and Echo's Response: Towards a Media Theory of the Voice
    Published as _Article in On_Culture: The Open Journal for the Study of Culture (ISSN 2366-4142). SIREN SONGS AND ECHO’S RESPONSE: TOWARDS A MEDIA THEORY OF THE VOICE IN THE LIGHT OF SPEECH SYNTHESIS. CHRISTOPH BORBACH, [email protected]. Christoph Borbach is a research fellow at the research training group Locating Media at the University of Siegen, where he conducts a research project entitled “Zeitkanäle|Kanalzeiten” (“Time Channels|Channel Times”) on the media history of the operationalization of delay time from a media archaeological perspective. Borbach studied Musicology, Media, and History at the Humboldt University of Berlin. His bachelor’s thesis dealt with radio theories between ideology and media epistemology. For his master’s thesis, he studied the media-technical implementations of echoes. His research interests include media theory of the voice, media archaeology of the echo, operationalization of the sonic, time-critical detection technologies and their visualization strategies, and the occult of media/media of the occult. KEYWORDS: speech synthesis, phonography, voice theory, embodied voices, speaking machines, technotraumatic affects. PUBLICATION DATE: Issue 2, November 30, 2016. HOW TO CITE: Christoph Borbach. “Siren Songs and Echo’s Response: Towards a Media Theory of the Voice in the Light of Speech Synthesis.” On_Culture: The Open Journal for the Study of Culture 2 (2016). <http://geb.uni-giessen.de/geb/volltexte/2016/12354/>. Permalink URL: <http://geb.uni-giessen.de/geb/volltexte/2016/12354/>. URN: <urn:nbn:de:hebis:26-opus-123545>. _Abstract: In contrast to phonographical recording, storage, and reproduction of the voice, most media theories, especially prominent media theories of the human voice, have neglected the aspect of synthesizing human-like voices by non-human means.
  • Speech Generation: from Concept and from Text
    Speech Generation: From Concept and from Text. Martin Jansche, CS 6998, 2004-02-11. Components of spoken output systems. Front end, from input to control parameters: from naturally occurring text; or from constrained mark-up language; or from semantic/conceptual representations. Back end, from control parameters to waveform: articulatory synthesis; or acoustic synthesis (based predominantly on speech samples, or using mostly synthetic sources). Who said anything about computers? Wolfgang von Kempelen, Mechanismus der menschlichen Sprache nebst Beschreibung einer sprechenden Maschine, 1791; Charles Wheatstone’s reconstruction of von Kempelen’s machine. Joseph Faber’s Euphonia, 1846. Modern articulatory synthesis: output produced by an articulatory synthesizer from Dennis Klatt’s review article (JASA 1987); Praat demo; overview at Haskins Laboratories (Yale). The Voder, an acoustic synthesizer: developed by Homer Dudley at Bell Telephone Laboratories, 1939; architectural blueprint for the Voder; output produced by the Voder. The Pattern Playback: developed by Franklin Cooper at Haskins Laboratories, 1951. No human operator required; the machine plays back a previously drawn spectrogram (the spectrograph was invented a few years earlier). Can you understand what it says? Output produced by the Pattern Playback: “These days a chicken leg is a rare dish. It’s easy to tell the depth of a well. Four hours of steady work faced us.” Synthesis-by-rule: realization that the spectrograph and the Pattern Playback are really only recording and playback devices.
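The front-end/back-end split sketched in these lecture notes is the standard way to decompose a text-to-speech system. The toy pipeline below only illustrates that division of labour and is not code from the notes: the letter-to-symbol table and the tone-per-symbol "waveform" are invented placeholders for what a real front end and back end would do.

```python
# Toy illustration of the front end / back end split (placeholders only,
# not a real synthesizer).
import math

# Front end: input text -> control parameters (here, a crude letter lookup).
VOWELS = {"a": "AH", "e": "EH", "i": "IY", "o": "OW", "u": "UW"}

def front_end(text: str) -> list[str]:
    """Map text to a phoneme-like control sequence."""
    return [VOWELS.get(ch, ch.upper()) for ch in text.lower() if ch.isalpha()]

def back_end(controls: list[str], sample_rate: int = 16000) -> list[float]:
    """Render control parameters as a waveform: 50 ms of a sine tone per symbol.
    A real back end would use articulatory, concatenative or other acoustic synthesis."""
    samples = []
    for i, _ in enumerate(controls):
        freq = 200.0 + 20.0 * i                 # arbitrary pitch ramp
        for n in range(sample_rate // 20):      # 50 ms per control symbol
            samples.append(math.sin(2 * math.pi * freq * n / sample_rate))
    return samples

if __name__ == "__main__":
    controls = front_end("hello")               # e.g. ['H', 'EH', 'L', 'L', 'OW']
    audio = back_end(controls)
    print(controls, len(audio), "samples")
```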
  • Chapter 8 Speech Synthesis
    PRELIMINARY PROOFS. Unpublished Work © 2008 by Pearson Education, Inc. To be published by Pearson Prentice Hall, Pearson Education, Inc., Upper Saddle River, New Jersey. All rights reserved. Permission to use this unpublished Work is granted to individuals registering through [email protected] for the instructional purposes not exceeding one academic term or semester. Chapter 8: Speech Synthesis. “And computers are getting smarter all the time: Scientists tell us that soon they will be able to talk to us. (By ‘they’ I mean ‘computers’: I doubt scientists will ever be able to talk to us.)” Dave Barry. In Vienna in 1769, Wolfgang von Kempelen built for the Empress Maria Theresa the famous Mechanical Turk, a chess-playing automaton consisting of a wooden box filled with gears, and a robot mannequin sitting behind the box who played chess by moving pieces with his mechanical arm. The Turk toured Europe and the Americas for decades, defeating Napoleon Bonaparte and even playing Charles Babbage. The Mechanical Turk might have been one of the early successes of artificial intelligence if it were not for the fact that it was, alas, a hoax, powered by a human chess player hidden inside the box. What is perhaps less well known is that von Kempelen, an extraordinarily prolific inventor, also built between 1769 and 1790 what is definitely not a hoax: the first full-sentence speech synthesizer. His device consisted of a bellows to simulate the lungs, a rubber mouthpiece and a nose aperture, a reed to simulate the vocal folds, various whistles for each of the fricatives, and a small auxiliary bellows to provide the puff of air for plosives.
  • Documenting Your PHP Code with PHPDoc and DocBook XML
    Documenting your PHP code with PHPDoc and DocBook XML. Gábor Hojtsy, International PHP Conference 2002, November 3-6, 2002. Questions: Who has used or knows about PHPDoc? JavaDoc? DocBook XML? XML at all? Anyway, who documents his/her code regularly using any format? About myself: Gábor Hojtsy, [email protected]; student at Budapest University of Technology and Economics; PHP.net Webmasters Team; PHP Documentation Team [build system, XSLT rendering, CHM format, Hungarian translation]. Session outline: documentation types; developer documentation; possible formats; inline documentation in PHP code; standalone documentation in DocBook; tools to handle the two and merge contents. Documentation types: software written in PHP; plans, concepts, specification [UML]; as the software is stable: user’s documentation (help buttons on web pages; support pages, simple manuals) and developer’s documentation. Developer’s documentation: general information (authors, license, links); concepts explained (reasons to develop this software; technologies, libraries used); implementation details (meta information about classes, functions, variables, constants, with usage examples); historical information (previously supported API[s], future directions, todo). Expected qualities: all information in one documentation; easily accessible / modifiable; industry-wide used format; cross-platform tools for viewing and
  • Speech Research (ERIC Document Resume ED 052 654, FL 002 384)
    DOCUMENT RESUME ED 052 654 FL 002 384. TITLE: Speech Research: A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for its Investigation, and Practical Applications. 1 July - 30 September 1970. INSTITUTION: Haskins Labs., New Haven, Conn. SPONS AGENCY: Office of Naval Research, Washington, D.C., Information Systems Research. REPORT NO: SR-23. PUB DATE: Oct 70. NOTE: 211p. EDRS PRICE: MF-$0.65 HC-$9.87. DESCRIPTORS: Acoustics, *Articulation (Speech), Artificial Speech, Auditory Discrimination, Auditory Perception, Behavioral Science Research, *Laboratory Experiments, *Language Research, Linguistic Performance, Phonemics, Phonetics, Physiology, Psychoacoustics, *Psycholinguistics, *Speech, Speech Clinics, Speech Pathology. ABSTRACT: This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. The reports contained in this particular number are state-of-the-art reviews of work central to the Haskins Laboratories’ areas of research. The papers included are: (1) “Phonetics: An Overview,” (2) “The Perception of Speech,” (3) “Physiological Aspects of Articulatory Behavior,” (4) “Laryngeal Research in Experimental Phonetics,” (5) “Speech Synthesis for Phonetic and Phonological Models,” (6) “On Time and Timing in Speech,” and (7) “A Study of Prosodic Features.” (Author) SR-23 (1970) SPEECH RESEARCH: A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for its Investigation, and Practical Applications, 1 July - 30 September 1970. U.S. DEPARTMENT OF HEALTH, EDUCATION & WELFARE, OFFICE OF EDUCATION. THIS DOCUMENT HAS BEEN REPRODUCED EXACTLY AS RECEIVED FROM THE PERSON OR ORGANIZATION ORIGINATING IT. POINTS OF VIEW OR OPINIONS STATED DO NOT NECESSARILY REPRESENT OFFICIAL OFFICE OF EDUCATION POSITION OR POLICY.
  • Mechanical Speech Synthesis in Early Talking Automata
    Mechanical Speech Synthesis in Early Talking Automata. Gordon J. Ramsay, Spoken Communication Laboratory, Marcus Autism Center, 1920 Briarcliff Road NE, Atlanta, Georgia 30329, USA. Email: [email protected]. Early attempts at synthesizing speech using mechanical models of the vocal tract prefigure modern embodied theories of speech production. Introduction: Three centuries of scientific research on speech production have seen significant progress in understanding the relationship between articulation and acoustics in the human vocal tract. Over this period, there has been a marked shift in approaches to experimentation, driven by the emergence of new technologies and the novel ideas these have stimulated. The greatest advances during the last hundred years have arisen from the use of electronic or computer simulations of vocal tract acoustics for the analysis, synthesis, and recognition of speech. Before this was possible, the focus necessarily lay in detailed observation and direct experimental manipulation of the physical mechanisms underlying speech using mechanical models of the vocal tract, which were the new technology of their time. Understanding the history of the problems encountered and solutions proposed in these largely forgotten attempts to develop speaking machines that mimic the actual physical processes governing voice production can help to highlight fundamental issues that are still outstanding in this field. Many recent embodied theories of speech production and perception actually directly recapitulate proposals that arose from early talking automata. The Voice as a Musical Instrument: By the beginning of the seventeenth century, the anatomy of the head and neck was already well understood, as witnessed by the extraordinarily detailed illustrations found in many books of the period (e.g., Casserius, 1600).
  • Speech Synthesis
    Contents: 1 Introduction (1.1 Quality of a Speech Synthesizer; 1.2 The TTS System); 2 History (2.1 Electronic Devices); 3 Synthesizer Technologies (3.1 Waveform/Spectral Coding; 3.2 Concatenative Synthesis: 3.2.1 Unit Selection Synthesis, 3.2.2 Diphone Synthesis, 3.2.3 Domain-Specific Synthesis; 3.3 Formant Synthesis; 3.4 Articulatory Synthesis; 3.5 HMM-Based Synthesis; 3.6 Sine Wave Synthesis); 4 Challenges (4.1 Text Normalization Challenges: 4.1.1 Homographs, 4.1.2 Numbers and Abbreviations; 4.2 Text-to-Phoneme Challenges; 4.3 Evaluation Challenges); 5 Speech Synthesis in Operating Systems (5.1 Atari; 5.2 Apple; 5.3 AmigaOS; 5.4 Microsoft Windows); 6 Speech Synthesis Markup Languages; 7 Applications (7.1 Contact Centers; 7.2 Assistive Technologies; 7.3 Gaming and Entertainment); 8 References. © Specialty Answering Service. All rights reserved. 1 Introduction: The word ‘synthesis’ is defined by Webster’s Dictionary as ‘the putting together of parts or elements so as to form a whole’. Speech synthesis generally refers to the artificial generation of the human voice, either in the form of speech or in other forms such as song. The computer system used for speech synthesis is known as a speech synthesizer. There are several types of speech synthesizers (both hardware based and software based) with different underlying technologies.
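Of the synthesizer technologies listed in this overview, formant synthesis is the simplest to illustrate compactly: a periodic glottal source is passed through a few resonant filters placed at formant frequencies. The sketch below is a generic illustration of that idea, not code from the whitepaper; the formant frequencies and bandwidths are rough textbook values for an /a/-like vowel.

```python
# Minimal formant-synthesis sketch (illustrative only): filter a glottal-like
# impulse train through a cascade of second-order resonators.
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

def resonator(freq_hz, bandwidth_hz, sample_rate):
    """Coefficients of a second-order IIR resonator (digital formant filter)."""
    r = np.exp(-np.pi * bandwidth_hz / sample_rate)
    theta = 2 * np.pi * freq_hz / sample_rate
    a = [1.0, -2.0 * r * np.cos(theta), r * r]  # denominator
    b = [1.0 - r]                               # crude gain normalization
    return b, a

def synthesize_vowel(duration_s=0.5, f0=120.0, sample_rate=16000):
    """Generate an /a/-like vowel: an impulse train at f0 through three formants."""
    n = int(duration_s * sample_rate)
    source = np.zeros(n)
    source[::int(sample_rate / f0)] = 1.0       # glottal pulse train
    signal = source
    # Approximate formant frequencies and bandwidths (rough values for /a/).
    for freq, bw in [(700, 110), (1220, 120), (2600, 160)]:
        b, a = resonator(freq, bw, sample_rate)
        signal = lfilter(b, a, signal)
    return 0.9 * signal / np.max(np.abs(signal))

if __name__ == "__main__":
    audio = synthesize_vowel()
    wavfile.write("vowel_a.wav", 16000, (audio * 32767).astype(np.int16))
```

A full formant synthesizer adds many more time-varying parameters and parallel filter branches; the cascade of three fixed resonators above is only the core idea.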