ABSTRACT

Title of Document: MEG, PSYCHOPHYSICAL AND COMPUTATIONAL STUDIES OF LOUDNESS, TIMBRE, AND AUDIOVISUAL INTEGRATION

Julian Jenkins III, Ph.D., 2011

Directed By: Professor David Poeppel, Department of Biology

Natural scenes and ecological signals are inherently complex, and our understanding of how they are perceived and processed is incomplete. For example, a speech signal not only contains information at various frequencies; it is also not static, being concurrently modulated in time. In addition, an auditory signal may be paired with information from other senses, as in audiovisual speech. To make sense of the signal, a human observer must process the information provided by low-level sensory systems and integrate it across sensory modalities and with cognitive information (e.g., object identity, phonetic information). The observer must then establish functional relationships between the signals encountered to form a coherent percept. The neuronal and cognitive mechanisms underlying this integration can be quantified in several ways: by taking physiological measurements, by assessing behavioral output on a given task, and by modeling signal relationships. While ecological tokens are complex in ways that exceed our current understanding, progress can be made by using synthetic signals that capture specific essential features of ecological signals. The experiments presented here cover five aspects of complex signal processing using approximations of ecological signals: (i) auditory integration of complex tones composed of different frequencies and component power levels; (ii) audiovisual integration approximating that of human speech; (iii) behavioral measurement of signal discrimination; (iv) signal classification via simple computational analyses; and (v) neuronal processing of synthesized auditory signals approximating speech tokens.
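As a minimal sketch of the kind of synthetic stimulus described in (i), a two-frequency complex tone whose components carry independently set power levels can be generated by summing scaled sinusoids. This is an illustration only, not the dissertation's actual stimulus code; the sampling rate, duration, frequencies, and levels shown are assumptions.

```python
import numpy as np

def complex_tone(freqs_hz, levels_db, dur_s=0.4, fs=44100):
    """Sum sinusoids at the given frequencies, each scaled by a
    relative level in dB (0 dB = unit amplitude), then normalize
    the result to avoid clipping on playback."""
    t = np.arange(int(dur_s * fs)) / fs
    amps = 10.0 ** (np.asarray(levels_db, dtype=float) / 20.0)
    tone = sum(a * np.sin(2 * np.pi * f * t)
               for f, a in zip(freqs_hz, amps))
    return tone / np.max(np.abs(tone))

# Hypothetical example: two components an octave apart,
# with the upper component attenuated by 6 dB.
sig = complex_tone([250.0, 500.0], [0.0, -6.0])
```

Varying the component frequencies and the relative level (here, the dB offsets) is what lets such a paradigm probe loudness and timbre with a stimulus far simpler than natural speech.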
To investigate neuronal processing, magnetoencephalography (MEG) is employed to assess cortical processing non-invasively. Behavioral measures are employed to evaluate observer acuity in signal discrimination and to test the limits of perceptual resolution. Computational methods are used to examine the relationships, in perceptual space and in physiological processing, between synthetic auditory signals, using features of the signals themselves as well as biologically motivated models of auditory representation. Together, these methodologies and experimental paradigms advance our understanding of the complex interactions in ecological signal structure.

MEG, PSYCHOPHYSICAL AND COMPUTATIONAL STUDIES OF LOUDNESS, TIMBRE, AND AUDIOVISUAL INTEGRATION

By Julian Jenkins III

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2011

Advisory Committee:
Professor David Poeppel, Chair
Professor Catherine E. Carr
Associate Professor Jonathan Z. Simon
Associate Professor William J. Higgins
Associate Professor William J. Idsardi
Associate Professor Timothy Horiuchi

© Copyright by Julian Jenkins III, 2011

Dedication

To my mother, Paula Sandra Fullerton Jenkins. And all sentient beings.

Acknowledgements

Gratitude goes to the members of the committee for assistance at various stages of my graduate school career. Thanks also go to the various administrative personnel (Lois Reid, Linda Dalo, Cecilia Jordan, KeCia Harper) who assisted with various queries, issues, and crises.
Lastly, thanks to the following people for help in writing, experimental input, analysis, collaboration, and other aspects of completing the work presented here (in no order of importance): Maria Chait, Huan Luo, Mary Howard, Ariane Rhone, Phil Monahan, So-One Hwang, Brian Dillon, Mathias Scharinger, Jeff Walker, Max Ehrmann, Pedro Alcocer, Patty Shields, Norbert Hornstein, Art Popper, Sharon Staples, the Cognitive Neuroscience of Language Laboratory, C-CEBH, and the Departments of Biology and Linguistics.

Table of Contents

Dedication
Acknowledgements
Table of Contents
List of Figures
Introduction
  M100 Responses to Two-Frequency Complex Tones
  Elicitation of Audiovisual Steady-State Responses using Pseudo-Speech Signals
  Psychophysical Discrimination and Clustering of Ecologically Approximate Synthetic Signals
  Preattentive Classification and Physiological Measurement of Ecologically Approximate Synthetic Signals using MEG
M100 Responses to Two-Frequency Complex Tones
  Introduction
  Materials and Methods
    Subjects
    Threshold Testing
    Perceptual Loudness Estimation
    Stimuli
    Delivery
    Recording
    Data Analysis: Peak RMS and Latency Analysis
    Data Analysis: Dipole Source Estimation
  Results
  Discussion
  Figures
Elicitation of Audiovisual Steady-State Responses using Pseudo-Speech Signals
  Introduction
  Materials and Methods
    Participants
    Stimuli
    Delivery
    Recording and Filtering
    Sensor Selection from Pre-Test
    Onset Response Evaluation
    SSR Analysis
    Across-Participant Response Averaging
    Statistical Analyses
    Participant Head Location
  Results
    Across-Participant Power Analysis
    Statistical Summary
    SSR Power Comparisons
  Discussion
  Figures
Psychophysical Discrimination and Clustering of Ecologically Approximate Synthetic Signals
  Introduction
  Part I: Psychophysical Evaluation of Signal Pairs
    Materials