Module 5: Phonation and Oro-Nasal Processes

Subject name: Linguistics
Paper number & name: 2; Introduction to Phonetics & Phonology
Paper Coordinator name & contact: Pramod Pandey, Centre for Linguistics, SLL&CS, Jawaharlal Nehru University, New Delhi-110067
Module id: Lings_P2_M5
Module name: Phonation and Oro-nasal Processes
Content Writer (CW) name: Pramod Pandey
Email id: [email protected]
Phone: 011-26741258, -9810979446

Objectives:
• To look at the physiological structures and functions of the larynx and the velum in relation to speech sounds
• To help students get practice in the articulation of the sounds produced by these organs of the vocal tract

Contents:
5.1 Introduction
5.2 Phonation Process
5.2.1 The larynx
5.2.2 States of the glottis and Phonation Types
5.2.3 Practice exercises
5.3 Oro-nasal Process
5.3.1 The Velum
5.3.2 Oral sounds, Nasal consonants and Nasalized sounds
5.4 Summary

5.1 Introduction

In the present module, we discuss two processes of speech: the phonation process and the oro-nasal process. The phonation process deals with the various types of sounds that are produced with the vocal cords held in different positions. We will see how the phonation process depends on the structure of the larynx. The oro-nasal process makes available the options for letting the air escape through the two cavities, the oral cavity and the nasal cavity, and thereby for the production of three different types of sounds.

5.2 The Phonation Process

Once the air-stream process, which we dealt with in Module 4, is set in motion, the phonation process, which depends on the larynx, takes over.

5.2.1 The larynx

The larynx is located in the neck, below the pharynx and the oral and nasal cavities. Evolutionary biologists (see e.g. Lieberman & Crelin 1971, Fitch 2002) tell us that in the course of human evolution from a common ancestor with chimpanzees and the other apes, there was a crucial development in the positioning of the larynx: it was lowered, adding a vertical tube to the human vocal tract. According to Fitch (2002), "…the two-tube vocal tract allows us to produce [a] wider range of vowels, and probably other speech sounds, than would a single-tube tract." For the difference between the human and chimpanzee structures, see Figure 5-1 below.

Figure 5-1: The larynx in humans and chimpanzees
Downloaded from: http://users.ugent.be/~mvaneech/Verhaegen%20&Munro.%202004.%20Speech.%20Human%20Evolution%2019,%2053-70_files/image004.jpg

The location of the larynx is tied to the three important functions it performs: it controls the flow of breath, protects the windpipe and regulates the production of speech sounds. The central organs of the larynx are the vocal folds, which are involved in all three functions. The vocal folds are "…made of muscles covered by a thin layer called mucosa. There is a right and left fold, forming a 'V' when viewed from above. At the rear portion of each vocal fold is a small structure made of cartilage called the arytenoid. Many small muscles, described below, are attached to the arytenoids. These muscles pull the arytenoids apart from each other during breathing, thereby opening the airway. During speech the arytenoids and therefore the vocal folds are brought close together. As the air passes by the vocal folds in this position, they open and close very quickly. The rapid pulsation of air passing through the vocal folds produces a sound that is then modified by the remainder of the vocal tract to produce speech." Take a look at the closed and open vocal cords in Figure 5-2.

Figure 5-2: Closed and open vocal cords
Downloaded from: http://img.webmd.com/dtmcms/live/webmd/consumer_assets/site_images/media/medical/hw/h9991587_001.jpg

The moving vocal folds can be viewed in the following edited video of laryngeal stroboscopy (source: https://www.youtube.com/watch?v=mJedwz_r2Pc): LaryngealStroboscopy edited.ogg

Some of the shapes that the glottis takes can be seen in Figure 5-4.

Figure 5-4: Some shapes of the glottis during speech
Downloaded from: https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcS9WOSMIG8a54vs0aacSekkNKyxaTJIqosHI57X1GE_xP-JPgi5

Two sets of muscles and cartilages control the movement of the larynx: one set, broadly known as the intrinsic muscles and cartilages, produces the horizontal contraction and expansion of the vocal folds, while the other, known as the extrinsic muscles or straps, moves the larynx up and down. The vertical movement of the larynx is responsible for many features of speech sounds, especially voice quality and certain types of sounds such as the implosives and ejectives discussed in Module 4. The horizontal movement of the vocal folds gives them different shapes, and these shapes give rise to different phonation types. The shapes of the vocal folds are usually described with reference to the 'glottis', the opening between the vocal folds. The horizontal configuration of the vocal folds is shown in Figure 5-3. Two main muscles, the thyroarytenoid muscle and the interarytenoid muscle, move back and forth or sideways; they are attached to the arytenoid cartilages, which can contract and expand. The two muscles and the cartilage are shown in Figure 5-3. The opening between the arytenoids is likewise part of the glottis.

Figure 5-3: Intrinsic muscles and cartilage around the vocal fold
https://www.evms.edu/patient_care/services/otolaryngology_ent/patient_education/voice__swallowing/anatomy/

5.2.2 States of the glottis and Phonation Types

We should keep in mind that the different types of phonation depend on factors such as the force of the airflow and the vibration of the vocal folds as much as on the setting of the vocal folds, that is, the glottis. The most important phonation types found to contrast sounds in the world's languages are the following: glottal stop, voiceless sounds, whisper, voiced sounds, breathy voice or murmur, and creaky or laryngealized sounds. These are described below.

A glottal stop, symbolized as [ʔ], is produced with the vocal folds tightly closed. As the vocal folds are tightly closed, air coming from the lungs cannot escape through them during the closure period. When the vocal folds are released, an audible plosion can be heard. Although called stops, glottal stops lack many of the features of stops: they cannot be voiced or aspirated. Since the place of articulation of the glottal stop is the vocal folds themselves, any modification in the state of the glottis (for voicing or aspiration) would do away with the stop closure itself.

Voiceless sounds are produced with the vocal folds held wide apart so that the air from the lungs passes freely through them, as in breathing. Although the most common voiceless sounds are found among obstruents, that is, plosives, fricatives and affricates, sonorants (e.g. laterals and nasals), too, can be voiceless. The voiceless obstruents are assigned independent symbols alongside their voiced counterparts, and they are placed on the left within a cell of the IPA chart, as, for example, [p b], [s z] or [ʧ ʤ]. For voiceless sonorants, a subscript diacritic, a small ring [ ̥ ], is often used, for example [n̥], a voiceless alveolar nasal. Some such sounds, of course, have independent symbols of their own, e.g. [ɬ], a voiceless lateral fricative, as compared with [ɮ], a voiced alveolar lateral fricative.
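Since several of these symbols and diacritics are easy to confuse on the page, the short sketch below lists a few of them together with their Unicode code points; it shows, in particular, that the voiceless diacritic is a combining ring placed below the base letter. Python is used here simply as a convenient way of listing characters, and the particular selection of symbols (the symbols mapping) is ours, chosen to match the examples just given.

import unicodedata

# A few of the IPA symbols mentioned above, shown with their Unicode code points.
# The voiceless diacritic is a combining ring placed below the base letter.
symbols = {
    "glottal stop": "\u0294",                       # [ʔ]
    "voiceless alveolar nasal": "n\u0325",          # [n̥] = n + combining ring below
    "voiceless lateral fricative": "\u026C",        # [ɬ]
    "voiced alveolar lateral fricative": "\u026E",  # [ɮ]
}

for label, chars in symbols.items():
    codepoints = " + ".join(f"U+{ord(c):04X} {unicodedata.name(c)}" for c in chars)
    print(f"[{chars}]  {label}: {codepoints}")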
Beginners in phonetics usually find it difficult to tell a voiceless plosive from a voiced one, mainly because when a voiceless plosive is released, the vocal folds come together for the articulation of the following vowel, which is voiced. As a result, the release of a plosive is always heard as voiced.

Voiced sounds are produced with the vocal folds held loosely together with the help of the arytenoid cartilages, unlike glottal stops, for which they are held tightly closed. Because the folds are only loosely closed, the air-stream passing through them causes them to vibrate by means of the Bernoulli principle. According to this principle, as the air passes out with force through the narrow opening between the vocal folds, the pressure between them drops; as the pressure drops, the vocal folds come together again, only to be forced apart once more. This repeated cycle produces the effect of voicing, a buzzing sound.
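The aerodynamic reasoning behind this vibratory cycle can be stated compactly. For an idealized, incompressible flow, Bernoulli's principle says that along the flow

P + \tfrac{1}{2}\rho v^{2} = \text{constant},

where P is the air pressure, \rho the density of the air and v the velocity of the airflow. Where the passage narrows between the loosely closed vocal folds, v rises, so P must fall; the drop in pressure draws the folds together, the subglottal pressure then blows them apart again, and the cycle repeats. This is only a schematic statement of the principle: real glottal airflow is considerably more complex.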
You can feel the difference between a voiced and a voiceless sound by putting your fingers on the outer projection of the larynx at the middle front of the neck, known as the Adam's apple. Say the sounds [s] and [z], each prolonged for 5-10 seconds, and you will feel the buzz for the prolonged [z], but not for the prolonged [s].

Aspirated sounds are for the most part found among stops, e.g. [pʰ kʰ ʧʰ]. They are produced with the vocal folds held in the position for voiceless sounds, with the difference that for aspiration the vocal folds do not come together immediately after the release but continue to be held apart, leaving the glottis open for a short period and allowing an extra burst of air to pass through.

The distinction between voiceless, voiced and aspirated sounds is conveniently measured in terms of the parameter of Voice Onset Time (VOT). VOT quantifies the interval between the release of the consonantal constriction and the onset of voicing. For voiced sounds, voicing begins before the end of the consonantal constriction (a negative VOT); for voiceless unaspirated consonants, voicing begins more or less simultaneously with the release of the constriction (a VOT close to zero); and for aspirated consonants, voicing begins well after the release (a long positive VOT). Figure 5-5 shows the difference between bilabial plosives with the three laryngeal states.

Figure 5-5: VOT for voiceless unaspirated, voiceless aspirated and voiced plosives
Downloaded from: http://www.indiana.edu/~hlw/PhonUnits/vot.gif

The glottis may be partially open or partially closed to produce other types of sounds.
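To make the VOT categories described above concrete, here is a minimal sketch of how a measured VOT value, in milliseconds, might be mapped onto the three laryngeal categories. The helper name classify_by_vot and the cut-off values are our own illustrative assumptions; actual category boundaries vary across languages and places of articulation.

# Minimal sketch: mapping Voice Onset Time (VOT) onto the three laryngeal categories.
# VOT = time of voicing onset minus time of release of the constriction, in milliseconds.
# NOTE: the cut-off values below are illustrative assumptions, not universal norms.

def classify_by_vot(vot_ms: float) -> str:
    if vot_ms < 0:
        return "voiced (voicing begins before the release: negative VOT)"
    if vot_ms <= 25:  # assumed short-lag boundary
        return "voiceless unaspirated (voicing begins at or just after the release)"
    return "voiceless aspirated (voicing begins well after the release: long positive VOT)"

for vot in (-85, 10, 60):  # rough illustrative values for [b], [p], [pʰ]
    print(f"VOT {vot:+d} ms -> {classify_by_vot(vot)}")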