A Course in Phonetics, Third Edition
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
-
Description of Acoustic and Articulatory Parameters of Vowels in Mapudungun Speakers
Álvarez, G.; Ruiz, M.; Arias, A.; Lezcano, M. F. & Fuentes, R. Description of acoustic and articulatory parameters of vowels in Mapudungun speakers. Pilot study / Descripción de Parámetros Acústicos y Articulatorios de las Vocales en Hablantes de Mapudungun. Estudio Piloto. Int. J. Odontostomat., 14(2):205-212, 2020.

ABSTRACT: Mapudungun is a language used by the Mapuche people in some regions of Chile and Argentina. The aim of this study was to describe the vowel phonemes with regard to articulatory parameters (position of the tongue with respect to the palate, and jaw opening) and acoustic parameters (f0, F1, F2 and F3) in Mapudungun speakers in the Region of La Araucanía. Mapudungun has six vowel phonemes: the first five are similar to those of Spanish (/a e i o u/), and a sixth vowel /ɨ/ is added, with the vocalic allophones [ɨ] and [ə]. Three Mapudungun speakers were evaluated. Tongue movements were recorded by 3D Electromagnetic Articulography and the data were processed with MATLAB and PRAAT software. It was possible to describe the trajectory of each third of the tongue during the production of the vowels. It was observed that the sixth vowel /ə/ had minimal jaw opening during its pronunciation. In addition, the characterization of /ə/ as an unrounded mid-central vowel was corroborated. In this study, the tongues of the Mapudungun speakers were in a more posterior position than that found in other studies.
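The acoustic parameters named in the abstract (f0 and the formants F1-F3) are standard Praat measurements. Below is a minimal Python sketch of that kind of measurement, assuming the third-party parselmouth package (a Python interface to Praat) and a hypothetical recording vowel.wav containing a single vowel token; it illustrates the general technique, not the authors' actual MATLAB/PRAAT pipeline.

```python
# Minimal sketch: f0 and F1-F3 at a vowel's temporal midpoint.
# Assumes the third-party "parselmouth" package; "vowel.wav" is hypothetical.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("vowel.wav")
mid = 0.5 * call(snd, "Get total duration")   # measure at the midpoint

pitch = snd.to_pitch()                        # f0 track
f0 = call(pitch, "Get value at time", mid, "Hertz", "Linear")

formant = snd.to_formant_burg()               # Burg LPC analysis (default: 5 formants up to 5500 Hz)
f1, f2, f3 = (call(formant, "Get value at time", n, mid, "hertz", "Linear")
              for n in (1, 2, 3))

print(f"f0 = {f0:.0f} Hz, F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz, F3 = {f3:.0f} Hz")
```
-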
Vowels and Consonants
VOWELS AND CONSONANTS, THIRD EDITION. Praise for the previous edition: “This is a fascinating, accessible, and reader-friendly book by a master phonetician, about how speech sounds are made, and how they can be analyzed. I warmly recommend the book to everyone with an interest, professional or otherwise, in spoken language.” John Laver, Queen Margaret University College. Praise for the third edition: “This book conveys an amazing range of current science, including phonetics, psycholinguistics, and speech technology, while being engaging and accessible for novices. This edition maintains Ladefoged’s friendly, enthusiastic style while adding important updates.” Natasha Warner, University of Arizona. This popular introduction to phonetics describes how languages use a variety of different sounds, many of them quite unlike any that occur in well-known languages. Peter Ladefoged rightly earned his reputation as one of the world’s leading linguists, and students benefitted from his accessible writing and skill in communicating ideas. The third edition of his engaging introduction to phonetics has now been fully updated to reflect the latest trends in the field. It retains Peter Ladefoged’s expert writing and knowledge, and combines them with Sandra Ferrari Disner’s essential updates on topics including speech technology. Vowels and Consonants explores a wide range of topics, including the main forces operating on the sounds of languages; the acoustic, articulatory, and perceptual components of speech; and the inner workings of the most modern text-to-speech systems and speech recognition systems in use today. The third edition is supported by an accompanying website featuring new data, and even more reproductions of the sounds of a wide variety of languages, to reinforce learning and bring the descriptions to life, at www.wiley.com/go/ladefoged. -
Phonetics: Phonetic Transcription and Articulation of Sounds
Phonetics. Darrell Larsen, Linguistics 101. Outline: 1. What Is Phonetics? 2. Phonetic Transcription (phonetic alphabet; transcription). 3. Articulation of Sounds (articulation of consonants; articulation of vowels; other languages).

What Is Phonetics? Definition: the study of speech sounds. The branches of phonetics: 1. acoustic (the physics of sound); 2. auditory (how the ear processes sound); 3. articulatory (how we produce speech sounds).

Articulatory phonetics examines the following questions: How can we accurately transcribe speech sounds? What speech organs are involved in speech production? How do we manipulate the flow of air to produce sounds?

Why do we need a phonetic alphabet? Linguists use a phonetic transcription system to record speech sounds. In this class, we will use the International Phonetic Alphabet (IPA). Question: Why not just use the Roman alphabet?
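The slides' closing question has a classic answer: English spelling maps the same letter sequence to several different sound sequences, while IPA writes each pronunciation unambiguously. A toy Python illustration follows; the word list and the broadly General American transcriptions are mine, not the slides'.

```python
# Toy illustration: the letter sequence <ough> has no consistent
# pronunciation, so ordinary spelling cannot serve as a phonetic
# transcription. The IPA on the right disambiguates each word.
ough_words = {
    "though":  "ðoʊ",
    "through": "θɹu",
    "tough":   "tʌf",
    "cough":   "kɔf",
    "bough":   "baʊ",
}
for word, ipa in ough_words.items():
    print(f"{word:8s} -> [{ipa}]")
```
-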
Articulatory Phonetics
Articulatory Phonetics. Lecturer: Dr Anna Sfakianaki. HY578 Digital Speech Signal Processing, Spring Term 2016-17, CSD University of Crete.

What is Phonetics? Phonetics is a branch of Linguistics that systematically studies the sounds of human speech: (1) how speech sounds are produced (production/articulation); (2) how speech sounds are transmitted (acoustics); (3) how speech sounds are received (perception). It is an interdisciplinary subject, theoretical as much as experimental.

Why do speech engineers need phonetics? An engineer working on speech signal processing usually ignores the linguistic background of the speech he/she analyzes (Olaszy, 2005): How was the utterance planned in the speaker’s brain? How was it produced by the speaker’s articulation organs? What sort of contextual influences did it receive? How will the listener decode the message?

Phonetics in speech engineering: combined knowledge of articulatory gestures and acoustic properties of speech sounds supports categorization of speech sounds, segmentation, speech database annotation algorithms, speech recognition, and speech synthesis. Further applications include speech disorders (diagnosis, treatment), pronunciation teaching tools (L2 teaching, foreign languages), and speech intelligibility enhancement (hearing aids, other tools).

A week with a phonetician: Tuesday, articulatory phonetics (speech production; sound waves; places and manners of articulation; consonants & vowels; waveforms of consonants - VOT; suprasegmentals). Thursday, acoustic phonetics (formants; fundamental frequency; acoustics of vowels; articulatory vs acoustic charts; acoustics of consonants - formant transitions). Friday, more acoustic phonetics (interpreting spectrograms; the guessing game; individual differences).

Peter Ladefoged: Professor, UCLA (1962-1991); travelled in Europe, Africa, India, China, Australia, etc. Home page: http://www.linguistics.ucla.edu/people/ladefoge/
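Several of the course topics above (formants, formant transitions, interpreting spectrograms) rest on one computation: the short-time spectrogram. A minimal scipy sketch follows, assuming a hypothetical mono recording utterance.wav; the window length and file name are illustrative choices, not course material.

```python
# Minimal wideband spectrogram: short analysis windows smear individual
# harmonics but resolve formants in time, which is the display phoneticians
# read for formant transitions. "utterance.wav" is a hypothetical mono file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("utterance.wav")
freqs, times, sxx = spectrogram(samples.astype(float), fs=rate,
                                nperseg=int(0.005 * rate))  # ~5 ms window
print(f"{len(freqs)} frequency bins x {len(times)} analysis frames")
```
-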
Phonetics and Phonology (Seminar: Introduction to Linguistics, Andrew McIntyre)
Phonetics and Phonology. Seminar: Introduction to Linguistics, Andrew McIntyre.

1. Phonetics vs. phonology. Phonetics deals with three main areas. Articulatory phonetics: speech organs & how they move to produce particular sounds. Acoustic phonetics: what happens in the air between speaker & hearer; measurable using devices such as a sonograph, which analyses frequencies. Auditory phonetics: how sounds are perceived by the ear, how the brain interprets the information coming from the ear. Phonology: the study of how particular sounds are used (in particular languages, in languages generally) to distinguish between words; the study of how sounds form systems in (particular) languages. Examples of phonological observations: the underlined sound sequence in German Strumpf can occur in the middle of words in English (ashtray) but not at the beginning or end; in pan and span the p-sound is pronounced slightly differently.

Voicing: in voiced sounds, the vocal cords (= vocal folds, Stimmbände) are pulled together and vibrate, unlike in voiceless sounds. Compare zoo/sue, ban/pan. Tests for voicing: put a hand on the larynx (you feel more vibrations with voiced consonants); say [fvfvfv] continuously with your ears blocked ([v] echoes inside your head, unlike [f]).

4.2 Description of English consonants (organised by manners of articulation). The accompanying handout gives indications of the positions of the speech organs referred to below, and the IPA description of all sounds in English and other languages. 4.2.1 Plosives. A plosive (Verschlusslaut) involves complete closure somewhere in the vocal tract, after which the air is released. (2) Bilabial (both lips are the active articulators): [p, b] in pie, bye. (3) Alveolar (the passive articulator is the alveolar ridge (= gum ridge)): [t, d] in to, do. (4) Velar (the back of the tongue approaches the soft palate (velum)): [k, g] in cat, go.
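The handout's three place labels combine with its two voicing values into a six-way paradigm for English plosives. The small sketch below (mine, not the handout's) encodes that paradigm as a lookup table.

```python
# The English plosive paradigm from the handout: voicing x place.
plosives = {
    "p": ("voiceless", "bilabial"),
    "b": ("voiced",    "bilabial"),
    "t": ("voiceless", "alveolar"),
    "d": ("voiced",    "alveolar"),
    "k": ("voiceless", "velar"),
    "g": ("voiced",    "velar"),
}
for segment, (voicing, place) in plosives.items():
    print(f"[{segment}] is a {voicing} {place} plosive")
```
-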
Natasha R. Abner
Natasha R. Abner. Interests: sign languages, morpho-syntax, syntax-semantics interface, language development & emergence.

Employment: 2017–present, University of Michigan, Ann Arbor, MI: Assistant Professor, Linguistics Department; Sign Language & Multi-Modal Communication Lab; Language Across Modalities. 2014–2017, Montclair State University, Montclair, NJ: Assistant Professor, Linguistics Department. 2012–2014, University of Chicago, Chicago, IL: Postdoctoral Scholar, Goldin-Meadow Laboratory, Department of Psychology.

Education: 2007–2012, PhD in Linguistics, University of California, Los Angeles, Los Angeles, CA. Fall 2010, visiting student, Département d’études cognitives, École normale supérieure, Paris, France. 2001–2005, BA in Linguistics, Summa Cum Laude, University of Michigan, Ann Arbor, Ann Arbor, MI.

Theses. Dissertation: “There Once Was a Verb: The Predicative Core of Possessive and Nominalization Structures in American Sign Language”; committee: Hilda Koopman and Edward Stabler (chairs), Karen Emmorey, and Edward Keenan. Master’s thesis: “Right Where You Belong: Right Peripheral WH-Elements and the WH-Question Paradigm in American Sign Language”; committee: Edward Stabler and Anoop Mahajan (chairs), and Pamela Munro. Honors thesis: “Resultatives Gone Minimal”; advisors: Acrisio Pires and Samuel Epstein.

Grants & Awards: 2019, Honored Instructor, University of Michigan. 2018, New Initiative New Infrastructure (NINI) Grant ($12,500), University of Michigan. 2016, Summer Grant Proposal Development Award, Montclair State University. 2011–2012, Dissertation Year Fellowship, UCLA. 2010–2011, Charles E. and Sue K. Young Award, UCLA (awarded annually to one student from the UCLA Humanities Division for exemplary achievement in scholarship, teaching, and University citizenship).

Contact: Linguistics Department, 440 Lorch Hall, Ann Arbor, MI; (734) 764-0353; [email protected]. Last updated: April 27, 2019. -
Sounds Difficult? Why Phonological Theory Needs 'Ease of Articulation'
SOAS Working Papers in Linguistics Vol. 14 (2006): 207-226. Sounds difficult? Why phonological theory needs ‘ease of articulation’. David Shariatmadari, [email protected].

Introduction. In this paper I will try to show that theories of phonological structure must incorporate some measure of phonetic naturalness, specifically the idea that there exists a tendency to conserve energy in the use of the articulatory organs, with ‘easy’ sounds being those that require less physical effort to produce on the part of the speaker. I will call this the Ease of Articulation Hypothesis (EoA). A strong form of EoA would state that articulatory phonetics is the sole motivating factor for sound patterns, including the structure of phonemic inventories, phonotactics and morphophonemic variation. This is clearly false. If it were the case, phonology would be indistinguishable from phonetics. There would be no reason for a given phonological process not to apply in all languages, since all human beings have the same vocal apparatus, and what is phonetically natural for a speaker of Turkish must also be phonetically natural for a speaker of Japanese. And yet there are clearly many phonological differences between these two, and among all other languages. In contrast, a weak version of EoA might hold that articulation is one of a number of pressures competing for influence over the shape of the speech string. Some of the time it will win out. Whether or not it does depends, crucially, on the structure of the language concerned. Since every language has a different structure, the way in which phonetic influence is felt will be different for every language. -
Intro to Linguistics – Phonetics Jirka Hana – October 9, 2011
Intro to Linguistics: Phonetics. Jirka Hana, October 9, 2011. Overview of topics: 1. What is Phonetics; 2. Subfields of Phonetics; 3. Phonetic alphabet; 4. Czech and English Speech Sounds; 5. Narrow vs. Broad Transcription; 6. Some Other Speech Sounds.

1. What is Phonetics. Phonetics is the study of speech sounds: how they are produced, how they are perceived, and what their physical properties are. The technical word for a speech sound is phone (hence, phonetics). Cf. telephone, headphone, phonograph, homophone. Place of phonetics in the language system (each level feeds the one above it in understanding language expressions, and the one below it in producing them):

Pragmatics (meaning in context)
Semantics (literal meaning)
Syntax (sentence structure)
Morphology (word structure)
Phonology (sound patterns; language-dependent abstraction over sounds)
Phonetics (sounds; (nearly) language independent)

2. Subfields of Phonetics. Articulatory phonetics: the study of the production of speech sounds; the oldest form of phonetics. A typical observation: “The sound at the beginning of the word ‘foot’ is produced by bringing the lower lip into contact with the upper teeth and forcing air out of the mouth.” Auditory phonetics: the study of the perception of speech sounds; related to neurology and cognitive science. A typical observation: “The sounds [s, ʃ, z, ʒ] are called sibilants because they share the property of sounding like a ‘hiss’.” Acoustic phonetics: the study of the physical properties of speech sounds; a relatively new subfield (circa 50 years) that uses sophisticated equipment (spectrograph, etc.) and is related to acoustics (the subfield of physics dealing with sound waves). A typical observation: “The strongest concentration of acoustic energy in the sound [s] is above 4000 Hz.”

3. Phonetic Alphabet. Why do we need a new alphabet? Because we want to be able to write down how things are pronounced, and the traditional Roman alphabet is not good enough for it: words are pronounced differently depending on region, speaker, mood, etc.
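The acoustic observation quoted above (the energy of [s] is concentrated above 4000 Hz) can be checked directly from a waveform. A minimal numpy/scipy sketch follows, assuming a hypothetical mono recording s_fricative.wav containing just the fricative.

```python
# Fraction of spectral energy above 4 kHz; for a clean [s] this should be
# well over half. "s_fricative.wav" is a hypothetical mono recording.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("s_fricative.wav")
spectrum = np.abs(np.fft.rfft(x.astype(float))) ** 2   # power spectrum
freqs = np.fft.rfftfreq(len(x), d=1.0 / rate)
ratio = spectrum[freqs >= 4000.0].sum() / spectrum.sum()
print(f"{100 * ratio:.1f}% of the spectral energy lies above 4000 Hz")
```
-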
An Electropalatographic and Acoustic Study of Affricates and Fricatives in Two Catalan Dialects
An electropalatographic and acoustic study of affricates and fricatives in two Catalan dialects. Daniel Recasens, Universitat Autònoma de Barcelona & Institut d’Estudis Catalans, Barcelona, [email protected]; Aina Espinosa, Institut d’Estudis Catalans, Barcelona. The present study is an electropalatographic and acoustic investigation of the fricatives /s, ʃ/ and the affricates /ts, dz, tʃ, dʒ/ based on data from five speakers of Majorcan Catalan and five speakers of Valencian Catalan. Results show that the articulatory characteristics of fricatives and affricates agree in several respects: the sounds traditionally labeled /ʃ/ and /tʃ, dʒ/ are alveolopalatal, and are articulated at a less anterior location, are less constricted and show more dorsopalatal contact than the alveolars /s/ and /ts, dz/; the two place categories are closer to each other in Valencian than in Majorcan. Compared to voiceless affricates, voiced affricates are more anterior and more constricted, and show less dorsopalatal contact. Data also show that closure location for /tʃ, dʒ/ occurs at the alveolar zone, and that articulatory differences among affricates are better specified at frication than at closure. Strict homorganicity between the stop and frication components of affricates appears to hold provided that constriction location at frication is compared with place of articulation at closure offset. In comparison to voiceless affricates, voiced affricates were shorter, and exhibited a longer closure and a shorter frication period, in Majorcan; in Valencian, on the other hand, closures were shortest for /dʒ/, and frication was systematically longer for voiceless vs. voiced affricates. These duration data appear to conform to a universal trend in Valencian but not in Majorcan, where voiced affricates are lengthened intentionally. -
The Control of Speech
Speculations on the control of speech. Peter Ladefoged. Speech, like most skilled movements, is best described as a goal-directed activity. The speech centers in the brain send out instructions for the speech apparatus to achieve certain goals. But it is not at all clear how these goals are defined. One possibility has been outlined by proponents of Articulatory Phonology (Browman & Goldstein 1986, 1992; Saltzman & Kelso 1987, inter alia). They suggest that the goals are a series of articulatory gestures, being careful to point out that we must distinguish between the higher-level gestural goals and the lower-level system that implements them. When you produce a word (e.g. cat), their notion is that the speech centers in the brain do not issue instructions for the use of particular muscles. Instead they issue instructions equivalent to “Make the following complex gestures: voiceless aspirated velar stop, follow this by a low front vowel, finally make a voiceless glottaled alveolar stop”. The different parts of each of these instructions specify goals for different parts of the speech mechanism. “Voiceless aspirated” is an instruction for a certain laryngeal gesture, “velar stop” a lingual gesture, and so on. We can never be sure what the goals of the speech centers in the brain are. We cannot find out by asking speakers what they are trying to control. If you ask an unsophisticated speaker of English what they are controlling when they say cat, they will probably say that they don’t know, or that they are trying to produce a ‘c’ followed by an ‘a’ and then by a ‘t’. -
Obituary for Peter Ladefoged (1925-2006)
Obituary for Peter Ladefoged (1925-2006). Peter Ladefoged died suddenly on January 24, 2006, at the age of 80 in London, on his way home from fieldwork in India. He was born on September 17, 1925, in Sutton, Surrey, England. In 1951 he received his MA at the University of Edinburgh and in 1959 his PhD from the same university. At Edinburgh he studied phonetics with David Abercrombie, a pupil of Daniel Jones and so also connected to Henry Sweet. Peter’s dissertation, The Nature of Vowel Quality, focused on the cardinal vowels and their articulatory vs. auditory basis. In 1953, he married Jenny MacDonald. In 1959-62 he carried out field research in Africa that resulted in A Phonetic Study of West African Languages. In 1962 he moved to America permanently and joined the UCLA English Department, where he founded the UCLA Phonetics Laboratory. Not long after he arrived at UCLA, he was asked to work as the phonetics consultant for the movie My Fair Lady (1964). Three main cornerstones characterize Peter’s career: fieldwork on little-studied sounds, instrumental laboratory phonetics, and linguistic phonetic theory. He was particularly concerned about documenting the phonetic properties of endangered languages. His field research covered a world-wide area, including the Aleutians, Australia, Botswana, Brazil, China, Ghana, India, Korea, Mexico, Nepal, Nigeria, Papua New Guinea, Scotland, Senegal, Sierra Leone, Tanzania, Thailand, Uganda and Yemen. Among his many distinctive contributions to phonetics was his insistence on the huge diversity of phonetic phenomena in the languages of the world. Ladefoged’s fieldwork originated or refined many data collections and analytic techniques. -
A Tutorial on Acoustic Phonetic Feature Extraction for Automatic Speech Recognition (ASR) and Text-To-Speech (TTS) Applications in African Languages
Linguistic Portfolios, Volume 9, Article 11 (2020). Ettien Koffi, St. Cloud State University, [email protected].

Recommended citation: Koffi, Ettien (2020). “A Tutorial on Acoustic Phonetic Feature Extraction for Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) Applications in African Languages.” Linguistic Portfolios: Vol. 9, Article 11. Available at: https://repository.stcloudstate.edu/stcloud_ling/vol9/iss1/11

ABSTRACT: At present, Siri, Dragon Dictate, Google Voice, and Alexa-like functionalities are not available in any indigenous African language. Yet a 2015 Pew Research study found that between 2002 and 2014, mobile phone usage increased tenfold in Africa, from 8% to 83%. The Acoustic Phonetic Approach (APA) discussed in this paper lays the foundation that will make Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) applications possible in African languages. The paper is written as a tutorial so that others can use the information therein to help digitalize many of the continent’s indigenous languages.
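As a concrete point of entry to the feature-extraction theme, the sketch below computes MFCCs, the most common acoustic front-end for ASR systems, assuming the third-party librosa package and a hypothetical recording utterance.wav. It illustrates the generic ASR front-end, not the paper's specific Acoustic Phonetic Approach.

```python
# MFCC extraction: one 13-dimensional feature vector per analysis frame,
# the standard ASR front-end. "utterance.wav" is a hypothetical recording.
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)     # resample to 16 kHz
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)   # (13, n_frames)
```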