Effect of Multimodal Training on the Perception and Production of French Nasal Vowels by American English Learners of French
By Solène Inceoglu

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Second Language Studies – Doctor of Philosophy

2014

ABSTRACT

Face-to-face interaction often involves the simultaneous perception of the speaker's voice and facial cues (e.g., lip movements), making speech perception a multimodal experience (Rosenblum, 2005). Research in second language (L2) speech perception suggests that participants benefit from visual information (Hazan, Sennema, Iba, & Faulkner, 2005; Wang, Behne, & Jiang, 2008), that perception training can transfer to improvement in production (Iverson, Pinet, & Evans, 2011), and that its effects can generalize to novel stimuli (Hardison, 2003). Most studies so far have investigated consonants and, despite a couple of studies looking at the multimodal perception of vowels (Hirata & Kelly, 2010; Soto-Faraco et al., 2007), little is known about the effect of multimodal training on the acquisition of vowels. More specifically, no study has examined the contribution of visual cues to the perception and production of L2 French.

The aim of this study is to provide a better understanding of the role of facial cues by examining the effect of training on the perception and production of French nasal vowels by American learners of French. The following research questions guide this study: 1) Does Audio-Visual (AV) perceptual training lead to greater improvement in the perception of nasal vowels than Audio-Only (A-only) training does? 2) Does AV perceptual training lead to greater improvement in the production of nasal vowels than A-only training does? 3) Does perception accuracy vary in relation to consonantal context?
4) Is training generalizable to novel stimuli?

Sixty intermediate American learners of French were randomly assigned to one of three groups: AV training, A-only training, and control. All participants completed a production pretest and posttest, and a perception pretest, posttest, and generalization test. The perception tests (monosyllabic words with various consonantal contexts) were presented in three modalities, in one of two counterbalanced orders: AV, A, V or A, AV, V. During the three weeks between the pretest and posttest, the AV and A-only groups received six sessions of perception training.

The results of the perception task showed that, unlike the control group, both the A-only and AV groups improved significantly from the pretest to the posttest, but that the differences between the AV and A-only groups were not statistically significant. When each vowel was compared in each of the three modalities, however, there was a trend in favor of the AV training group. The analysis of the consonantal context revealed that, for both training groups, perception of the vowel was more accurate when the initial consonant was a velar (non-labial occlusive) and less accurate with palatals (labial fricatives). Training was also shown to generalize to new stimuli with novel consonantal contexts. In addition, although both groups improved at the production posttest, the oral production of the AV training group improved significantly more than that of the A-only training group, suggesting that AV perceptual training (i.e., seeing facial gestures) leads to greater improvement in pronunciation.

To learners of French all over the world, and especially to Johannes

ACKNOWLEDGMENTS

Although I had always been interested in how people learn languages, I was only formally introduced to the field of second language acquisition during an exchange program at the University of Western Ontario, when I took a class taught by Dr. Jeff Tennant.
Although he doesn't know it, it was in his class that I decided I would one day pursue a PhD in second language studies. Now that my graduate studies are coming to an end, I would like to recognize my professors, who have provided me with great learning and teaching opportunities and continuous support. I am deeply indebted to my chair, Dr. Debra Hardison, for her support and encouragement over the last five years, for her countless letters of recommendation, and for her expertise in audiovisual speech perception. The idea for this dissertation originated in her L2 speech perception class, and I'm lucky to have had her as a mentor. I would also like to thank the other members of my committee, Dr. Aline Godfroid, Dr. Shawn Loewen, and Dr. Anne Violin-Wigent, for their feedback and support during this project.

This dissertation would not have been possible without the help of several people: Elodie Hablot, who kindly let me video-record her (twice!); the French instructors who facilitated recruitment in their classes; and, above all, the students who participated in my study. Their enthusiasm, their willingness to come to the lab eight times (and not drop out!), and their genuine interest in my study made them the best participants ever! I also need to thank Hisham Abboud (from Cedrus) for his help in programming SuperLab and Russ Werner for his technical assistance, and to acknowledge the financial support of a Language Learning Dissertation Grant, a Dissertation Completion Fund from the College of Arts and Letters at Michigan State University, and a Research Enhancement Award from the Graduate School at Michigan State University.

I am grateful for the friendships that have developed throughout my years in the Second Language Studies program. I deeply enjoyed sharing this journey with my cohort: Hyojung Lim, Wen-Hsin (Kelly) Chen, Jimin Kahng, and Yeon Heo. Thanks for the mutual support, for the many lunches and dinners, and for teaching me about your cultures!
Special thanks go to Soo Hyon Kim, Luke Plonsky, Ryan Miller, and Tomoko Okumo for their help with job searching and graduation. Thanks to all my other SLS/applied linguistics friends at Michigan State and elsewhere. I can't wait for the next conference where we can meet again and catch up! I would also like to thank my non-SLS friends at MSU who helped me balance academic and social life and with whom life in East Lansing, MI was so much better: Swasti Mishra, Chili Cai, Mahdi Moazzami, Sarah Mécheneau, Shinya Yuge, Aditya Singh, Irene Carbonell, Samaneh Rahimi, and Valentina Denzel. Thanks also go to my family members in France, Turkey, Germany, and the USA, who supported me throughout my graduate studies and were always proud of me.

Finally, all of my most heartfelt thanks go to Johannes, who encouraged me to pursue a PhD in the US even if it meant being 6,817 km (4,236 miles) apart, and who supported me throughout the ups and downs of (graduate) life. Danke, dass du immer für mich da bist. (Thank you for always being there for me.)

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
INTRODUCTION
CHAPTER 1: REVIEW OF THE LITERATURE
1.1 Models of cross-language speech perception
1.2 Cross-language studies on auditory speech perception
1.2.1 Previous studies
1.2.2 Training studies
1.2.3 Effect of perceptual training on production
1.3 Multimodal speech perception
1.3.1 Theories of multimodal speech perception
1.3.2 Audio-visual (L2) studies on consonants
1.3.3 Audio-visual (L2) studies on vowels
1.4 Effect of consonantal context on speech perception
1.5 French nasal vowels
1.5.1 Generality
1.5.2 Articulatory and acoustic characteristics
1.5.3 Variation and transcription
1.5.4 Previous L2 studies on the perception and production of French nasal vowels
CHAPTER 2: CURRENT STUDY
2.1 Research questions and hypotheses
2.2