Bimodal Bilingualism: Code-Blending Between Spoken English and American Sign Language


Karen Emmorey,¹ Helsa B. Borinstein,¹ and Robin Thompson¹,²
¹The Salk Institute for Biological Studies and ²University of California, San Diego

1. Introduction

The vast majority of bilingual studies involve two spoken languages. Such “unimodal” bilingualism automatically entails a severe production constraint because one cannot physically produce two spoken words or phrases at the same time. For unimodal (speech-speech) bilinguals, there is a single output channel, the vocal tract, for both languages. In contrast, for bimodal (speech-sign) bilinguals, there are two output channels: the vocal tract and the hands. In addition, for unimodal bilinguals both languages are perceived by the same sensory system (audition), whereas for bimodal bilinguals one language is perceived auditorily and the other is perceived visually. In this article, we present a preliminary investigation of bimodal bilingual communication among hearing people who are native users of American Sign Language (ASL) and who are also native English speakers.

First, it is important to emphasize that American Sign Language has a grammar that is independent of and quite distinct from English (see Emmorey (2002) for a review). For example, ASL allows much freer word order compared to English. English marks tense morphologically on verbs, whereas ASL (like many languages) expresses tense lexically via temporal adverbs. Conversely, ASL contains several verbal aspect markers (expressed as distinct movement patterns superimposed on a verb root) that are not found in English, but are found in many other spoken languages (e.g., habitual, punctual, and durational aspect). Obviously, ASL and English also differ in structure at the level of phonology.
Signed languages, like spoken languages, exhibit a level of sublexical structure that involves segments and combinatorial rules, but phonological features are manual rather than oral (see Brentari (1998), Corina & Sandler (1993) for reviews). Finally, English and ASL differ quite dramatically with respect to how spatial information is encoded. English, like many spoken languages, expresses locative information with prepositions, such as “in,” “on,” or “under.” In contrast, ASL encodes locative and motion information with verbal classifier constructions. In these constructions, handshape morphemes specify object type, and the position of the hands in signing space schematically represents the spatial relation between two objects. Movement of the hand specifies the movement of an object through space (within whole-entity classifier constructions, see Emmorey, 2003). Thus, English and ASL are quite distinct from each other within phonological, morphological, and syntactic domains.

In the current study, we chose to examine hearing ASL-English bilinguals because although Deaf¹ signers are generally bilingual in ASL and English, many Deaf people prefer to read and write English rather than use spoken English. Also, Deaf individuals do not acquire spoken English in the same way that a second language is acquired by unimodal bilinguals. For example, early speech bilinguals may be exposed to two languages in the home, or one language may be used in the home and another in the community. In contrast, spoken language is not accessible in the environment of a deaf person, and deaf children require special intervention, including training in speech articulation, speech perception, and lip reading, unlike hearing children acquiring a spoken language (see Blamey, 2003). Therefore, we examined hearing bilinguals who have Deaf parents, for whom speech and sign are equally accessible within the environment.
Bimodal bilingual children acquire a signed language and a spoken language in the same way that unimodal bilingual children acquire two spoken languages (Petitto, Katerelos, Levy, Gauna, Tetreault, & Ferraro, 2001; Newport & Meier, 1985). To our knowledge, this is the first study to examine bilingual communication in adults who acquired both signed and spoken languages naturally, without explicit instruction. Hearing adults who grew up in Deaf families constitute an important bilingual community. Many identify themselves as CODAs, or Children of Deaf Adults, who have a cultural identity defined in part by their bimodal bilingualism, as well as by shared childhood experiences in Deaf families. CODA is also the name of the organization that hosts events, workshops, and meetings for hearing sons and daughters of Deaf parents. This group of bilinguals shares cultural similarities with other bilingual communities.

The unique nature of bimodal bilingualism raises several questions regarding the nature of bilingual communication, which we have begun to investigate. First, what are the ramifications of removing a major articulatory constraint on bilingual language production? Specifically, we examined whether code-switching occurs when articulatory constraints are lifted. For unimodal code-switching, the speaker must stop using one language and switch to a second language, either within a sentence or cross-sententially. What happens when the phonologies of the two languages are expressed by different articulators, thus allowing simultaneous expression? Do bimodal bilinguals pattern like unimodal bilinguals with respect to the temporal sequencing of code-switches? Or do they produce simultaneous speech-sign expressions?

© 2005 Karen Emmorey, Helsa B. Borinstein, and Robin Thompson. ISB4: Proceedings of the 4th International Symposium on Bilingualism, ed. James Cohen, Kara T. McAlister, Kellie Rolstad, and Jeff MacSwan, 663-673. Somerville, MA: Cascadilla Press.
Second, does being a bimodal bilingual influence communication with monolinguals? In particular, we investigated whether the gestures that accompany speech in a conversation with a monolingual English speaker are affected by being an ASL-English bilingual. Research by McNeill has shown that co-speech gesture is ubiquitous and that gesture and speech are not separate, independent systems, but form an expressive unit (McNeill, 1992; McNeill, 2000). We investigated whether bimodal bilinguals might actually produce ASL signs as part of their co-speech “gesture” when conversing with a monolingual English speaker who has no knowledge of ASL. In her master’s thesis comparing ASL and gesture production, Naughton (1996) reported that one hearing (non-native) ASL signer produced a few clearly identifiable ASL signs while conversing in English with a non-signer. A micro-analysis of the timing of these ASL signs with respect to co-occurring speech indicated that the signs were produced with the same tight temporal alignment found for co-speech gesture. Naughton suggests that knowledge of ASL may change the gesture-speech interface for ASL-English bilinguals. We investigated this intriguing hypothesis with a larger group of native ASL-English bilingual participants.

Finally, we investigated how bilingual communication differs from simultaneous communication, or SimCom. SimCom (sometimes referred to as Total Communication) is the communication system frequently used in deaf education. The use of SimCom is an attempt to produce grammatically correct spoken English and ASL at the same time. Grammatically correct English is important for educational purposes, and ASL is usually mixed with some form of Manually Coded English (an invented sign system designed to represent the morphology and syntax of English). SimCom is also used by bilinguals with a mixed audience of hearing persons who do not know ASL and of Deaf persons for whom spoken English is not accessible.
The goal is to present the same information in both vocal and manual modalities, essentially a dual task, because the two modalities have different properties and the two linguistic systems are distinct and non-identical. How does SimCom, with its dual-task properties, differ from bilingual communication, where speakers have both languages at their disposal but are not forced to use the languages simultaneously?

In sum, our goal is to characterize the nature of bimodal bilingualism to provide insight into the nature of the bilingual mind and bilingual communication in general. Bimodal bilingualism offers a unique vantage point from which to study the temporal and linguistic constraints on code-mixing; the semantic, pragmatic, and sociolinguistic functions of bilingual communication (particularly when temporal constraints on simultaneous language production are removed); and the impact of bilingualism on language production in general (specifically on the production of co-speech gesture).

2. Method

2.1 Participants

Eleven fluent ASL-English bilinguals participated in the study (4 males, 7 females), with a mean age of 32 years (range 22-41 years). All were hearing and grew up in families with one or two Deaf parents. Participants rated themselves as fluent in both ASL and English (a rating of 6 or higher on a 7-point fluency scale). In addition, all of the participants had participated in CODA meetings or events.

2.2 Procedure

Participants were told that we were interested in how CODAs talked to each other, and they were asked to perform three tasks. In the first part of the study, each participant conversed with either another ASL-English bilingual whom they knew (either one of the experimenters (HB) or another friend) or with a monolingual English speaker whom they did not know and who did not know any ASL. Whether participants initially interacted with a bilingual