Bimodal Bilingualism: Code-Blending Between Spoken English and American Sign Language
Karen Emmorey,¹ Helsa B. Borinstein,¹ and Robin Thompson¹,²
¹The Salk Institute for Biological Studies and ²University of California, San Diego

1. Introduction

The vast majority of bilingual studies involve two spoken languages. Such "unimodal" bilingualism automatically entails a severe production constraint because one cannot physically produce two spoken words or phrases at the same time. For unimodal (speech-speech) bilinguals, there is a single output channel, the vocal tract, for both languages. In contrast, for bimodal (speech-sign) bilinguals, there are two output channels: the vocal tract and the hands. In addition, for unimodal bilinguals both languages are perceived by the same sensory system (audition), whereas for bimodal bilinguals one language is perceived auditorily and the other is perceived visually. In this article, we present a preliminary investigation of bimodal bilingual communication among hearing people who are native users of American Sign Language (ASL) and who are also native English speakers.

First, it is important to emphasize that American Sign Language has a grammar that is independent of and quite distinct from English (see Emmorey (2002) for a review). For example, ASL allows much freer word order compared to English. English marks tense morphologically on verbs, whereas ASL (like many languages) expresses tense lexically via temporal adverbs. Conversely, ASL contains several verbal aspect markers (expressed as distinct movement patterns superimposed on a verb root) that are not found in English, but are found in many other spoken languages (e.g., habitual, punctual, and durational aspect). Obviously, ASL and English also differ in structure at the level of phonology. Signed languages, like spoken languages, exhibit a level of sublexical structure that involves segments and combinatorial rules, but phonological features are manual rather than oral (see Brentari (1998) and Corina & Sandler (1993) for reviews). Finally, English and ASL differ quite dramatically with respect to how spatial information is encoded. English, like many spoken languages, expresses locative information with prepositions, such as "in," "on," or "under." In contrast, ASL encodes locative and motion information with verbal classifier constructions. In these constructions, handshape morphemes specify object type, and the position of the hands in signing space schematically represents the spatial relation between two objects. Movement of the hand specifies the movement of an object through space (within whole-entity classifier constructions; see Emmorey, 2003). Thus, English and ASL are quite distinct from each other within phonological, morphological, and syntactic domains.

In the current study, we chose to examine hearing ASL-English bilinguals because although Deaf¹ signers are generally bilingual in ASL and English, many Deaf people prefer to read and write English, rather than use spoken English. Also, Deaf individuals do not acquire spoken English in the same way that a second language is acquired by unimodal bilinguals. For example, early speech bilinguals may be exposed to two languages in the home, or one language may be used in the home and another in the community.
In contrast, spoken language is not accessible in the environment of a deaf person, and deaf children require special intervention, including training in speech articulation, speech perception, and lip reading, unlike hearing children acquiring a spoken language (see Blamey, 2003). Therefore, we examined hearing bilinguals who have Deaf parents, for whom speech and sign are equally accessible within the environment. Bimodal bilingual children acquire a signed language and a spoken language in the same way that unimodal bilingual children acquire two spoken languages (Petitto, Katerelos, Levy, Gauna, Tetreault, & Ferraro, 2001; Newport & Meier, 1985).

To our knowledge, this is the first study to examine bilingual communication in adults who acquired both signed and spoken languages naturally, without explicit instruction. Hearing adults who grew up in Deaf families constitute an important bilingual community. Many identify themselves as CODAs, or Children of Deaf Adults, who have a cultural identity defined in part by their bimodal bilingualism, as well as by shared childhood experiences in Deaf families. CODA is also the name of the organization that hosts events, workshops, and meetings for hearing sons and daughters of Deaf parents. This group of bilinguals shares cultural similarities with other bilingual communities.

The unique nature of bimodal bilingualism raises several questions regarding the nature of bilingual communication, which we have begun to investigate. First, what are the ramifications of removing a major articulatory constraint on bilingual language production? Specifically, we examined whether code-switching occurs when articulatory constraints are lifted. For unimodal code-switching, the speaker must stop using one language and switch to a second language, either within a sentence or cross-sententially. What happens when the phonologies of the two languages are expressed by different articulators, thus allowing simultaneous expression? Do bimodal bilinguals pattern like unimodal bilinguals with respect to the temporal sequencing of code-switches? Or do they produce simultaneous speech-sign expressions?

Second, does being a bimodal bilingual influence communication with monolinguals? In particular, we investigated whether the gestures that accompany speech in a conversation with a monolingual English speaker are affected by being an ASL-English bilingual. Research by McNeill has shown that co-speech gesture is ubiquitous and that gesture and speech are not separate, independent systems, but form an expressive unit (McNeill, 1992; McNeill, 2000). We investigated whether bimodal bilinguals might actually produce ASL signs as part of their co-speech "gesture" when conversing with a monolingual English speaker who has no knowledge of ASL. In her master's thesis comparing ASL and gesture production, Naughton (1996) reported that one hearing (non-native) ASL signer produced a few clearly identifiable ASL signs while conversing in English with a non-signer. A micro-analysis of the timing of these ASL signs with respect to co-occurring speech indicated that the signs were produced with the same tight temporal alignment found for co-speech gesture.
Naughton suggests that knowledge of ASL may change the gesture-speech interface for ASL-English bilinguals. We investigated this intriguing hypothesis with a larger group of native ASL-English bilingual participants.

Finally, we investigated how bilingual communication differs from simultaneous communication, or SimCom. SimCom (sometimes referred to as Total Communication) is the communication system frequently used in deaf education. The use of SimCom is an attempt to produce grammatically correct spoken English and ASL at the same time. Because grammatically correct English is important for educational purposes, the ASL is usually mixed with some form of Manually Coded English (an invented sign system designed to represent the morphology and syntax of English). SimCom is also used by bilinguals with a mixed audience of hearing persons who do not know ASL and Deaf persons for whom spoken English is not accessible. The goal is to present the same information in both the vocal and manual modalities, which is essentially a dual task because the two modalities have different properties and the two linguistic systems are distinct and non-identical. How does SimCom, with its dual-task properties, differ from bilingual communication, where speakers have both languages at their disposal but are not forced to use the languages simultaneously?

In sum, our goal is to characterize the nature of bimodal bilingualism in order to provide insight into the nature of the bilingual mind and bilingual communication in general. Bimodal bilingualism offers a unique vantage point from which to study the temporal and linguistic constraints on code-mixing; the semantic, pragmatic, and sociolinguistic functions of bilingual communication (particularly when temporal constraints on simultaneous language production are removed); and the impact of bilingualism on language production in general (specifically, on the production of co-speech gesture).

2. Method

2.1 Participants

Eleven fluent ASL-English bilinguals participated in the study (4 males, 7 females), with a mean age of 32 years (range 22-41 years). All were hearing and grew up in families with one or two Deaf parents. Participants rated themselves as fluent in both ASL and English (a rating of 6 or higher on a 7-point fluency scale). In addition, all of the participants had participated in CODA meetings or events.

2.2 Procedure

Participants were told that we were interested in how CODAs talk to each other, and they were asked to perform three tasks. In the first part of the study, each participant conversed either with another ASL-English bilingual whom they knew (either one of the experimenters (HB) or another friend) or with a monolingual English speaker whom they did not know and who did not know any ASL. Whether participants initially interacted with a bilingual