<<

RUNNING HEAD: The language of perception in ASL

The language of perception in American Sign Language

Karen Emmorey1, Brenda Nicodemus2, Lucinda O’Grady1

1Laboratory for Language and Cognitive Neuroscience, San Diego State University

2Gallaudet University

Author details:

Karen Emmorey
Laboratory for Language and Cognitive Neuroscience
San Diego State University
6495 Alvarado Road #200
San Diego, CA 92120
[email protected]
619-594-8080

To appear in A. Majid & Stephen C. Levinson (Eds), Language of Perception: The comparative codability of the senses across cultures. Oxford University Press.

Abstract

We investigated linguistic codability for sensory information (colour, shape, touch, taste, smell, and sound) and the use of iconic labels in American Sign Language (ASL) by deaf native signers. Colour was highly codable in ASL, but few iconic labels were produced. Shape labels were highly iconic (lexical signs and classifier constructions), and touch descriptions relied on iconic classifier constructions that depicted the shape of the tactile source object. Lexical taste-specific signs also exhibited iconic properties (articulated near the mouth), but taste codability was relatively low. No smell-specific lexical signs were elicited (all descriptions were source-based). Descriptions of sound stimuli were elicited through tactile vibrations and were often described using classifier constructions that visually depicted different sound qualities. Results indicated that iconicity of linguistic forms was not constant across the senses; rather, iconicity was most frequently observed for shape, touch, and sound stimuli, and least frequently for colour and smell.

Keywords: American Sign Language, sensory perception, codability, iconicity, classifier constructions, lexical signs

Biography:

KAREN EMMOREY Karen Emmorey is Distinguished Professor in the School of Speech, Language, and Hearing Sciences at San Diego State University and the Director of the Laboratory for Language and Cognitive Neuroscience. Her research interests include identifying the neural systems that support human language (both signed and spoken), the neural and cognitive consequences of bimodal bilingualism (acquiring both a spoken and a signed language), and mapping the neural reading circuits for skilled and less-skilled deaf readers.

BRENDA NICODEMUS Brenda Nicodemus is Associate Professor in the Department of Interpretation at Gallaudet University and Director of the Interpretation and Translation Research Center. She has conducted research on translation asymmetry in bimodal bilinguals, psycholinguistic methods in signed language research, and cross-linguistic variation in interpretation. Her publications include Prosodic Markers and Utterance Boundaries in American Sign Language Interpreting (Gallaudet University Press, 2009) and, with co-editor Laurie Swabey, Advances in Interpreting Research (Benjamins, 2011). Email: [email protected]

LUCINDA O’GRADY Lucinda O'Grady is Research Assistant in the Laboratory for Language and Cognitive Neuroscience at San Diego State University. Her research involves studies of fingerspelling and reading, as well as psycholinguistic studies of sign language processing.

1. The language and its speakers

American Sign Language (ASL) is the predominant language used by Deaf communities in the United States and English-speaking Canada. ASL is articulated with the hands, face, and body and is perceived visually (or tactilely by Deaf-blind individuals). In ASL, a closed set of handshapes, locations, and movements (along with grammatical features) are used to form a lexicon of signs (synonymous with words), which are combined in rule-governed ways to form sentences. Today ASL is acknowledged to be a natural human language; however, for most of its history it was thought to be lacking in linguistic structure at virtually every level

(Liddell 1984). Until the latter half of the 20th century, signs were regarded as nothing more than unanalyzable pantomime-like gestures.

Recognition of ASL as a language was prompted by the groundbreaking work of William Stokoe, a professor at Gallaudet University, an institution of higher education with programs specifically designed for deaf and hard of hearing students. Based on observation and analysis of deaf students’ signing, Stokoe and his colleagues published a book in 1965 that, for the first time, described ASL as a fully developed language (Stokoe, Casterline and Croneberg 1965).

Initially, the claim that signing was a language in its own right was mostly ignored, and sometimes ridiculed, by the larger academic community; however, the assertion gained acceptance over time as further linguistic evidence came to light (Maher 1996). The recognition of ASL as a language was a turning point in how deaf people viewed themselves and their community, as well as a benchmark in numerous disciplines including linguistics, psychology, and cognitive science.

While there are no reliable figures, it has been estimated that there are at least 750,000

Americans and Canadians who are deaf and use ASL as their primary language (Canadian

Association of the Deaf 2012; Mitchell 2004; Mitchell, Young, Bachleda and Karchmer 2006).

In North America, deaf people are surrounded by English in their daily lives, and therefore frequently communicate with members of mainstream society in English by speaking or writing the language. However, most members of the Deaf community use ASL as their preferred means of communication with other people who know ASL. ASL is a language of limited diffusion, and while mainstream society often views deafness from a disability perspective, many deaf individuals consider themselves as members of a linguistic and cultural minority group with specific norms and values (Obasi 2008; Padden and Humphries 2005). Of paramount importance to the Deaf community in North America is the use and maintenance of ASL. As expressed by

Kannapell (1980), “ASL has a unifying function since deaf people are unified by their common language. It is important to understand that ASL is the only thing we have that belongs to deaf people completely” (p. 112).

2. Ethnographic background

Little is known about the deaf people who lived in North America before 1817, but it can be assumed that some were born deaf, others became deaf due to illness or accidents, while some deaf people came from other countries either through immigration or the slave trade (Lane

1984). Deaf people born in North America probably developed a sign language within their own communities, while deaf individuals from other countries may have brought their indigenous sign languages with them. Because large numbers of deaf people were rarely in close proximity prior to 1817, it may be that several different types of sign language were in use in North

America at that time, including contact signing variations (Valli and Lucas 1992). These variations

arise from contact between a sign language and an oral (or written) language, or when different sign languages come into contact (Ann 1998).

In 1817, a hearing American pastor, Thomas Hopkins Gallaudet, and a deaf French educator, Laurent Clerc, established a school for deaf children in Hartford, Connecticut, now called the American School for the Deaf. Bringing their various idiolects, deaf students were immersed in sign language at the school, which no doubt included some French signs used by

Clerc. The need for students and teachers to communicate with one another provided fertile ground for developing shared norms of vocabulary and grammar – the roots of ASL structure as we know it today. As the students graduated from the American School, they established other deaf schools across the country, thus spreading the use of sign language and creating a network of schools for deaf children.

An assault against deaf education and the use of sign language took place at the Congress of Milan in 1880 where oralism (speech-based instruction) was proclaimed the one acceptable method for educating deaf children since, it was argued, deaf people could only participate in society, develop morally and intellectually, and hold employment if they developed speech (Lane

1984; Van Cleve and Crouch 1989). At that time, sign languages were regarded as merely random gestures, limited to concrete references and incapable of conveying abstract and nuanced thought. Following the Congress, deaf teachers were dismissed from working in deaf schools and sign language was banned in many classrooms for deaf children. Despite protests by deaf leaders, hearing educators in favor of the oral method prevailed, and deaf people were systematically denied access to their linguistic and cultural heritage (Lane 1992). Despite the suppression of signing, deaf people maintained their language, in part through natural language transmission by deaf parents to their children and social interaction with other deaf people.

Over the next 75 years, in the midst of numerous social uprisings, deaf Americans slowly began to develop a collective identity as a group, due in large part to recognition of their language. The U.S. civil rights movement in the 1960s resulted in legislation that protected the rights of various minority groups, including deaf people. Deaf individuals had more freedoms than they had in the past, made possible by technological advances, the right to interpreters, and changes in deaf education. In March 1988, an event occurred at Gallaudet University, the effects of which reverberated around the world. Gallaudet students, faculty and staff, and the Deaf community at large, staged a dramatic protest against the selection of Elizabeth Zinser, a hearing woman, to become the new president of their university. The Deaf President Now movement was successful, and Dr. I. King Jordan was installed as Gallaudet University’s first deaf president

(Christiansen and Barnartt 2002; Gannon 2002; Ramos 2003).

This event launched a decade of empowerment for deaf people and provided the impetus to reinvigorate advocacy efforts to improve their lives. Among the issues of critical importance to the Deaf community are the use and preservation of ASL, deaf education practices, and the transmission of behavioral norms within the community. These values, known as Deaf culture, make up the social beliefs, behaviors, history, and shared institutions shaped by the experience of being deaf and using ASL (Lane, Hoffmeister and Bahan 1996). A strong tradition of deaf art

(Sonnenstrahl 2002), literature (Christie and Wilkins 1997; Harmon and Nelson 2012), poetry

(Bauman 2003), folklore (Rutherford 1993), and storytelling (Erting et al. 1996; Winston 1999) emerged as means to both repeat and reconstruct the deaf experience and teach the “wisdom of the group” to one another (Padden and Humphries 1988: 38).

3. Grammatical overview

Signed languages arise spontaneously and develop structure when deaf people come together regularly over time. Signed languages almost always exist alongside spoken languages, as in the contact situation between ASL and English. However, despite being embedded within a dominant language environment, ASL maintains independent phonological, morphological, and syntactic structures (see Emmorey 2002 for a review). The phonological structure of ASL is characterized by three phonological parameters: hand configuration (handshape), location (place of articulation in relation to the body), and movement (dynamic features of a sign). The notion of a sign syllable, defined as a dynamic unit, is widely accepted among linguists (for an exception see van der Hulst 1993). Originally proposed as a fourth parameter by

Battison (1978), orientation is now generally viewed as a subcategory of hand configuration

(Sandler 1989). Phonological parameters, first thought to have a strictly simultaneous organization, are now understood to exhibit a degree of sequentiality as well (Liddell 1984;

Liddell and Johnson 1989). Nonetheless, sign languages tend to have less sequential structure than spoken languages (for further accounts of phonological structure in signed languages see

Brentari 1998; Corina and Sandler 1993).

The nature of the visual-gestural modality allows sign languages to display a high degree of iconicity in their lexical forms, i.e., there is often a resemblance between form and meaning

(e.g., Mandel 1977; Taub 2001). Although spoken languages contain iconic words that sound like their referents (e.g., onomatopoetic words), most referents are not easily depicted with sound. In contrast, the visual three-dimensional modality of sign languages allows for iconic expression of a wide range of basic conceptual structures, such as objects and human actions, movements, locations, and shapes (Taub 2001). For example, the sign BALL is made with two

hands, fingers curved and touching, depicting a spherical shape. ASL also has a rich and complex morphological system that is used to express noun-verb distinctions, temporal aspect, verb agreement, compounding, etc. (Sandler and Lillo-Martin 2006; Valli and Lucas 2002).

At the syntactic level, ASL allows much freer word order in comparison to English.

Although still controversial, the predominant view is that the basic order of elements in ASL is

Subject–Verb–Object (Fischer 1975; Liddell 1980). Liddell (1980) argues that ASL has both subordination and coordination of clauses; subordinate clauses function as complements, relative clauses, or adverbials and can be signalled by non-manual markers such as body movements and facial expressions. Specific facial articulations linguistically mark distinct types of syntactic structures in ASL, e.g., raised and furrowed brows signal Yes-No and WH- questions, respectively. Facial articulations can also delineate prosodic constituents (Sandler

1999) and perform adverbial and adjectival functions (Baker and Cokely 1980; Wilbur 2000).

For example, pursed lips produced during the sign SMOOTH can indicate a texture that is completely smooth. Further, it has been argued that non-manual markings are frequently associated with features in the syntactic head of an utterance (Neidle et al. 2000). For an overview of ASL syntactic structure, see Liddell (1980) and Neidle et al. (2000).

The ASL lexicon contains both native and non-native (or foreign) vocabulary and exhibits lexical structures found in spoken languages (e.g., compounds), as well as properties unique to signed languages (e.g., fingerspelled loan signs). In Figure 1, the structure of the ASL lexicon is schematized (based on Brentari and Padden 2001; Padden 1998). With respect to the

ASL lexicon, Padden (1998) proposes a core vs. periphery framework in which native signs exist in the core and non-native vocabulary in the periphery. Non-native vocabulary consists of fingerspelled English words that may move into the core lexicon by conforming to the

formational constraints of the native ASL vocabulary. ASL fingerspelling is a symbolic representation of English orthography and is comprised of handshapes produced with various orientations and movements. Fingerspelling is used in a variety of contexts, including serving as a means to borrow words into the lexicon (Brentari and Padden 2001).

Figure 1. The ASL lexicon (with the native lexicon in bold). From Emmorey (2002), reprinted with permission.

Individual fingerspelled handshapes may also be used to disambiguate signs through initialization. Initialization occurs when two or more concepts that are indicated by a single sign are disambiguated through the use of handshapes that denote the initial letter of the English word for each of the concepts. For example, many ASL colour terms are produced by the dominant hand oscillating in space while incorporating a specific handshape to distinguish the colour – Y handshape for yellow, G handshape for green, B handshape for blue, and so on.

A separate component within the ASL lexicon contains classifier constructions (or classifier predicates), which are analyzed as predicates that denote existence, location, motion, or states

(Schick 1990; Supalla 1986; see papers in Emmorey 2003). These complex forms use morphemic handshapes to specify a particular referent. Classifier constructions are productively formed from an inventory of bound handshape morphemes, but in a manner that contrasts with the formation of lexical signs (Supalla and Newport 1978). The most obvious difference between

classifier constructions and lexical signs is that the formational primitives are meaningful in classifier forms while in signs they are often meaningless (Sandler and Lillo-Martin 2006).

Classifier constructions may also be used to indicate the size and/or shape of an object by using handshapes and/or tracing movements to represent its physical attributes (Supalla 1982,

1986). They may also depict general semantic classes rather than physical properties of a referent

(Engberg-Pedersen 1994; Schembri 2003; Supalla 1982, 1986) or may represent the handling of an object (Schick 1990). Further, classifier constructions may be used to indicate the texture and consistency of a substance (Supalla 1986). For example, a signer might describe a rough tabletop by using a B handshape (fingers extended and touching, palm down) while tracing a bumpy surface (see Figure 6C below). Another way of classifying the shape of an object is to use the fingers or hand to trace an outline of the referent object; for example, the index finger can trace the shape of a geometric form (Mandel 1977; Supalla 1986; see Figure 4A below). However, since tracing classifier constructions behave differently than other classifier forms (in that they do not combine with verbal elements), they may represent a different grammatical phenomenon

(Zwitserlood 2003). With classifier constructions, the movement path and the manner of movement can convey the mere existence of an object, designate where something is located with respect to another referent or location, or indicate a specific kind of motion. Classifier constructions are prominent in all signed languages, and they provide a lexicalization source for the lexemes in these languages (Sandler and Lillo-Martin 2006). Notably, classifier predicates can become lexicalized and move to the core lexicon over time (see Figure 1).

4. The language of perception experiment

4.1. Participants

Thirteen native deaf signers (5 males) participated in the study (mean age = 33 years; SD

= 9 years). All participants were prelingually and profoundly deaf (80dB loss or greater) and acquired ASL from their deaf signing families. All participants completed at least 12 years of schooling, and nine participants had college degrees. The participants were all bilingual in written English, but ASL was their primary and preferred language. The majority of the participants (N = 7) were from the West Coast of the United States (California, Oregon), four participants were from the East Coast (New York, New Jersey, Maryland), and two were from the Southwest or Midwest (Colorado, Indiana). There were two signers who did not participate in the taste and smell tasks. ASL descriptions of sound were collected in a separate testing session with a subset of six participants (2 males) and an additional six participants who only participated in the sound task (3 males; mean age = 28 years; SD = 5 years); all were profoundly and congenitally deaf.

4.2. Methods

Participants were instructed to provide a linguistic description in American Sign

Language for six sensory subtests (colour, shape, touch, smell, taste, and sound). The experimental stimuli, developed by Majid and Levinson (this volume), consisted of the following components: colour: 80 colour swatches, a subset of the Munsell chips from the World Color

Survey that represented 20 equally spaced hues at four degrees of brightness all at maximum saturation (the chips were presented in a fixed random order); shape: 20 two- and three- dimensional geometric line drawings, which were selected to represent both Gestalt (“good

11 shapes”) and non-prototypical shapes; touch: 10 textural samples that differed in roughness/smoothness, hardness/softness, and consistency/composition; smell: 12 scratch-and- sniff squares (from the The Brief Smell Identification Test); taste: 5 flavours which were either eaten (umami – glutemate capsules) or sprayed on the participant’s tongue (100 ml solutions with 10 g of sucrose (sweet), 7.5 g of sodium chloride (salty), 0.05 g of quinine hydrochloride

(bitter), and 5 g of citric acid monohydrate (sour)); and sound: 20 audio files with paired tonal samples that varied in loudness, pitch, and duration. Participants were presented with each individual sensory stimulus (or stimulus pair for the sound task) and asked to provide a name that best described their perception of the stimulus. Participants were told that signed, rather than fingerspelled, responses were preferred, although fingerspelled labels were accepted if the participant indicated that he or she had no lexical sign for the item. The testing sessions were conducted in ASL by a deaf native ASL signer. All testing sessions were conducted indoors, while seated at a table, and were videotaped for later analysis. The total testing time was approximately one hour.
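For reference, the stimulus battery can be restated as a compact data structure. The Python sketch below simply re-encodes the counts listed above; the variable names and layout are ours and are not part of the original study materials:

    # Stimulus battery for the elicitation task (counts restated from the text).
    STIMULI = {
        "colour": (80, "Munsell chips: 20 hues x 4 brightness levels, maximum saturation"),
        "shape":  (20, "2D and 3D geometric line drawings"),
        "touch":  (10, "textural samples"),
        "smell":  (12, "scratch-and-sniff squares (Brief Smell Identification Test)"),
        "taste":  (5,  "sweet, salty, bitter, sour solutions plus umami capsules"),
        "sound":  (20, "paired tonal samples varying in loudness, pitch, and duration"),
    }
    total_trials = sum(n for n, _ in STIMULI.values())  # 147 stimuli per participant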

We made two alterations to the testing procedure used for hearing speakers. First, the deaf participants were not blindfolded for the touch task in order to avoid possible discomfort from blocking their visual access to the environment. Instead, we covered the touch stimulus booklet with a small blanket that allowed participants to feel, but not see, the stimuli.

Additionally, for the sound task, a balloon (inflated to 9 inches in diameter) was placed in contact with an audio speaker with the volume set at the maximum level (without distortion).

Participants were instructed to place their hands on the balloon as the sound stimuli were presented. Thus, the deaf participants perceived the sound primarily through vibrations felt through their hands. Participants did not wear hearing aids during this task. Pilot testing indicated

that deaf individuals felt more comfortable and provided more natural ASL descriptions when perceiving sounds in this manner, rather than through a hand-held vibrotactile device designed primarily for speech perception. ASL data from the sound task were not included in the cross-linguistic codability study.

ASL responses were transcribed from the video using standardized English glossing techniques that included codes for non-manual markers (i.e., linguistic facial expressions), fingerspelled responses, and classifier constructions (see notation conventions in Emmorey,

2002). Following the cross-linguistic study procedures, we determined which sign(s) constituted the “main” semantic response (i.e., the head) and which sign(s) or non-manual marker was a modifier for this response. We then coded whether each main response was source-based (e.g., the participant used an object as the basis for the description, such as BANANA1 for a smell stimulus), evaluative (e.g. the participant used a judgment term, such as AWFUL for a taste stimulus), or abstract (i.e., a descriptive response that captures the domain property; for example, the signs BLUE, SQUARE, SMOOTH, CLEAN, and SWEET for the domains of colour, shape, touch, smell, and taste, respectively).
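Because each main response was coded along two dimensions (its lexical form and its semantic type), the coding scheme can be summarized as a small record structure. The following Python sketch is illustrative only; the field names and the example record are ours, not the project’s transcription format:

    from dataclasses import dataclass

    # Category labels follow the chapter's coding scheme; the record layout is hypothetical.
    FORM_TYPES = {"lexical_sign", "fingerspelled", "classifier_construction"}
    SEMANTIC_TYPES = {"source_based", "evaluative", "abstract"}

    @dataclass
    class Response:
        domain: str            # "colour", "shape", "touch", "smell", "taste", or "sound"
        head_gloss: str        # the "main" semantic response (the head), e.g. "BLUE"
        form_type: str         # one of FORM_TYPES
        semantic_type: str     # one of SEMANTIC_TYPES
        modifiers: tuple = ()  # e.g. ("DARK", "furrowed-brows")

        def __post_init__(self):
            assert self.form_type in FORM_TYPES
            assert self.semantic_type in SEMANTIC_TYPES

    # A smell stimulus described via its source object (cf. BANANA in the text):
    r = Response("smell", "BANANA", "lexical_sign", "source_based")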

5. Results

Figure 2 provides boxplots indicating the codability of first responses for each sense

(except sound, which was analyzed separately). Unless specified otherwise, the results reported below are based on the first response produced by each participant.

1 By convention, ASL signs are glossed in capital letters.
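Details of how codability scores were derived are given by Majid (this volume); as noted in the acknowledgements, the boxplots are based on Simpson’s Diversity Index. As a minimal illustration of this kind of agreement measure (the responses below are invented, and this sketch is ours rather than the project’s scoring code), Simpson’s index for a single stimulus is the probability that two randomly drawn participants produced the same label:

    from collections import Counter

    def simpsons_index(labels):
        # D = sum_i n_i * (n_i - 1) / (N * (N - 1)): the probability that two
        # responses drawn at random without replacement share the same label.
        counts = Counter(labels)
        n = sum(counts.values())
        if n < 2:
            return 0.0
        return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

    # Invented first responses from 13 signers for a single colour chip.
    chip = ["RED"] * 10 + ["MAROON"] * 2 + ["W-I-N-E"]
    print(round(100 * simpsons_index(chip), 1))  # -> 59.0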


Figure 2. ASL codability results from Majid & Levinson (this volume).

5.1. Colour

Colour was highly codable for ASL, with a median codability score of 71.1 (see Majid, this volume, for details regarding how codability scores were derived). In comparison, the median codability score for American English was 58.5, and the median codability score for all

languages was 41.9. Of the 1,039 responses (80 colour chips labeled by 13 participants – one response was unintelligible), the vast majority of first responses (93.4%) were lexical signs (e.g.,

RED, BLACK, ORANGE), and the remaining 6.6% were fingerspelled borrowings from English

(e.g., T-E-A-L, S-A-L-M-O-N). No evaluative responses were given for the colour stimuli. Only

9.3% of responses were source-based terms (e.g., COFFEE, PEACH, SKIN, WINE, R-U-S-T), and

90.7% of responses were colour terms (e.g., RED, BLACK, BLUE). Table 1 lists all of the lexical signs that were used to label colours.

Table 1. Exhaustive list of lexical signs and fingerspelled words that were used by at least one participant when labeling colour swatches.

Lexical signs*: BLACK, BLUE, BROWN, COFFEE, GOLD, GREEN, MAROON, ORANGE, PEACH, PINK, PURPLE, RED, SKIN, TAN, WINE, YELLOW

Fingerspelled words: A-Q-U-A, A-Q-U-A-M-A-R-I-N-E, C-O-P-P-E-R, C-O-R-A-L, C-R-A-N-B-E-R-R-Y, F-O-R-E-S-T, G-O-L-D, L-A-V-E-N-D-E-R, L-I-L-A-C, L-I-M-E, M-A-G-E-N-T-A, M-A-R-O-O-N, M-U-D, N-A-V-Y, O-L-I-V-E, P-A-S-T-E-L, P-E-A-C-H, R-O-S-Y, R-U-S-T, S-A-G-E, S-A-L-M-O-N, T-A-N, T-E-A-L, T-U-R-Q-U-O-I-S-E, V-I-O-L-E-T

*Underscoring of the initial letter indicates an initialized sign.

As can be seen, the majority of signs used to label colour were initialized (10/16; 63%), indicating a strong influence from English on ASL colour vocabulary. Only two non-initialized signs were abstract colour terms (i.e., not source-based): BLACK and RED. These signs are illustrated in Figure 3 and likely have gestural origins in pointing to the hair and lips. The ASL sign WHITE is also a non-initialized abstract colour term, but a white colour chip was not presented in this study.


Figure 3. Native (non-initialized) colour signs in ASL.

Colour signs could be modified by lexical signs, facial expressions, and/or prosody (i.e., the manner in which the sign is produced). Colour intensity could be indicated by a tense or lax movement of the lexical colour sign, signalling a dark or light shade of the colour denoted by the sign. Prosodic intensity markers could also accompany lexical modifiers, e.g.,

DARK produced with a tense movement and LIGHT produced with a lax movement. Tense marking was more common than lax marking to indicate colour intensity. Furrowed brows also accompanied lexical colour signs to indicate a more intense (darker) colour, and furrowed brows also frequently accompanied the lexical modifier DARK. Similarly, raised brows could accompany lexical colour signs to indicate a lighter or brighter shade of colour, and raised brows often also accompanied the lexical modifier LIGHT. Neither facial expressions nor prosodic movements appeared to be obligatory or particularly consistent across signers. Nonetheless, all signers produced at least one colour description using tense movement to indicate a darker colour. In addition, 23% of all responses (including secondary descriptions) were modified by facial expression (furrowed or raised brows) and tense or lax manual movement. Common lexical modifiers included DARK, LIGHT, MEDIUM, BRIGHT, and SORT-OF. Unlike British Sign

Language, ASL signers never produced English mouthings to differentiate among similar

colours, e.g., producing the ASL sign GREEN while mouthing “olive” or the sign BLUE while mouthing “teal” (see Woll, this volume).

As noted above, fingerspelled words made up less than 10% of first responses, and the variety of fingerspelled colour terms was high. A few colour terms were fingerspelled even though an ASL sign exists (e.g., GOLD, PEACH), but most fingerspelled responses had no corresponding ASL sign. Table 1 lists all of the fingerspelled words produced by at least one participant.

The finding that colour was more codable in ASL than in English – despite parallels between English and ASL colour vocabularies – indicates that ASL signers were more consistent in their choice of colour labels than were English speakers. The majority of ASL signers (77% or

10/13) agreed on main colour terms for 65% (52/80) of the Munsell colour chips, and these labels were all lexical signs. Fingerspelled words were most often given for colour chips with relatively low name agreement across signers. It is not clear why colour exhibited such high lexical codability for ASL. High colour codability does not appear to be associated with language modality because other sign languages did not exhibit high codability for colour, e.g.,

Kata Kolok (de Vos 2011; this volume) and British Sign Language (Woll, this volume). It is possible that colour signs are more consistent for ASL signers because there is less dialectal variability compared to BSL. In addition, ASL has fewer lexical gaps compared to Kata Kolok, which has only four colour signs (BLACK, WHITE, RED, BLUE-GREEN), and for ASL signers, pointing to or naming relevant objects is not a common or consistent method of providing colour labels, in contrast to Kata Kolok (de Vos 2011).

There is no reason to expect that deaf ASL signers have cultural or educational experiences with colour that differ significantly from those of hearing American English

speakers. Nonetheless, an early study by Lantz and Lenneberg (1966) found that deaf adults produced English colour names and descriptions that differed significantly from those produced by hearing English speakers. The task in their study required one participant to describe in

English each of 20 colour chips (from the Farnsworth-Munsell 100 Hue Test) so that their communication partner could select the correct colour from the same set of 20. Deaf individuals were paired with each other and were all likely ASL signers because they attended Gallaudet

College (now Gallaudet University) where ASL flourished in the 1960s. Lantz and Lenneberg

(1966) concluded “The deaf when communicating to each other . . . apparently make a different use of English than the hearing population” (p. 777). The different pattern of colour labeling could reflect an influence from ASL. The Lantz and Lenneberg (1966) findings also indicate that the use of English colour terms differs for deaf and hearing Americans.

5.2. Shape

Following the cross-linguistic pattern, shape was less codable than colour (median codability score = 46.1). However, shape was also more codable for ASL signers than for

English speakers (median codability score = 39.0) and compared to all languages (median codability score = 26.2). No evaluative responses were provided for the shape stimuli, and

13.85% of shape labels were source-based (e.g., STAR, EGG, BALL). Table 2 lists the categories of ASL shape descriptions, examples of each category, and their relative frequency. Examples of each response type are provided in Figure 4.

Table 2. Type and frequency of shape labels for ASL signers

Category of response | Examples | Percent of first responses
Fingerspelled word | O-V-A-L, C-O-N-E, B-A-L-L, C-U-B-E | 13.8%
Lexical sign | STAR, EGG, BOX, FLOWER, BALL | 20.8%
Lexicalized classifier sign | CIRCLE, SQUARE, RECTANGLE, TRIANGLE | 54.6%
Classifier constructionsª (non-standardized shape descriptions) | CL: 1 handshape (2h), two straight objects cross; CL: L to G handshape (2h), finger and thumb trace angles of a triangle | 10.8%

ªCL = classifier construction; 1 handshape = fist with extended index finger; 2h = two hands; L to G handshape = L handshape (index and thumb extended) moves to a G handshape (thumb and index finger touch). Illustrations of these examples are provided in Figure 4.

The majority of shapes (54.6%) were described using ASL lexical signs that are historically derived from classifier constructions. For two-dimensional shapes (e.g., circle, square, triangle), the lexical signs are tracing constructions in which the signer uses the 1 handshape (extended index finger) to trace the outline of the shape (see Figure 4A). For three-dimensional

shapes, tracing signs were not used; rather, ASL signers produced two-handed classifier signs in which the configuration and movement of the hands depict the shape (e.g.,


Figure 4. Illustration of ASL classifier constructions describing shape. A) The lexicalized tracing classifier sign OVAL; B) a 3D tracing construction for CYLINDER; C) a novel classifier construction indicating that two straight objects cross; D) a novel classifier construction indicating a triangular shape.

In our coding system, we attempted to distinguish classifier constructions that were more standardized from those that were more novel. However, there may not be a clear division between classifier constructions and lexical signs that are derived from classifier constructions. It is possible that some signs that we coded as lexicalized classifier signs may in fact constitute the productive use of classifier morphology. These two categories could be

collapsed into a single category of classifier-type signs. In this case, 65.4% of shape labels involve such signs.

Finally, the majority (77%) of first responses were iconic signs. All classifier-type signs are iconic, with a clear mapping between the form of the sign and the object shape. The lexical signs BOX and BALL are also iconic, but the forms of the signs STAR, EGG, and FLOWER do not depict shape information. And of course, fingerspelled responses are non-iconic. The high incidence of iconic shape descriptions is not particularly surprising, but the result highlights the ease with which visual shape, in contrast to colour and other sensory information, can be codified with iconic representations. In addition, the contrast between colour and shape with respect to iconicity and codability indicates that iconicity does not drive codability for sign languages. Even though shape information is easily depicted within the visual-manual modality, this ability did not lead to a high level of lexical consistency across signers. Other factors must be at play. For example, Meir, Sandler, Padden, and Aronoff (2010) argue that the size of a sign language community affects lexical stabilization and consistency across signers.

5.3. Sound

A common myth is that profoundly deaf individuals do not have a concept of sound because they cannot hear. However, deaf people construct meanings for sounds based on their experiences and somatosensory sensations associated with sound (Padden and Humphries 1988).

Given the unique experience that deaf individuals have with sound, ASL sound descriptions are not directly comparable with spoken language descriptions, and the ASL sound data were not included in the cross-linguistic sample. However, the same pairs of sound stimuli that were

presented to hearing speakers were presented to deaf ASL signers who experienced the sound pairs through vibrations felt by the hands (and possibly through some residual hearing).

Deaf signers were able to describe almost all of the sound stimuli – there were very few non-responses (5% of the sounds, or 11/240 sound stimuli, received “I don’t know” or other non-descriptive responses). Codability scores were not computed for the sound stimuli, but the percentage of participants who produced the same initial sound description was relatively low, ranging from 0% to 25% (mean = 17%, or 2/12 producing the same description for a given sound). The least common strategy for describing sounds (5% of first responses) was to use an evaluative response, such as EXCITED. Source-based descriptions constituted 20% of responses and included the following sources of sound: a honking car horn, a train, music, an explosion, an alarm or phone ringing, a whistle, and a phone tone. Abstract responses were most frequent (70%) and included both lexical signs describing amplitude (LOUD, SOFT) or pitch (HIGH, LOW) and classifier constructions that iconically depicted aspects of the sound, such as amplitude, pitch, or duration. Examples of classifier constructions used to describe sounds are given in Figure 5. Table 3 lists the different categories of ASL sound descriptions, examples of each category, and their relative frequency.

Table 3. Type and frequency of ASL sound descriptions

Category of response | Examples | Percent of first responses
Fingerspelled words | R-U-M-B-L-E, B-U-Z-Z-I-N-G, B-E-E-P, I-N-T-E-N-S-E | 5.0%
Lexical source signs | GUITAR, HONK-HORN, TRAIN, RING, NOISE | 15.0%
Other responses | “same as the first”, “same as before,” “nothing” | 15.4%
Lexical sound descriptions | LOUD, SOFT, HIGH, LOW, QUIET, POWERFUL, LIGHTER | 27.9%
Classifier constructionsª | CL: a flat-O handshape opens and closes (the onset and offset of a low-pitched tone); CL: S to 1 handshape, held in space (a long tone); CL: hooked 5 handshape opens wide (a loud sound) | 36.7%

ªCL = classifier construction; flat-O handshape = all fingers extended and touching the extended thumb; S to 1 handshape = S handshape (fist) moves to 1 handshape (fist with extended index finger); hooked 5 handshape = all fingers curved and spread.


Figure 5. Illustration of ASL classifier constructions describing sound. A) a classifier construction used to describe the onset and offset of a low pitched tone; B) a classifier construction used to describe a long tone (the 1 handshape was held in space); C) a classifier construction used to describe a loud sound.

Many of the non-classifier first responses (23.7%) were followed by secondary descriptions of the sound that involved a classifier construction. When secondary responses are included, the percentage of sound descriptions that contained a classifier construction is surprisingly high: 51.7%. These constructions are particularly interesting because they illustrate how sound can be made visible in a sign language. For example, amplitude was sometimes depicted by the degree of opening of the fingers of the hand, as illustrated in Figure 5A and C.

The signer in Figure 5C who used the open 5 handshape to indicate a loud sound also used this handshape to indicate a soft sound, but the fingers were much more curved (a loose O handshape) and opened only slightly. The movement dynamics of the classifier construction were

sometimes used to depict the temporal component of a sound. For example, as shown in Figure

5A, the onset and offset of the tone could be mapped to the beginning and the end of the sign (in this case, the opening and closing of the handshape). For the example shown in Figure 5B, the signer held the 1 handshape in space for a relatively long time to indicate the long duration of the tone. Finally, low pitched tones were sometimes indicated by placing the hand(s) at a relatively low location in signing space (usually a spread 5 handshape, palm down, moving outward); however, we did not observe examples in which the hands were positioned at a relatively high location in space when describing a high tone. High tones were sometimes contrasted with low tones using handshape, with a 1 handshape or a G-handshape (index finger and thumb extended) representing the high tone and a 5 or B handshape representing the low tone.

In sum, ASL has lexical signs that can be used to describe the quality of sound, such as

LOUD, SOFT, HIGH, and LOW. These signs appear to be somewhat conventionalized, with 75% of signers (9/12) describing at least one sound as HIGH (or HIGH(ER)), followed in frequency of use by the signs LOUD(ER) (58%; 7/12), LOW(ER) (42%; 5/12), SOFT(ER) (33%; 4/12), and

LIGHT(ER) (25%; 3/12). In addition, deaf individuals can characterize sounds by the source that creates the sound, such as a honking horn or a whistle. Somewhat surprisingly, classifier constructions that are typically used to describe the size and shape of objects can be extended to characterize the nature of sound.

5.4. Touch

Following the cross-linguistic pattern, touch was much less codable than either shape or colour (median codability score = 5.1), and the touch stimuli were less codable for ASL than for

English (English median codability score = 14.3) and for all languages combined (median

codability score for all languages = 15.4). The majority of touch descriptions (63.57%) were source-based (e.g., F-U-R, CARPET, S-A-N-D PAPER, P-L-A-S-T-I-C) and 36.43% of descriptions were abstract descriptions of a property of the touch sensation (e.g., SOFT, ROUGH, or a classifier construction describing the texture of the touch stimulus). There were no evaluative first responses for the touch stimuli. Table 4 lists the different sign-based categories for ASL tactile descriptions, examples of each category, and their relative frequency.

Table 4. Type and frequency of tactile descriptions for ASL signers

Category of response | Examples | Percent of first responses
Fingerspelled word | P-L-A-S-T-I-C, F-U-R, W-O-O-L, C-A-R-P-E-T, F-E-L-T, C-O-R-K | 44.2%
Lexical sign | SMOOTH, ROUGH, SOFT, RUBBER, GLASS, MIRROR | 30.2%
Classifier constructionsª | CL: F handshape, moves in circle (a circular line of small beads); CL: B handshape, traces a bumpy surface (ridges); CL: 4 handshape, traces the tracks of multiple ridges | 25.6%

ªCL = classifier construction; F handshape = thumb and index finger form a ring, all other fingers extended; B handshape = fingers extended and touching, palm down. Illustrations of these examples are provided in Figure 6.

The number of fingerspelled words that were used to describe the tactile stimuli was surprisingly high — the highest percentage of all of the sense tasks. The majority of responses for touch were source-based, and it happens that several of the source objects do not have common ASL signs (e.g., F-E-L-T, F-U-R, F-E-A-T-H-E-R, C-O-R-K), which could therefore lead to a number of fingerspelled responses. The most common lexical signs produced were

SMOOTH, ROUGH, and SOFT. As with the sound task, many of the non-classifier first responses for the tactile stimuli were followed by a classifier construction that was part of a secondary description of the touch sensation (20.83%). Including secondary responses, the percentage of tactile descriptions that contained a classifier construction is relatively high: 46.41%.

Figure 6. Illustration of ASL classifier constructions describing touch. A) The F handshape used to describe small beads in a circle; B) A 4 handshape traces the tracks of multiple ridges; C) A B handshape traces a bumpy surface.

Classifier touch descriptions generally reflected the shape of the object that was felt by the signer. For example, for the tactile descriptions of a spiraled string of small beads, signers indicated the bead shape with an F handshape (see Figure 6A) or an O handshape (all fingers and thumb touching). The spiral shape was then traced in space with either the bead handshape (F or

O) or with a 1 handshape. To describe ridges, several signers used a 4 handshape, in which the

fingers iconically depict the ridges, and the movement of the fingers traced the shape of the ridges

(e.g., wavy lines or straight lines; Figure 6B). For “bumpy” stimuli (the beads and the ridges), several signers produced a classifier construction with a flat B handshape (palm down), a handshape often used to indicate a surface (Supalla 1986), and produced a rippling motion with the hand to indicate a bumpy surface (see Figure 6C). For the texture stimulus that had slightly deeper ridges, signers sometimes described this stimulus with the B handshape moving in a curved up-and-down motion to trace the peaks and valleys of ridges.

To further explore whether the classifier constructions that were used to describe these textures were based on a visual image of the stimuli (derived from touch), we asked a separate group of six ASL signers to describe the tactile stimuli, not by touching the stimuli, but by viewing the stimuli and describing what they see. Very similar classifier constructions were produced when signers described the visual appearance of the tactile stimuli. This result supports our hypothesis that classifier-based descriptions of tactile sensations reflect the shape of the texture.

In sum, tactile sensations were labeled most frequently by identifying a possible source object, which was frequently named by a fingerspelled word for this set of stimuli. Texture- specific vocabulary for ASL consists of a few lexical signs (e.g., SMOOTH, ROUGH) and classifier constructions that reflect shape-based attributes of the source object (Figure 6). In general, tactile descriptions were quite variable across participants.

5.5. Taste

Codability for taste was higher than for touch in ASL (median codability score = 18.2), and codability for the taste stimuli was similar to English (English median codability score =

16.8), and taste codability was less than that observed for all languages combined (median codability score for all languages = 40.0). The majority (56.4%) of taste labels were source-based (e.g., SYRUP, VINEGAR), and the next most frequent response (32.7%) was an abstract label for taste (e.g., SOUR, SWEET, BITTER). Evaluative responses (e.g., AWFUL, ODD) occurred, but were the least common (10.2%).

No taste sensations were described by classifier constructions (either first or secondary responses). The majority of taste descriptions were lexical signs (70.9%), followed by fingerspelled words (29.1%). The abstract labels for taste were all lexical signs: SOUR, BITTER,

SWEET (see Figure 7). These taste signs are all produced at the chin near the mouth, which suggests an iconic influence on their form. In addition, the signs SOUR and BITTER differ only in their movement: SOUR is produced with a twisting motion at the chin, and BITTER is produced with a quick, straight motion to the chin. Umami was described by most signers (83%) as a salty-type taste (e.g., S-A-L-T; SALT-SWEET MIX; LITTLE-BIT SALT).

Figure 7. Illustration of ASL lexical signs for taste.

5.6. Smell

Codability for smell was relatively low in ASL (median codability score = 1.8), and codability for smell was less than for English (English median codability score = 5.1), and less

than that observed for all languages combined (median codability score for all languages = 6.6).

The vast majority (93.0%) of smell labels were source-based (e.g., LEMON, GUM, C-I-N-N-A-M-O-N), and a small proportion of responses (7.0%) were evaluative (e.g., GOOD, STRONG,

AWFUL). No abstract labels (i.e., smell-specific signs) were produced.

Most smell stimuli were labeled by lexical signs (62.0%), followed by fingerspelled words (36.4%). There was a handful of smell descriptions (1.6%) that were source-based descriptions expressed by a classifier construction; for example, one signer first described an oniony smell with a handling classifier construction that depicted a person sprinkling the contents of a small bottle onto something (i.e., shaking a spice container). However, no signer used a classifier construction to describe the smell itself, rather than its source. Thus, neither smell nor taste sensations are readily described by classifier constructions in ASL. Classifier constructions primarily depict visually perceived aspects of a referent that can be manually depicted (e.g., size, shape, location, motion); it is possible that smell and taste sensations are not easily captured by such visual metaphors.

6. Relative codability of the senses in American Sign Language

Somewhat surprisingly, colour was most codable in ASL, followed by shape and then taste (see Figure 2). Touch and smell were the least codable senses for ASL. This pattern supports the traditional view that vocabulary associated with proximal senses (touch, smell, taste) is smaller and less consistent across individuals than vocabulary associated with distal senses (vision) (see Majid and Levinson, this volume). Although median codability values differed for ASL signers and American English speakers, the relative hierarchy of sense codability was the same for both groups (distal > proximal). This pattern suggests that the

codability hierarchy for ASL may arise from socio-cultural factors shared with American English speakers, rather than from effects of modality.

Figure 8 provides a summary of the types of lexical forms that were used to label each perceptual category. Colour stimuli were most often labelled with lexical signs, followed by taste and smell stimuli; this pattern indicates that both proximal and distal sensations are codified by lexical signs. Classifier forms (both classifier constructions and lexicalized classifier signs) were most frequently used to label the shape stimuli, followed by sound and touch stimuli. Classifier forms were never (or very rarely) used to label the colour, taste, or smell stimuli. Proximity of the senses also does not appear to determine whether classifier forms are used to encode sensory information. The finding that sound stimuli were often described using classifier constructions suggests that sound sensations can be metaphorically mapped to visual representations of the intensity (e.g., size of the handshape aperture), timing (e.g., a hold or a path movement in space), and pitch (e.g., a low location of the hands for a low pitch). Finally, fingerspelled words were produced as sensory labels in all categories and predominated for the touch category, most likely due to the particular stimuli that were presented in this condition.


Figure 8. The percentage of responses for each type of lexical resource in ASL for each set of sense stimuli. For the shape stimuli, classifier forms include both lexicalized classifier signs and classifier constructions.

7. Conclusions

The hierarchy of sense codability for ASL was: colour > shape > taste > touch > smell.

Relative to other languages, colour codability was quite high for ASL, and taste codability was relatively low. The underlying reasons for these differences are not clear, but it is unlikely that language modality per se is a factor.

Perhaps not surprisingly, shape labels were highly iconic in ASL (both lexical signs and classifier constructions). Touch descriptions also often relied on iconic classifier constructions that depicted the shape of the tactile source object (in addition to non-iconic source object labels,

e.g., F-U-R). Lexical taste-specific signs also exhibited iconic properties as they were articulated near the mouth. No smell-specific lexical signs were elicited (all descriptions were source-based). Interestingly, the sound stimuli (tone pairs with different amplitudes, durations, and pitch levels) were often described using classifier constructions that visually depicted the different qualities of the sounds that the deaf participants perceived primarily through vibrations. Overall, these results suggest that iconicity of linguistic forms was not constant across the senses, but was most frequently observed for shape and touch and least frequently observed for colour and smell.

Acknowledgements

This research was supported by a grant from the National Institute on Deafness and Other

Communication Disorders (R01 DC010997) awarded to Karen Emmorey and San Diego State

University. We also wish to acknowledge Asifa Majid for helpful comments on earlier versions of the manuscript and for creating the boxplots based on Simpson’s Diversity Index that are used in this chapter.

References

Ann, Jean (1998). ‘Contact Between a Sign Language and a Written Language: Character Signs

in Taiwan Sign Language’, in C. Lucas (ed.), Pinky Extension and Eye Gaze: Language

Use in Deaf Communities. Washington, DC: Gallaudet University Press, 59–102.

Baker, Charlotte, and Cokely, Dennis (1980). American Sign Language: A Teacher's Resource

Text on Grammar and Culture. Silver Spring, MD: TJ Publishers.

Battison, Robbin (1978). Lexical Borrowing in American Sign Language. Silver Spring, MD:

Linstok Press.

Bauman, H-Dirksen L. (2003). ‘Redesigning Literature: The Cinematic Poetics of American

Sign Language Poetry’, Sign Language Studies 4(1): 34–47. doi:10.1353/sls.2003.0021

Brentari, Diane (1998). A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT

Press.

Brentari, Diane, and Padden, Carol A. (2001). ‘Native and Foreign Vocabulary in American Sign

Language: A Lexicon with Multiple Origins’, in D. Brentari (ed.), Foreign Vocabulary in

Sign Languages: A Cross-linguistic Investigation of Word Formation. Mahwah, NJ:

Erlbaum, 87–120.

Canadian Association of the Deaf. (2012). Statistics on Deaf Canadians. Retrieved October 15,

2012, from http://www.cad.ca/statistics_on_deaf_canadians.php

Christie, Karen, and Wilkins, Dorothy M. (1997). ‘A Feast for the Eyes: ASL and ASL

Literature’, Journal of Deaf Studies and Deaf Education 2(1): 57–59.

Christiansen, John B., and Barnartt, Sharon M. (2002). Deaf President Now! The 1988

Revolution at Gallaudet University. Washington DC: Gallaudet University Press.

Corina, David P., and Sandler, Wendy (1993). ‘On the Nature of Phonological Structure in Sign

Language’, Phonology 10: 165–207.

De Vos, Connie (this volume).

De Vos, Connie (2011). ‘Kata Kolok Color Terms and the Emergence of Lexical Signs in Rural

Signing Communities’, The Senses and Society 6(1): 68–76.

Emmorey, Karen (2002). Language, Cognition, and the Brain: Insights from Sign Language

Research. Mahwah, NJ: Erlbaum.

Emmorey, Karen (ed.) (2003). Perspectives on Classifier Constructions in Sign Languages.

Mahwah, NJ: Erlbaum.

Engberg-Pedersen, Elisabeth (1994). ‘Some Simultaneous Constructions in Danish Sign

Language’, in M. Brennan and G. H. Turner (eds.), Word-order Issues in Sign Language:

Working Papers. Durham: Linguistics Association, 73–87.

Erting, Carol J., Johnson, Robert C., Smith, Dorothy L., and Snider, Bruce C. (eds.) (1996). The

Deaf Way: Perspectives from the International Conference on Deaf Culture. Washington,

DC: Gallaudet University Press.

Fischer, Susan (1975). ‘Influences on Word Order Change in American Sign Language’, in C. Li

(ed.), Word Order and Word Order Change, Austin: University of Texas Press, 1–25.

Gannon, Jack R. (2002). The Week the World Heard Gallaudet. Washington, DC: Gallaudet

University Press.

Harmon, Kristen, and Nelson, Jennifer (2012). Deaf American Prose: 1980-2010. Washington,

DC: Gallaudet University Press.

Hulst, Harry van der (1993). ‘Units in the Analysis of Signs’, Phonology 10(2): 209–241.

Kannapell, Barbara (1980). ‘Personal Awareness and Advocacy in the Deaf Community’, in C.

Baker and R. Battison (eds.), Sign Language and the Deaf Community: Essays in Honor

of William C. Stokoe. Silver Spring, MD: National Association of the Deaf, 105–116.

Lane, Harlan (1984). When the Mind Hears: A History of the Deaf. New York: Random House.

Lane, Harlan (1992). Mask of Benevolence: Disabling the Deaf Community. New York: Knopf.

Lane, Harlan, Hoffmeister, Robert, and Bahan, Ben (1996). A Journey Into the Deaf-world. San

Diego, CA: DawnSignPress.

Lantz, Delee, and Lenneberg, Eric H. (1966). ‘Verbal Communication and Color Memory in the

Deaf and Hearing’, Child Development 37(4): 765–779.

Liddell, Scott K. (1980). American Sign Language Syntax. The Hague: Mouton.

Liddell, Scott K. (1984). ‘THINK and BELIEVE: Sequentiality in American Sign Language’,

Language 60(2): 372–399.

Liddell, Scott K., and Johnson, Robert E. (1989). ‘American Sign Language: The Phonological

Base’, Sign Language Studies 64: 197–277.

Majid, Asifa (this volume).

Majid, Asifa, and Levinson, Stephen (this volume).

Maher, Jane (1996). Seeing Language in Sign: The Work of William C. Stokoe. Washington, DC:

Gallaudet University Press.

Mandel, Mark (1977). ‘Iconic Devices in American Sign Language’, in L. A. Friedman (ed.), On

the Other Hand: New Perspectives on American Sign Language. New York: Academic

Press, 57–107.

Meir, Irit, Sandler, Wendy, Padden, Carol, and Aronoff, Mark (2010). ‘Emerging Sign

Languages’, in M. Marschark and P. Spencer (eds.), Oxford Handbook of Deaf Studies,

Language, and Education Vol. 2. Oxford: Oxford University Press, 267–280.

Mitchell, Ross E. (2004, April 7). How Many People Use ASL? And Other Good Questions

Without Good Answers. Paper given at Gallaudet University. Retrieved October 15, 2012,

from http://research.gallaudet.edu/Presentations/2004-04-07-1.pdf

Mitchell, Ross E., Young, Travis A., Bachleda, Bellamie, and Karchmer, Michael A. (2006).

‘How Many People Use Sign Language in the United States? Why Estimates Need

Updating’, Sign Language Studies 6(3): 306–335.

Neidle, Carol, Kegl, Judy, MacLaughlin, Dawn, Bahan, Benjamin, and Lee, Robert G. (2000).

The Syntax of American Sign Language: Functional Categories and Hierarchical

Structure. Cambridge, MA: MIT Press.

Obasi, Chijioke (2008). ‘Seeing the Deaf in “Deafness”’, Journal of Deaf Studies and Deaf

Education 13(4): 455–465.

Padden, Carol A. (1998). ‘The ASL Lexicon’, Sign Language and Linguistics 1(1): 39–60.

Padden, Carol, and Humphries, Tom (1988). Deaf in America: Voices from a Culture.

Cambridge, MA: Harvard University Press.

Padden, Carol, and Humphries, Tom (2005). Inside Deaf Culture. Cambridge, MA: Harvard

University Press.

Ramos, Angel M. (2003). Triumph of the Spirit: The DPN Chronicle. Twin Falls, ID: R & R

Publishers.

Rutherford, Susan (1993). A Study of American Deaf Folklore. Silver Spring, MD: Linstok Press.

Sandler, Wendy (1989). Phonological Representation of the Sign: Linearity and Non-linearity in

American Sign Language. Dordrecht: Foris.

Sandler, Wendy (1999). ‘Prosody in Two Natural Language Modalities’, Language and Speech,

42(2/3): 127–142.

Sandler, Wendy, and Lillo-Martin, Diane (2006). Sign Language and Linguistic Universals. New

York: Cambridge University Press.

Schembri, Adam (2003). ‘Rethinking “Classifiers” in Signed Languages’, in K. Emmorey (ed.),

Perspectives on Classifier Constructions in Sign Languages. Mahwah, NJ: Erlbaum, 3–

34.

Schick, Brenda (1990). ‘Classifier Predicates in American Sign Language’, International

Journal of Sign Linguistics 1, 15–40.

Sonnenstrahl, Deborah M. (2002). Deaf Artists in America: Colonial to Contemporary. San

Diego: DawnSignPress.

Stokoe, William C., Casterline, Dorothy C., and Croneberg, Carl G. (1965). A Dictionary of

American Sign Language on Linguistic Principles (2nd Edition). Washington, DC:

Gallaudet University Press.

Supalla, Ted (1982). Structure and Acquisition of Verbs of Motion and Location in American

Sign Language. Doctoral dissertation, University of California, San Diego.

Supalla, Ted (1986). ‘The Classifier System in American Sign Language’, in C. Craig (ed.),

Noun Classes and Categorization. Amsterdam: Benjamins, 181–214.

Supalla, Ted, and Newport, Elissa (1978). ‘How Many Seats in a Chair? The Derivation of

Nouns and Verbs in American Sign Language’, in P. Siple (ed.), Understanding

Language Through Sign Language Research. New York: Academic Press, 91–132.

Taub, Sarah F. (2001). Language From the Body: Iconicity and Metaphor in American Sign

Language. Cambridge, UK: Cambridge University Press.

Valli, Clayton, and Lucas, Ceil (2002). Linguistics of American Sign Language: An Introduction

(3rd Edition). Washington, DC: Gallaudet University Press.

Valli, Clayton, and Lucas, Ceil (1992). Language Contact in the American Deaf Community. San

Diego, CA: Academic Press.

Van Cleve, John V., and Crouch, Barry A. (1989). A Place of Their Own. Washington, DC:

Gallaudet University Press.

Wilbur, Ronnie B. (2000). ‘Phonological and Prosodic Layering of Non-manuals in American

Sign Language’, in K. Emmorey, and H. Lane (eds.), The Signs of Language Revisited.

Mahwah, NJ: Erlbaum, 190–214.

Winston, Elizabeth A. (ed.) (1999). Storytelling and Conversation: Discourse in Deaf

Communities. Washington, DC: Gallaudet University Press.

Woll, Bencie (this volume).

Zwitserlood, Ingeborg (2003). Classifying Hand Configurations in Nederlandse Gebarentaal.

Doctoral dissertation, University of Utrecht, Netherlands.
