A Survey on Mouth Modeling and Analysis for Sign Language Recognition


Epameinondas Antonakos⋆†, Anastasios Roussos⋆ and Stefanos Zafeiriou⋆†
Department of Computing, Imperial College London, U.K.

Abstract— Around 70 million Deaf worldwide use Sign Languages (SLs) as their native languages. At the same time, they have limited reading/writing skills in the spoken language. This puts them at a severe disadvantage in many contexts, including education, work, usage of computers and the Internet. Automatic Sign Language Recognition (ASLR) can support the Deaf in many ways, e.g. by enabling the development of systems for Human-Computer Interaction in SL and translation between sign and spoken language. Research in ASLR usually revolves around automatic understanding of manual signs. Recently, the ASLR research community has started to appreciate the importance of non-manuals, since they are related to the lexical meaning of a sign, the syntax and the prosody. Non-manuals include body and head pose, movement of the eyebrows and the eyes, as well as blinks and squints. Arguably, the mouth is one of the most involved parts of the face in non-manuals. Mouth actions related to ASLR can be either mouthings, i.e. visual syllables articulated with the mouth while signing, or non-verbal mouth gestures. Both are very important in ASLR. In this paper, we present the first survey on mouth non-manuals in ASLR. We start by showing why mouth motion is important in SL and the relevant techniques that exist within ASLR. Since limited research has been conducted regarding automatic analysis of mouth motion in the context of ASLR, we proceed by surveying relevant techniques from the areas of automatic mouth expression and visual speech recognition which can be applied to the task. Finally, we conclude by presenting the challenges and potentials of automatic analysis of mouth motion in the context of ASLR.

Fig. 1: In American SL, the signs "late" (a) and "not yet" (b) are performed with the same manual gesture. Their lexical distinction is only based on the mouth action of the tongue touching the lower lip. (Images copyright ASL University)

⋆ The authors contributed equally and have joint first authorship. Their names appear in alphabetical order.
† The work of E. Antonakos and S. Zafeiriou was partially funded by the EPSRC projects EP/J017787/1 (4DFAB) and EP/L026813/1 (ADAManT).

I. INTRODUCTION

Sign languages (SLs) commonly serve as an alternative or complementary mode of human communication. Recent statistics suggest that around 5% of the worldwide population suffers from hearing loss to some degree [44]. Furthermore, it is estimated that about 1% of the worldwide population (around 70 million Deaf people) use SLs as their native languages, even though it is hard to measure the exact number. Note that SL is not only used by the Deaf community, but also by people who cannot physically speak. The limited usage and popularity of SLs among people that use spoken languages has led to the dominance of several misconceptions about them. For example, many people believe that there is a unique international SL or that SL is simply a pantomime. However, each country has its own SL (in some cases more than one) and there are hundreds of different SLs worldwide.

Contrary to another common misconception, SLs are as rich and grammatically complex as spoken languages and, even though they usually have a different grammatical structure, they exhibit very similar properties [32], [57]. For example, similar to spoken languages, SLs consist of basic semantic components, referred to as phonemes [57]. The phonemes are mainly expressed through combinations of manual features, such as shape, posture (orientation), location and motion of the hands. However, SLs convey much of their prosody through non-manual signs [45], [67], [47], [68], [56], [41], such as the pose of head and torso, facial expressions (combinations of eyes, eyebrows, cheeks and lips) and mouth movements. These non-manual articulators play an important role in lexical distinction, grammatical structure and adjectival or adverbial content. For example, yes/no questions in American SL (ASL) are associated with raised eyebrows, head tilted forward and widely-opened eyes, and wh-questions with furrowed eyebrows and head forward [36], [56]. Topics are described by raised eyebrows and the head slightly back, and negations are expressed with a head shake [36], [56]. Head pose and eye gaze describe turn taking during a story narration [6].

There are many difficulties that Deaf people encounter in everyday life. Their reading/writing skills in the spoken language are limited, since for them it is a foreign language with a fundamentally different grammatical structure. At the same time, the vast majority of the rest of the population does not understand and is unable to use SL. Additionally, the current technological advances within the domain of Human-Computer Interaction (HCI), ranging from text-based interaction to speech recognition and synthesis, are almost entirely oriented towards hearing people. The aforementioned issues put the Deaf community in a disadvantaged position within many contexts, including education, work, usage of computers and the Internet. Automatic SL Recognition (ASLR) can support the Deaf community in overcoming these disadvantages by enabling the development of reliable systems for HCI in SL and translation between sign and spoken language. In contrast to speech recognition, which is now ubiquitous in real-world HCI and other applications, ASLR is still far from being a mature technology. Nevertheless, during the last decades, there have been some significant developments in ASLR (please see [45], [67], [15] for some general surveys). However, most of the research effort is oriented towards SL recognition using hand-based features. The research efforts that employ non-manual features, including mouth actions, are very limited.

In this paper we are interested in the mouth actions within SL. As explained in [8], [47], the mouth articulators can be separated into two categories: mouth gestures and mouthings. Mouth gestures (also referred to as oral components), which include deformations of the mouth's shape, movements of the tongue and visibility of the teeth, are not related to the spoken language [47]. The term mouthing refers to the silent articulations of the mouth that correspond to a pronounced word or part of it. If part of the word is articulated then, in most SLs, it is its first syllable. Note that there is a debate in the literature regarding the importance and role of the mouth during signing. Some researchers argue that mouthings are not really part of the lexical scope of a SL and are not linguistically significant [57], [30]. However, recent research has shown that mouthings and mouth gestures contribute significantly to the semantic analysis of various SLs [37], [39], [9]. The frequency of mouthings during signing is different for each SL [47], [16] and depends on both the context and the grammatical category of the manual sign they occur with [39], [9]. Mouth actions can contribute to the signing in various ways [56], [50]. Most mouthings have a prosodic interpretation, while others have lexical meaning. An example of this is shown in Fig. 1. In ASL, the signs "not yet" and "late" have the exact same manual gesture. The only lexical distinction between them is that, in order to articulate "not yet" (Fig. 1b), the signer needs to touch the lower lip with the tongue and make a slight rotation of the head from side to side that declares negation [37]. Another example is shown in Fig. 2 for the words "brother" and "sister" in German SL. Finally, there are cases in which the mouth may articulate physical events, emotions or sensations, such as types of sounds, noise, disturbances, heaviness, types of textures etc. These are usually referred to as non-linguistic mouth gestures [55].

Fig. 2: In German SL, the signs "bruder" (a, translated as "brother" in English) and "schwester" (b, translated as "sister" in English) are performed with the same manual gesture and only differ in the lip patterns. (The figure is used with permission from [66].)

In this paper we present an overview of the literature around the employment and interpretation of mouth actions in ASLR systems. Since limited research has been conducted regarding automatic analysis of mouth motion in the context of ASLR, we proceed by surveying relevant techniques from the areas of automatic mouth expression and visual speech recognition which can be applied to the task. As mentioned above, there are many open questions regarding the role of the mouth in SL, both on a linguistic and a computing level. Herein, we aim to provide a clear and comprehensive overview of the ongoing research on this problem in order to highlight the achievements that have already been accomplished, but most importantly to emphasize the challenges that are still open and need to be addressed.

II. NON-MANUAL FEATURES IN ASLR

The vast majority of ASLR methods use solely hand features. However, research in ASLR has recently started appreciating the importance of non-manual parameters. This relatively new research direction is especially promising and is yet to be explored in depth. The non-manual parameters play an essential role in SL communication because they are related to the meaning of a sign, the syntax or the prosody [54], [10], [69], [70]. There are methods related to the direct recognition of non-manual linguistic markers [43], [42], [38], as applied to negations, conditional clauses, syntactic boundaries, topic/focus and wh-/yes-no questions. Moreover, there are methods for the detection of important facial events such as head gestures [40], [24], [42], eyebrow movement and eye blinking/squinting [42], along with facial expression recognition [40], [64], [65] within the context of SL. The authors in [42] employ a 2-layer Conditional Random Field
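The 2-layer Conditional Random Field mentioned above is, at bottom, a sequence labeller that smooths noisy per-frame facial evidence over time. A minimal sketch of that idea, using Viterbi decoding over a hand-specified linear-chain model; the labels, local scores and transition penalty below are illustrative inventions, not values from [42]:

```python
# Viterbi decoding over per-frame scores: the kind of linear-chain
# sequence labelling used to detect non-manual events in a video.
LABELS = ["neutral", "raised_brows", "head_shake"]

def transition(prev, curr):
    # Staying in the same label is free; switching pays a penalty,
    # which discourages spurious one-frame events.
    return 0.0 if prev == curr else -1.0

def viterbi(frame_scores):
    """frame_scores: one dict per frame, mapping label -> local score.
    Returns the highest-scoring label sequence."""
    best = {lab: (frame_scores[0][lab], [lab]) for lab in LABELS}
    for scores in frame_scores[1:]:
        new_best = {}
        for lab in LABELS:
            prev_lab, (s, path) = max(
                ((p, best[p]) for p in LABELS),
                key=lambda kv: kv[1][0] + transition(kv[0], lab))
            new_best[lab] = (s + transition(prev_lab, lab) + scores[lab],
                             path + [lab])
        best = new_best
    return max(best.values(), key=lambda sp: sp[0])[1]

# Toy per-frame evidence: an eyebrow detector fires on the middle frames.
frames = [
    {"neutral": 1.0, "raised_brows": 0.0, "head_shake": 0.0},
    {"neutral": 1.0, "raised_brows": 0.2, "head_shake": 0.0},
    {"neutral": 0.1, "raised_brows": 1.5, "head_shake": 0.0},
    {"neutral": 0.0, "raised_brows": 1.8, "head_shake": 0.1},
    {"neutral": 0.2, "raised_brows": 1.2, "head_shake": 0.0},
    {"neutral": 1.5, "raised_brows": 0.0, "head_shake": 0.0},
]
decoded = viterbi(frames)
```

Decoding the toy frames yields one contiguous raised-brows event in the middle of the sequence, illustrating how chain models suppress isolated per-frame detector noise.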
Recommended publications
  • The Bilingual & the Bicultural Person in the Hearing & in the Deaf World
    The Bilingual & the Bicultural Person in the Hearing & in the Deaf World. François Grosjean. Sign Language Studies, Volume 77, Winter 1992, pp. 307-320 (Article). Published by Gallaudet University Press. DOI: https://doi.org/10.1353/sls.1992.0020
    Abstract: If we define the bilingual as a person who uses two or more languages (or dialects) in everyday life, then most Deaf people who sign and who use the majority language regularly (in its written form, for example) are bilingual. Deaf bilinguals share many similarities with hearing bilinguals (their diversity, the perception they have of their own bilingualism, their use of various language modes, etc.) but they are also characterized by a number of specificities (the lack of recognition of their bilingual status, the maintenance over time of their languages, the competence they have in certain language skills, their varying patterns of language knowledge and use, etc.). As concerns the bicultural, whom we can define as a person who lives in two or more cultures, who adapts to each and who blends aspects of each, there is little doubt that many Deaf are indeed bicultural. Some of the implications for the bilingual and bicultural education of Deaf children that emerge from these considerations are discussed in the paper.
    The bilingual person. Despite the fact that more than half the world's population uses two or more languages in everyday life, many erroneous beliefs still surround the notion of bilingualism.
  • Alignment Mouth Demonstrations in Sign Languages Donna Jo Napoli
    Alignment mouth demonstrations in sign languages. Donna Jo Napoli, Swarthmore College (corresponding author); Ronice Quadros, Universidade Federal de Santa Catarina; Christian Rathmann, Humboldt-Universität zu Berlin.
    Abstract: Non-manual articulations in sign languages range from being semantically impoverished to semantically rich, and from being independent of manual articulations to coordinated with them. But, while this range has been well noted, certain non-manuals remain understudied. Of particular interest to us are non-manual articulations coordinated with manual articulations, which, when considered in conjunction with those manual articulations, are semantically rich. In which ways can such different articulators coordinate and what is the linguistic effect or purpose of such coordination? Of the non-manual articulators, the mouth is articulatorily the most versatile. We therefore examined mouth articulations in a single narrative told in the sign languages of America, Brazil, and Germany. We observed optional articulations of the corners of the lips that align with manual articulations spatially and temporally in classifier constructions. The lips, thus, enhance the message by giving redundant information, which should be particularly useful in narratives for children. Examination of a single children's narrative told in these same three sign languages plus six other sign languages yielded examples of one type of these optional alignment articulations, confirming our expectations. Our findings are coherent with linguistic findings regarding phonological enhancement and overspecification.
    Keywords: sign languages, non-manual articulation, mouth articulation, hand-mouth coordination
  • The Manual Alphabet
    Aa: Thumb up, other fingers against palm.
    Bb: Thumb lines up with index or middle finger.
    Cc: All of the fingers are curved, palm faces out.
    Dd: Thumb touches middle finger, index up.
    Ee: Thumb under index and middle finger.
    Ff: Thumb and index finger touch, others up and apart.
    Gg: Palm faces you, index and thumb horizontal.
    Hh: Palm faces you; thumb, index and middle finger horizontal.
    Ii: Pinky finger up, thumb rests on index finger.
    Jj: Pinky finger up (as in "I"), and the "I" moves down, in, and curves up.
    Kk: Index up, middle finger out, thumb on inside knuckle of middle finger.
    Ll: Index finger up and thumb out, forming an imaginary "L".
    Mm: Index, middle and ring finger over thumb; thumb on pinky knuckle.
    Nn: Index and middle drape over thumb; thumb on knuckle of the ring finger.
    Oo: All fingers curved with the tips of the index and thumb touching.
    Pp: Similar to "K", but the palm faces down, with the middle finger pointing to the floor.
    Qq: Similar to "G", but the palm faces down.
    Rr: Index in front of middle finger; thumb rests on knuckle of ring finger.
    Ss: Fingers clenched in a fist, with thumb on the knuckles of index and middle finger.
    Tt: Thumb juts up between index and middle finger; index curves over thumb.
    Uu: Index and middle finger up and together, thumb on knuckle of ring finger.
    Vv: Index and middle finger up and apart, thumb on knuckle of ring finger.
    Ww: Index, middle and ring fingers up and apart, thumb resting on the pinky nail.
    Xx: Fingers clenched in a fist, index finger up and bent like a hook, thumb on the knuckles of index and middle finger.
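A chart like the one above maps naturally onto a letter-to-handshape lookup table. A minimal sketch using abbreviated descriptions from the chart; the dict is deliberately partial, and a real table would cover all 26 letters:

```python
# Letter -> handshape-description lookup, abbreviated from the chart above.
MANUAL_ALPHABET = {
    "A": "thumb up, other fingers against palm",
    "B": "thumb lines up with index or middle finger",
    "C": "all fingers curved, palm faces out",
    "V": "index and middle finger up and apart",
    "W": "index, middle and ring fingers up and apart",
}

def describe(word):
    """Spell a word letter by letter; letters outside the sketch are flagged."""
    return [(ch, MANUAL_ALPHABET.get(ch.upper(), "<not in this sketch>"))
            for ch in word]

spelled = describe("CAB")
```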
  • Deep Learning of Mouth Shapes for Sign Language
    Deep Learning of Mouth Shapes for Sign Language. Oscar Koller, Hermann Ney (Human Language Technology & Pattern Recognition, RWTH Aachen University, Germany); Richard Bowden (Centre for Vision Speech & Signal Processing, University of Surrey, UK). {koller,ney}@cs.rwth-aachen.de
    This paper deals with robust modelling of mouth shapes in the context of sign language recognition using deep convolutional neural networks. Sign language mouth shapes are difficult to annotate and thus hardly any publicly available annotations exist. As such, this work exploits related information sources as weak supervision. Humans mainly look at the face during sign language communication, where mouth shapes play an important role and constitute natural patterns with large variability. However, most scientific research on sign language recognition still disregards the face. Hardly any works explicitly focus on mouth shapes. This paper presents our advances in the field of sign language recognition. We contribute in the following areas: We present a
    … a semantically related sign. But in other cases, mouthings simply provide additional information that is redundant to the hand's information channel and not explicitly needed to understand the sign. Mouth shapes are difficult to model and to recognise, as their presence seems to be based on personal and situational preferences of the signers. Currently, no publicly available data set exists that contains manual mouthing annotations that would allow for supervised training. The reason lies in the difficulty of annotating mouth shapes. Sometimes they represent the mouth pattern of shortened parts of a related spoken word, sometimes the whole spoken word, or sometimes no mouth shape at all may be visible.
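The paper's weak-supervision idea — deriving mouth-shape targets from the spoken word associated with a sign instead of manual annotation — can be sketched as follows. The letter-to-viseme grouping below is a coarse hypothetical stand-in, not the class set used by the authors:

```python
# Derive weak mouth-shape (viseme) targets from a spoken word.
# The grouping below is a coarse, hypothetical letter->viseme mapping.
VISEME = {
    "b": "bilabial", "m": "bilabial", "p": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "o": "rounded", "u": "rounded",
    "a": "open", "e": "spread", "i": "spread",
}

def weak_targets(word):
    """Keep letters with a known viseme class; drop the rest.
    A mouthing often covers only part of the word, so a partial
    target sequence is still useful as weak supervision."""
    return [VISEME[ch] for ch in word.lower() if ch in VISEME]

targets = weak_targets("Bruder")  # German 'brother', as in Fig. 2
```

Such sequences can stand in for frame labels during training, with the network left to discover the alignment between frames and visemes.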
  • Sign Languages in Contact
    Editor's Introduction: Outlining Considerations for the Study of Signed Language Contact. David Quinto-Pozos. To my knowledge, this volume represents the first book-length collection of various accounts of contact between sign languages, and this brings with it excitement as well as the realization of challenges that lie ahead.1 As many researchers who are interested in language contact might suggest, it is exciting because these chapters contribute to our understanding of the structural and social aspects of contact and how such contact affects language in the visual-gestural modality. They provide us with information about Deaf communities throughout the world, as well as language data that speak to the ways in which contact is manifested in those communities. This global perspective allows us to examine contact situations in search of commonalities and recurring patterns. It also enables us to see how some outcomes of contact between sign languages might or might not fit the general patterns of contact that have been demonstrated for spoken languages. Perhaps as a way to balance the excitement about this topic, the sobering truth is that we know so little about contact between sign languages. As a result, we are faced with the task of documenting examples of such contact and the challenge of examining the effects of visual meaning creation on linguistic structures that occur in these contact situations. By focusing on this area of inquiry, we stand to gain much knowledge about how language works. The study of language contact among signed languages forces us to carefully consider how the visual-gestural modality of human communication influences language birth, development, change, and decay or loss from disuse.
  • Fingerspelling in American Sign Language
    FINGERSPELLING IN AMERICAN SIGN LANGUAGE: A CASE STUDY OF STYLES AND REDUCTION. A thesis by Deborah Stocks Wager, submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Master of Arts, Department of Linguistics, August 2012. Approved by supervisory committee members Marianna Di Paolo (Chair), Aaron Kaplan, and Sherman Wilcox.
    Abstract: Fingerspelling in American Sign Language (ASL) is a system in which 26 one-handed signs represent the letters of the English alphabet and are formed sequentially to spell out words borrowed from oral languages or letter sequences. Patrie and Johnson have proposed a distinction in fingerspelling styles between careful fingerspelling and rapid fingerspelling, which appear to correspond to clear speech and plain speech styles. The criteria for careful fingerspelling include indexing of fingerspelled words, completely spelled words, limited coarticulation, a slow signing rate, and even rhythm, while rapid fingerspelling involves lack of indexing, increased dropping of letters, coarticulation, a faster signing rate, and the first and last letter of the words being held longer. They further propose that careful fingerspelling is used for initial uses of all fingerspelled words in running signing, with rapid fingerspelling being used for second and further mentions of fingerspelled words.
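Patrie and Johnson's criteria read as a small rule set. A toy rule-based classifier over just two of them — completeness of the spelling and signing rate — where the 4 letters-per-second threshold is invented purely for illustration:

```python
# Toy style classifier from two of the careful/rapid criteria above.
# The rate threshold is a made-up illustrative value, not from the thesis.
def fingerspelling_style(letters_produced, letters_in_word, rate_lps):
    """letters_produced vs letters_in_word: was the word fully spelled?
    rate_lps: signing rate in letters per second (hypothetical units)."""
    complete = letters_produced == letters_in_word
    slow = rate_lps < 4.0
    return "careful" if complete and slow else "rapid"

style_a = fingerspelling_style(6, 6, 2.5)  # fully spelled, slow
style_b = fingerspelling_style(4, 6, 6.0)  # letters dropped, fast
```

A real classifier would also need the remaining criteria (indexing, coarticulation, rhythm), which are harder to quantify from video.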
  • MRID-Self-Paced-Manu
    Minnesota Registry of Interpreters for the Deaf: Self-paced Modules for Educational Interpreter Skill Development. These self-paced learning modules are designed to be tools that educational interpreters can use to enhance their skills in pursuit of certification. Successful completion of this module series does not guarantee the interpreter will successfully obtain certification from the National Registry of Interpreters for the Deaf (RID) or any other certifying body, nor does it guarantee employment in an educational setting. Any interpreter working with these modules must understand that the Minnesota RID, RID and/or Minnesota Department of Children, Families and Learning are not responsible for an interpreter's success or failure in taking any national certification test or in his or her pursuit of employment. MRID, 1996.
    Fingerspelling — Table of Contents:
    I. Goals and Objectives
    II. Equipment and Materials
    III. Time
    IV. Pre-Test
    V. Overview and Activities: A. Uses of fingerspelling; B. Types of fingerspelling; C. English to ASL; D. Transliteration; E. ASL to English
    VI. Post-Test
  • What Standards? the Need for Evidence-Based Auslan Translation Standards and Production Guidelines
    What Standards? The need for evidence-based Auslan translation standards and production guidelines. Gabrielle Hodge, Della Goswell, Lori Whynot, Stephanie Linder and Cathy Clark. Published June 2015. The operation of the Australian Communications Consumer Action Network is made possible by funding provided by the Commonwealth of Australia under section 593 of the Telecommunications Act 1997. This funding is recovered from charges on telecommunications carriers. deafConnectEd — Website: www.deafconnected.com.au, Telephone: 03 9269 8306. Australian Communications Consumer Action Network — Website: www.accan.org.au, Telephone: 02 9288 4000, TTY: 02 9281 5322. ISBN: 978-1-921974-27-4. Cover image: Richard Van Der Male, 2015. This work is copyright, licensed under the Creative Commons Attribution 3.0 Australia Licence. You are free to cite, copy, communicate and adapt this work, so long as you attribute the authors and "deafConnectEd, supported by a grant from the Australian Communications Consumer Action Network". To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/au/. This work can be cited as: Hodge, Gabrielle et al., 2015, What Standards? The need for evidence-based Auslan translation standards and production guidelines, Australian Communications Consumer Action Network, Sydney. Acknowledgements: This Report was funded by the Australian Communications Consumer Action Network (ACCAN) Grants Scheme, with additional funding support from the Australian Communication Exchange (ACE), the Deaf Society of New South Wales (DSNSW), Deaf Services Queensland (DSQ), Vicdeaf, and the Western Australian Deaf Society (WADS).
  • Variation in Mouthing Occurrence in Greek Sign Language (GSL); the Case of Register and Grammatical Class
    Variation in mouthing occurrence in Greek Sign Language (GSL): the case of register and grammatical class. By Ioanna Tsourma. A thesis submitted to The University of Birmingham for the degree of Master of Research, Department of English Language and Applied Linguistics, School of English Drama and American and Canadian Studies, University of Birmingham, 2018.
    Abstract: This thesis explores the mouthing occurrence rates across different registers (informative and narrative) and grammatical classes in Greek Sign Language (GSL). A general consensus has been reached dividing mouth activity into at least two subcategories of mouth actions: 'mouth gestures' and 'mouthings' (see edited volume by Boyes Braem & Sutton-Spence, 2001). Data consist of 45 minutes of video recordings from six native and near-native signers, transcribed using a software tool for annotating multimedia video recordings (ELAN). In an effort to gain some control over the influence of individual variation, all six signers have an example of SL production in both registers. A combination of semantic, syntactic and morphological criteria (Schwager & Zeshan, 2008) has been utilized in order to classify all manual signs for their grammatical class, in an effort to study the potential impact of grammatical class on mouthing occurrence.
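Occurrence rates of this kind reduce to counting, per register or per grammatical class, the share of manual signs accompanied by a mouthing. A sketch over invented ELAN-style annotation tuples (gloss, grammatical class, register, has_mouthing); the data and tier layout are hypothetical:

```python
# Mouthing occurrence rate per category from flat annotation tuples.
from collections import defaultdict

def mouthing_rates(annotations, by="class"):
    """Share of manual signs accompanied by a mouthing, grouped either
    by grammatical class (by='class') or by register (by='register')."""
    counts = defaultdict(lambda: [0, 0])  # category -> [with_mouthing, total]
    for gloss, gram_class, register, has_mouthing in annotations:
        key = gram_class if by == "class" else register
        counts[key][0] += int(has_mouthing)
        counts[key][1] += 1
    return {k: m / t for k, (m, t) in counts.items()}

# Invented example annotations, loosely in the spirit of an ELAN export.
data = [
    ("HOUSE",  "noun", "informative", True),
    ("GO",     "verb", "informative", False),
    ("MOTHER", "noun", "narrative",   True),
    ("RUN",    "verb", "narrative",   False),
    ("SCHOOL", "noun", "informative", True),
    ("THINK",  "verb", "informative", True),
]
by_class = mouthing_rates(data, by="class")
by_register = mouthing_rates(data, by="register")
```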
  • The Role of Mouthing and Fingerspelling and Their (Similar) Functions in the Russian Sign Language Corpus
    The role of mouthing and fingerspelling and their (similar) functions in the Russian Sign Language Corpus. Anastasia Bauer, University of Cologne, [email protected]. (AG 2: Linguistic diversity and linguistic modalities; Thursday, 05.03.2020, 09:00–09:30, ESA1 HG HS B.)
    Fingerspelling (FS) is a linguistic feature of sign languages in which letters from spoken language alphabets are represented by conventionalized handshapes (Sandler & Lillo-Martin 2006). Mouthings (M) are mouth movements that resemble the articulation of spoken words during signing (Boyes-Braem & Sutton-Spence 2001). FS and M are often described as outcomes of cross-modal language contact without parallels in spoken language situations and are unique to sign-spoken/written language contact (Valli & Lucas 1992). Although FS and M are diverse in their nature and have their origin in contact between three modalities (FS is located at the interface between orthography and sign; M is located at the interface between speech and sign), their functions appear to be very similar (Sandler & Lillo-Martin 2006). Following the same pattern, FS and M are used as means of representing spoken/written vocabulary in sign languages (Padden & Gunsauls 2003). Apart from this often cited function of filling in the lexical gaps or disambiguating the manual signs, we find the use of "redundant" M and FS. Redundant FS is used when there is already a lexical sign; redundant M corresponds exactly with the manual sign in terms of semantic congruency without the necessity for disambiguation. Interestingly, fingerspelled signs are reported to be accompanied by mouthings during signing (Johnston et al 2016).
  • Acquisition of English Literacy by Signing Deaf Children
    Acquisition of English Literacy by Signing Deaf Children. Barbara Gerner de Garcia, Professor, Department of Education Foundations and Research, Gallaudet University, USA; Ph.D. in Education. Ponto de Vista, Florianópolis, n. 05, p. 129-150, 2003.
    Abstract: Research on how signing deaf children learn to read and write provides no clear answer to how successful deaf readers acquire literacy in a spoken and written language. Providing early support and deaf-centered environments may prevent delays in the development of language and literacy. The study of strategies used by deaf adults that explicitly link signs to print is critical. Also important is the consideration of the role that Manually Coded English and phonological awareness may play in the development of English literacy. The research indicates that there are multiple paths, rather than a single one, to reading for deaf learners. Key words: deafness in children - education; deafness in children - language; deafness in children - writing.
    Abstract (translated from Portuguese): Research on how signing deaf children learn to read and write does not present clear answers about how deaf readers achieve success in acquiring literacy in the spoken and written language. Offering support and a deaf-centered environment can prevent delays in the development of language and literacy. The study of the strategies used by the deaf that explicitly relate signs to writing is critical. Also important are considerations about the role of Manually Coded English and of phonological awareness, which may be relevant in the development of English literacy.
  • Exploring Localization for Mouthings in Sign Language Avatars
    Exploring Localization for Mouthings in Sign Language Avatars. Rosalee Wolfe (1), Thomas Hanke (2), Gabriele Langer (2), Elena Jahn (2), Satu Worseck (2), Julian Bleicken (2), John C. McDonald (1), Sarah Johnson (1). (1) School of Computing, DePaul University, 243 S. Wabash Ave., Chicago IL 60604, USA. (2) Institute of German Sign Language and Communication of the Deaf (IDGS), University of Hamburg, Gorch-Fock-Wall 7, 20354 Hamburg, Germany. {thomas.hanke,gabriel.langer,elena.jahn,satu.worseck,julia.bleicken}@uni-hamburg.de
    Abstract: According to the World Wide Web Consortium (W3C), localization is "the adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market" (Ishida & Miller, 2015). One aspect of localizing a sign language avatar is creating a capability to produce convincing mouthing. For purposes of this inquiry we make a distinction between mouthings and mouth gesture (Crasborn et al., 2008). The term 'mouthings' refers to mouth movements derived from words of a spoken language while 'mouth gesture' refers to mouth movements not derived from a spoken language. This paper reports on a first step to identify the requirements for an avatar to be capable of mouthings in multiple signed languages. Keywords: avatar technology, mouthing, German Sign Language (DGS), lip sync.
    1. Mouthings in signed languages: The occurrence of mouthings has been reported for many signed languages (cf. for example Boyes-Braem & Sutton-Spence, 2001) and their origin in spoken language.
    2. History of lip sync technology: Portraying mouthings requires animating an avatar's mouth. Animating an avatar's mouth originates with the technique of creating speaking characters.
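Producing avatar mouthings ultimately means turning a mouth-shape sequence into timed keyframes for the animation system. A minimal sketch that spreads a viseme sequence evenly over a sign's duration; the viseme names and the even-spacing policy are simplifying assumptions, as real lip sync derives timing from phoneme durations:

```python
# Spread a viseme sequence evenly across a sign's duration.
# Viseme names are hypothetical; production lip sync would use
# per-phoneme timing rather than equal spacing.
def mouthing_keyframes(visemes, sign_duration_s):
    """Return (start_time_s, viseme) keyframes spanning the sign."""
    step = sign_duration_s / len(visemes)
    return [(round(i * step, 3), v) for i, v in enumerate(visemes)]

keys = mouthing_keyframes(["bilabial", "rounded", "open"], 0.9)
```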