From VLibras to OpenSigns: Towards an Open Platform for Machine Translation of Spoken Languages into Sign Languages

Tiago Araújo · Rostand Costa · Manuella Lima · Guido Lemos

June, 2018

1 Introduction

The World Health Organization estimates that approximately 466 million people worldwide have some level of hearing loss [31]. In Brazil, according to the 2010 census of the Brazilian Institute of Geography and Statistics (IBGE), there are approximately 9.7 million Brazilians with some type of hearing loss, representing around 5.1% of the population [13].

This relevant part of the population faces several challenges in accessing information, since it is generally available in written or spoken language. The main problem is that most deaf people are not proficient in reading and writing the spoken language of their country. One possible explanation is the fact that these languages are based on sounds [25]. A study carried out in 2005 with Dutch deaf persons aged 7 to 20 found that only 25% of them had a reading capacity equal to or greater than that of a 9-year-old child without disability [30].

One of the reasons for this difficulty is that the deaf communicate naturally through sign languages (SL), and spoken languages are just a "second language" to them. Each SL is a natural language, with its own lexicon and grammar, developed by each deaf community over time, just as each non-deaf community develops its spoken languages. Thus, there is no unique SL. Although there are some similarities between all these languages, each country usually has its own, some even more than one: by 2013, there were already over 137 sign languages cataloged around the world [4].

In order to allow adequate access to information for deaf people, one solution is to translate/interpret spoken contents into the associated SL. However, considering the volume and dynamism of information in some environments and platforms, such as the Web, performing this task with human interpreters is very difficult, given the high volume of content that is published daily on the Internet. In the context of Digital TV, support for sign languages is generally limited to a window with a human interpreter, which is displayed overlaying the video program. This solution has high operational costs for the generation and production of the contents (cameras, studio, staff, among others) and needs full-time human interpreters, which ends up restricting its use to a small portion of the TV programming. To address this question pragmatically, one of the most promising approaches is the use of machine translation (MT) tools from spoken languages into SLs.

Proportionally to the number of SLs, there are also several parallel initiatives to develop machine translation tools for SLs, usually focused on a single language/country [11,17,24]. Most of these text-to-sign¹ machine translation tools, although developed independently in their respective countries, have similarities in approach, scope and architecture. In general, the basic functionalities are present in some form in most of them. Some examples are the extraction of the text to be translated from audio and subtitles, the generation of a sign language video, the incorporation of the sign language videos into the original videos (e.g., on Digital TV), dactylology, and the rendering of signs by plugins and mobile applications. There are also similarities in the structure and behavior of components, such as the APIs and backends for communication, translation and control.

Digital Video Applications Lab (LAVID), Informatics Center (CI), Federal University of Paraíba (UFPB)
E-mail: {maritan,rostand,manuella,guido}@lavid.ufpb.br

¹ In this paper, we use the acronym text-to-sign to represent the translation of texts from spoken languages into sign languages.
The main points of variation are usually the specific machine translation mechanism and the sign language dictionary (the visual representation of the signs). Considering the use of avatars to represent the content in sign language, the process of creating the visual representation of signs is usually similar (e.g., a set of animations) and generally depends on the allocation of financial and human resources, regardless of the technology used.

To reduce this problem, the objective of this paper is to propose an open, comprehensive and extensible platform for text-to-sign translation in various usage scenarios and countries, including Digital TV. In the proposed platform, the common components share generic functionalities, including the creation and manipulation of the SL dictionaries. Only the translation mechanism and the dictionary itself are interchangeable, being specific to each SL. To accelerate the development, we used the Suíte VLibras² tools and components as a basis [2].

Our proposal is that the concentration of efforts and resources around a single solution can provide some cutting-edge gains, such as the definition of patterns for an industry standard and greater functional flexibility for the common components, and can also allow advances in the state of the art, such as the sharing of techniques and heuristics among translation mechanisms.

A single standardized platform with centralized processing of multiple sign languages can also serve as a catalyst for more advanced translation services, such as incorporating text-to-text conversion (in this paper, we use the acronym text-to-text to represent the translation of texts between spoken or written languages). Thus, we can integrate available translation mechanisms between spoken languages to allow the Deaf in Brazil or Spain to understand a text in English, for example.

Another contribution is to leverage the emergence of a common core machine translator that can be extended/adapted to other languages and regionalisms. Reducing the effort to make a new SL available may further enhance digital inclusion and accessibility in technologies such as Digital TV, the Web and Cinema, especially in the poorest countries.

² The Suíte VLibras is the result of a partnership between the Brazilian Ministry of Planning, Development and Management (MP), through the Information Technology Secretariat (STI), and the Federal University of Paraíba (UFPB), and consists of a set of tools (text, audio and video) for the Brazilian Sign Language (Libras), making computers, mobile devices and Web platforms accessible to the deaf. Currently, VLibras is used in several governmental and private sites, among them the main sites of the Brazilian government (brasil.gov.br), the Chamber of Deputies (camara.leg.br) and the Federal Senate (senado.leg.br). Further information can be obtained from http://www.vlibras.gov.br.

2 Machine Translation Platforms for Sign Languages

2.1 Sign Languages

The communication of people with hearing impairment occurs through formal gestural languages, called Sign Languages (SL). SLs are languages that use gestures and facial and body expressions, instead of sounds, in communication. They have a proper linguistic system, independent of spoken languages, that effectively fulfills the communication needs of the human being, because they have the same complexity and expressiveness as spoken languages [21].

SLs have properties common to other human languages [21], such as:

– Flexibility and Versatility: SLs present several possibilities of use in different contexts;
– Arbitrariness: the word is arbitrary because it is always a convention recognized by the speakers; sign languages also have words where there is no relation between form and meaning;
– Discontinuity: minimal differences between words and their meanings are distinguished through the distribution they present at different linguistic levels;
– Creativity/Productivity: there are infinite ways of expressing the same idea with different rules;
– Dual articulation: human languages present smaller articulation units, without meaning, that combined with others form units of meaning;
– Standard: there is a set of rules shared by a group of people;
– Structural Dependency: elements of the language cannot be combined at random; there is a structural dependence between them.
Generally, each country has its own sign language. The Brazilian Sign Language (Libras), the Portuguese Gestural Language (LGP), the Angolan Sign Language and the Mozambican Sign Language (LMS) are the SLs of Brazil, Portugal, Angola and Mozambique, respectively, just to name a few countries with the same oral linguistic base (i.e., Portuguese). As in spoken languages, there are also variations within each sign language itself, caused by regionalisms and/or other cultural differences.

It is relatively common to assume that sign languages are signed versions of their respective spoken languages. However, although there are similarities, SLs are autonomous languages, possessing singularities that distinguish them from spoken languages and from each other [21]. The relevant cultural differences that impact the modes of environmental representation are reflected in considerable differences between sign languages.

2.2 Machine Translation Platforms

Machine translation systems for sign languages are generally divided into three main classes: Rule-Based Machine Translation (RBMT), Statistical Machine Translation (SMT) and Example-Based Machine Translation (EBMT) [26]. One important challenge of these systems is to ensure that the content available to the deaf has the same consistency and quality as the original content, allowing adequate understanding of the message. Considering that these systems may be a viable alternative to minimize the marginalization of the deaf, especially through digital inclusion, several research efforts have been developed around the world focusing on the development and offering of operational platforms for MT from spoken languages into SLs.
With respect to machine translation for Brazilian Sign Language, there are four platforms available for machine translation of Brazilian Portuguese digital contents into Libras: Suíte VLibras [1,2], HandTalk [6], ProDeaf [20] and Rybená [22].

The Suíte VLibras is a set of open-source computational tools that translates digital content from Brazilian Portuguese (BP) into Libras, making the information available to deaf users on computers, TVs, mobile devices and Internet portals. An overview of its component architecture is given in Figure 1.

The VLibras main components are:

– VLibras-Plugin: a browser extension that allows the translation of selected texts into Libras;
– VLibras-Mobile: VLibras clients for mobile devices (both iOS and Android);
– VLibras-Desktop: a tool used to translate into sign language marked texts taken from applications running on personal computers;
– VLibras-Video: a portal that allows the translation into Libras of audio tracks or subtitles associated with videos;
– LibrasTV: an adaptation of VLibras for the Brazilian Digital TV system.

Also part of the Suíte VLibras is a backend service called VLibras-Service, which performs the machine translation services for the other components³ and also hosts the repository of 3D models of the Libras signs that are used by the avatar to render the accessible content after the translation. Currently, the Signs Dictionary of the Suíte VLibras has around 13,500 modeled signs, one of the largest bases of the kind in the world.

Finally, there is WikiLibras, a Web tool for the collaborative modeling of signs in Libras, which allows volunteers to participate in the process of building and expanding the signs dictionary through the specification of the movements of each sign.

³ Except for the VLibras-Desktop, which operates autonomously and offline, having a built-in machine translator and a copy of the Signs Dictionary.
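To give an idea of how such client components interact with a backend like VLibras-Service, the sketch below shows a minimal request/response cycle (text in, gloss out). The endpoint, payload and response fields are hypothetical placeholders for illustration only; the actual VLibras-Service API is not documented in this paper.

```python
# Minimal sketch of a client calling a remote text-to-gloss service.
# The URL, payload and response shape are hypothetical, not the real
# VLibras-Service API.
import requests

TRANSLATOR_URL = "https://example.org/api/translate"  # placeholder endpoint


def fetch_gloss(text: str, target_sl: str = "LIBRAS") -> str:
    """Send spoken-language text and return the gloss produced remotely."""
    response = requests.post(
        TRANSLATOR_URL,
        json={"text": text, "target": target_sl},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"gloss": "HOUSE GREEN ..."}
    return response.json()["gloss"]


if __name__ == "__main__":
    print(fetch_gloss("A casa é verde"))
```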

Fig. 1 Suíte VLibras Component Architecture

3 OpenSigns: A Proposal of a Multilingual Machine Translation Platform

From our experience in the development of the Suíte VLibras, we identified that a number of VLibras features were not dependent on the source and target languages, and were possibly applicable to other contexts. Among the potentially reusable technological tools, we can mention:

– Plug-ins for three browsers (Google Chrome, Mozilla Firefox and Safari) that allow texts on web pages to be captured, submitted to a remote text-to-gloss⁴ translator, and the resulting glosses to be rendered by an avatar.
– TV applications for the Brazilian Digital TV System that allow the presentation of sign language contents available on the Digital TV signal.
– Mobile applications for two platforms (Android and iOS) that allow the translation and rendering of signs from an input text, also using a remote text-to-gloss translator.
– Desktop applications for two operating systems (Windows and Linux) that allow contents from multiple sources on the user's computer to be translated to SL and rendered offline.
– Extraction mechanisms for texts from audio and videos for text-to-gloss translation.
– A web portal for video translation, resulting in a new video with a sign language window synchronized with the original audio.

⁴ We use the acronym text-to-gloss to represent the translation of texts in spoken languages into a textual representation in sign language, called gloss.

An integrated set of tools like this for machine translation using avatars is not easy to develop, and we believe there are few initiatives in the world with this reach and penetration, while also making available a dictionary composed of more than 13,500 Brazilian Sign Language (Libras) 3D signs.

Based on these resources, our proposal is to offer a multilingual platform for text-to-sign machine translation, which accepts several spoken languages as input and performs a machine translation into several target sign languages. With the effort of generalization carried out in this work, the VLibras framework could become available to be extended and used in other countries and languages.

Thus, the main focus of our work was the transformation of a complete machine translation platform from Brazilian Portuguese (written or spoken) into LIBRAS, called VLibras, into an extensible and multilingual machine translation platform, called OpenSigns.

3.1 Building an OpenSigns Platform Prototype

We started from an initial assessment that the majority of the Suíte VLibras components have generic features that can be shared among several sign languages with minor changes. The main changes needed are aimed at making the components that access the translation services "agnostic", i.e., independent of the source and target languages. In addition, we also focused on enabling the solution to support multiple machine translation engines and multiple sign dictionaries.

Figure 3 illustrates the architecture of the VLibras-Core [2]. Initially, it only translated content from Brazilian Portuguese (BP) to Libras. Figure 4 presents an adapted version of this architecture, which includes support for multiple source and target languages. We call it OpenSigns-Core.

According to Figure 4, the components highlighted in orange and red represent the points of variance and have been rebuilt to support multiple source and target languages. The components in blue had their basic behavior maintained; the minor adjustments required are related to the generalization and internationalization of their interfaces. We detail these changes in Sections 3.1.1 and 3.1.2.

3.1.1 Multilingual Text-to-Gloss Translator

Originally, the basic role of the Suíte VLibras MT component was to receive texts in BP and convert them into a textual representation in Libras, called gloss. Since most of the related work uses a rule-based translation approach, including VLibras, we chose this translation approach for the OpenSigns MT component. The main reason for this choice is the difficulty in finding a large and representative bilingual corpus covering several domains for all spoken language-sign language pairs. In addition, SLs are visuospatial languages, which means that the gloss representation is intermediate: there is no formal, structured and widely disseminated written form, which hinders the establishment of natural writing conventions and makes it difficult to implement a statistical or a neural MT component, for example.

However, it is possible to use other MT approaches. Since this component has a restricted and well-defined function (i.e., translating a text from a spoken language into a sign language gloss), it could be replaced by another MT component (e.g., a neural MT implementation or a statistical MT implementation, among others) with little or no modification to the overall architecture of the system.
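One way to picture this "agnostic", replaceable design is a small interface that any MT engine can implement, regardless of whether it is rule-based, statistical or neural. The sketch below is illustrative only; the class and method names are assumptions and do not reflect the actual OpenSigns code base.

```python
# Illustrative sketch of a replaceable text-to-gloss component.
# Names are hypothetical, not taken from the OpenSigns implementation.
from abc import ABC, abstractmethod


class TextToGlossTranslator(ABC):
    """Contract shared by every engine: spoken-language text in, gloss out."""

    @abstractmethod
    def translate(self, text: str) -> str:
        ...


class RuleBasedTranslator(TextToGlossTranslator):
    def __init__(self, rules):
        self.rules = rules  # language-pair-specific grammatical rules

    def translate(self, text: str) -> str:
        gloss = text.upper()          # placeholder for the real rule pipeline
        for rule in self.rules:
            gloss = rule(gloss)
        return gloss


class NeuralTranslator(TextToGlossTranslator):
    def __init__(self, model):
        self.model = model            # e.g. a trained seq2seq model

    def translate(self, text: str) -> str:
        return self.model.predict(text)


def render(translator: TextToGlossTranslator, text: str) -> str:
    # The rest of the platform depends only on the abstract contract,
    # so engines can be swapped without architectural changes.
    return translator.translate(text)
```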



Fig. 2 VLibras Applications: (a) VLibras Desktop, (b) VLibras Plugin, (c) VLibras Mobile and (d) VLibras Video

The four main steps of the VLibras translation process are as follows (see Figure 5) [15]:

– Preprocessing: in this step, the input text is separated into sentences and then into tokens.
– Classification: two classification approaches are performed: morphological, where we identify the grammatical classes of the tokens in the sentence; and syntactic, which groups lexical items into multiple syntactic units.
– Morphological and Syntactic Adequacy: this process is important because the Libras structure is generally different from that of Brazilian Portuguese. In addition, some morphological elements, such as articles and prepositions, are not represented in Libras. It is also necessary to treat verbal tense, to identify common nouns of two genders, among others.
– Postprocessing: this step refines the sequence of glosses to improve the quality of the translation. Some examples are the treatment of numbers, plurals and synonyms, among others.
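As a rough illustration of these four steps, the toy sketch below chains them for a Portuguese-to-gloss example. The tag set, the dropped-word list and the resulting gloss are simplified placeholders and do not correspond to the actual VLibras rules.

```python
# Toy sketch of the four-step rule-based pipeline: preprocessing,
# classification, morphological/syntactic adequacy, postprocessing.
# Tags, rules and output are simplified; not the real VLibras behavior.
import re

ARTICLES_PREPOSITIONS = {"o", "a", "os", "as", "um", "uma", "de", "em", "para"}


def preprocess(text: str) -> list[str]:
    """Split the input into sentences and then into lowercase tokens."""
    sentences = re.split(r"[.!?]+", text)
    return [tok for sent in sentences for tok in sent.lower().split()]


def classify(tokens: list[str]) -> list[tuple[str, str]]:
    """Assign a (very naive) grammatical class to each token."""
    return [(t, "FUNC" if t in ARTICLES_PREPOSITIONS else "LEX") for t in tokens]


def adequacy(tagged: list[tuple[str, str]]) -> list[str]:
    """Drop elements that are not represented in the gloss, e.g. articles and prepositions."""
    return [tok for tok, tag in tagged if tag != "FUNC"]


def postprocess(glosses: list[str]) -> str:
    """Refine the gloss sequence (here only uppercasing, the usual gloss convention)."""
    return " ".join(g.upper() for g in glosses)


if __name__ == "__main__":
    text = "A casa de Maria é verde."
    print(postprocess(adequacy(classify(preprocess(text)))))
    # -> "CASA MARIA É VERDE" (illustrative only)
```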

Fig. 3 Internal Architecture of VLibras-Core

Fig. 4 Internal Architecture Adapted to OpenSigns

As can be seen in Figure 4, one novelty of the OpenSigns architecture is the incorporation of a previous text-to-text conversion. In this new context, the text-to-gloss translation process now consists of two distinct integrated processes: (1) a text-to-text machine translation, which converts the input text into the spoken language associated with the target sign language (e.g., from English to Brazilian Portuguese); and (2) a text-to-gloss translation specific to the target sign language (e.g., from Portuguese to Libras gloss). The internal organization of the components of the generic text-to-gloss translator of OpenSigns is illustrated in Figure 6.

It is important to note that the text-to-gloss translation of OpenSigns uses a machine translation algorithm similar to that of VLibras, containing the four original steps (see Figure 5). However, the architectural pattern of the translator has been changed to allow the development of several concrete implementations, so that the singularities of each sign language can be addressed punctually, with maximum reuse.

The configuration of the concrete components used in the text-to-gloss machine translation for each supported pair of languages is defined beforehand by a template and applied by the OpenSigns translator at run time. This flexibility allows the implementation of specific steps addressed to a particular sign language, whereas others can be shared by several sign languages. For example, the preprocessing and post-processing steps, which perform common actions, can be referenced in more than one template, whereas the specific morphological and syntactic classifications can be inherited and adapted, or fully reimplemented.

In some steps, whose behavior is usually driven by external configurations such as models and translation rules, we can reference a generic concrete implementation in the template and make adjustments only in the configurations. As shown in Figure 7, this strategy makes it possible to conciliate several different scenarios for the morphological adequacy step, allowing existing sign language implementations to be adapted and integrated into OpenSigns, and new implementations to be created using generic classifiers and adapters offered by the platform.
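One way to picture these templates is a per-language-pair mapping from pipeline step to concrete implementation, where shared steps are referenced by several templates and only the adequacy step is specialized. The structure below is an assumption made for illustration; it is not the actual OpenSigns template format.

```python
# Hypothetical template mechanism: each language pair maps pipeline steps
# to concrete implementations that are resolved at run time.

def shared_preprocess(text):          # referenced by every template
    return text.lower().split()

def shared_postprocess(glosses):
    return " ".join(g.upper() for g in glosses)

def libras_adequacy(tokens):          # specific to pt -> Libras
    return [t for t in tokens if t not in {"o", "a", "de"}]

def asl_adequacy(tokens):             # specific to en -> ASL
    return [t for t in tokens if t not in {"the", "a", "of"}]

TEMPLATES = {
    ("pt", "LIBRAS"): {"pre": shared_preprocess, "adequacy": libras_adequacy, "post": shared_postprocess},
    ("en", "ASL"):    {"pre": shared_preprocess, "adequacy": asl_adequacy,    "post": shared_postprocess},
}

def translate(text, source, target_sl):
    steps = TEMPLATES[(source, target_sl)]      # template resolved at run time
    return steps["post"](steps["adequacy"](steps["pre"](text)))

print(translate("The house of Mary", "en", "ASL"))   # -> "HOUSE MARY"
```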

Fig. 5 Internal Organization of the Suíte VLibras Translator

Fig. 6 Internal Organization of the OpenSigns Translator

Fig. 7 Implementation of the Morphological Adequacy Step

3.1.2 Multilingual Signs Repository

Regardless of the translation approach used in the text-to-gloss MT, the quality of the text-to-sign translation generally also depends on the available sign vocabulary [14]. In this type of machine translator, the signing is usually done by 3D avatars that use these signs to render the message. In the absence of a sign, the avatar's default behavior is to resort to fingerspelling (or dactylology⁵), which makes the content harder to understand and requires more time for signing.

⁵ Fingerspelling (or dactylology) is the communication in sign language of a word or other expression by rendering its written form letter by letter in a manual alphabet (definition extracted from www.dictionary.com).

The process of building the animations (or videos) of the signs is usually one of the major challenges in the development of an MT platform for sign languages. Generally, each sign needs to be interpreted by a human interpreter, digitally captured, adapted to the model of an avatar, revised, and finally encapsulated in a particular representation pattern. This process is very expensive and time-consuming, because the creation of a correct and extensive signs dictionary with tens of thousands of signs involves many technological and logistical resources, as well as the coordinated effort of a multidisciplinary team.
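The dictionary lookup and the fingerspelling fallback described above can be pictured as in the sketch below. The sign-entry structure and the manual-alphabet asset names are simplified assumptions, not the VLibras data model.

```python
# Sketch of rendering a gloss sequence with a signs dictionary and a
# fingerspelling (dactylology) fallback for out-of-vocabulary glosses.
# The dictionary structure and asset paths are assumptions for illustration.
SIGNS_DICTIONARY = {"CASA": "anim/casa.blend", "VERDE": "anim/verde.blend"}


def render_gloss(gloss: str) -> list[str]:
    """Return the animation assets the avatar should play for one gloss."""
    if gloss in SIGNS_DICTIONARY:
        return [SIGNS_DICTIONARY[gloss]]
    # Fallback: spell the word letter by letter with the manual alphabet,
    # which is harder to follow and takes longer to sign.
    return [f"alphabet/{letter}.blend" for letter in gloss if letter.isalpha()]


if __name__ == "__main__":
    for g in ["CASA", "MARIA", "VERDE"]:
        print(g, "->", render_gloss(g))
```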

To assist in this process, the Suíte VLibras already provides a specific environment for building and managing its Libras dictionary, called WikiLibras [2]. WikiLibras aims to facilitate the implementation of some of the above steps, performed by teams (volunteer or not), and also to allow the distributed management of the workflow of animation, revision and distribution of the signs.

A schematic view of WikiLibras is presented in Figure 8. It offers graphical interfaces for sign editing, which simplify some of the animation tasks, reducing the level of expertise required from the animators. In addition, it also allows the abstraction of the sign description language used internally by the components of the Suíte VLibras.

The same features apply to the process of defining the formal translation rules.

Some parts of the WikiLibras features are sign language agnostic, i.e., they are not exclusive to Libras and can be applied to create signs in other SLs. However, we had to perform some changes to the tool for use in OpenSigns. These changes are highlighted in orange in Figure 9 and focus essentially on making the repository multilingual, both from the point of view of storage and of referencing signs that belong to multiple dictionaries.

3.2 Proof of Concept

To develop a proof of concept of the proposed platform, we initially developed a prototype able to translate texts in several source spoken languages into three target SLs: Libras, LSE and ASL.

The text-to-text pre-translation module was developed using the Google Cloud Translation API⁶, to convert texts in any spoken language into Brazilian Portuguese, Spanish or English, depending on the target sign language.

⁶ This API is able to identify the input language of a sentence and translate it automatically into a target spoken language (cloud.google.com/translate).
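A pre-translation call of this kind could look roughly like the sketch below, assuming the google-cloud-translate v2 Python client and configured credentials; the mapping from target sign language to pivot spoken language follows the prototype description above, while the function name is ours.

```python
# Sketch of the text-to-text pre-translation step using the Google Cloud
# Translation API (v2 client library assumed; requires valid credentials).
from google.cloud import translate_v2 as translate

# Spoken language associated with each target sign language in the prototype.
PIVOT_LANGUAGE = {"LIBRAS": "pt", "LSE": "es", "ASL": "en"}


def pre_translate(text: str, target_sl: str) -> str:
    """Convert input text in any spoken language into the pivot spoken language."""
    client = translate.Client()
    result = client.translate(text, target_language=PIVOT_LANGUAGE[target_sl])
    # The API also detects the source language automatically.
    return result["translatedText"]


# Example (requires network access and credentials):
# pre_translate("The house is green", "LIBRAS")  # -> "A casa é verde"
```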
Then, the text-to-gloss translation module was adapted to support the translation of sentences in Brazilian Portuguese (BP), English or Spanish into a sequence of glosses in Libras, ASL or LSE, respectively. The tokenization (i.e., the separation of the words of each sentence) in English or Spanish was implemented specifically for each of them, taking into account their own structural characteristics.

We also adapted the process of generating sentence syntax trees for English and Spanish in the new translation components. Figure 10 shows an example of the syntactic trees for the same sentence in BP, English and Spanish, respectively. Thus, before the generation of the syntactic tree, the proper labels of English and Spanish are replaced by their grammatical equivalents in BP, if any. Such a temporary artifice used in the prototype may have some impact on the generation of the syntactic tree of some sentences, but it does not make the translation process unfeasible.

The text-to-gloss translation is based on a set of grammatical rules specific to each language treated in the prototype. Such rules are aimed at the adequacy of the morphosyntactic divergences between the spoken language and the associated target sign language. All rules, whether morphological or syntactic, are modeled in XML files. Basically, each rule contains information about the grammar class it is intended for and the action that should be taken if the rule applies to the sentence. The application of syntactic rules implies updating the syntactic tree in order to keep it consistent with the modifications made.
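The rule files themselves are not published in this paper, so the fragment below only illustrates the idea of a rule that names a grammar class and the action to apply; both the XML layout and the application logic are assumptions made for illustration.

```python
# Illustration of applying morphosyntactic rules described in XML.
# The rule schema below is hypothetical; the actual format is not published here.
import xml.etree.ElementTree as ET

RULES_XML = """
<rules>
  <rule class="ART" action="drop"/>          <!-- articles are not represented -->
  <rule class="PREP" action="drop"/>         <!-- neither are most prepositions -->
  <rule class="VERB" action="infinitive"/>   <!-- verbs are signed in a base form -->
</rules>
"""


def load_rules(xml_text: str) -> dict[str, str]:
    root = ET.fromstring(xml_text)
    return {r.get("class"): r.get("action") for r in root.findall("rule")}


def apply_rules(tagged_tokens, rules):
    """tagged_tokens: list of (token, grammar_class) pairs."""
    output = []
    for token, cls in tagged_tokens:
        action = rules.get(cls)
        if action == "drop":
            continue
        if action == "infinitive":
            token = token.rstrip("s")   # toy stand-in for real verb treatment
        output.append(token.upper())
    return output


rules = load_rules(RULES_XML)
print(apply_rules([("a", "ART"), ("casa", "N"), ("fica", "VERB"), ("em", "PREP"), ("recife", "N")], rules))
# -> ['CASA', 'FICA', 'RECIFE']  (illustrative only)
```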

Fig. 8 WikiLibras Architecture

Fig. 9 WikiSigns Architecture

The adaptations made from BP to Libras also use auxiliary dictionaries and algorithms for the treatment of special plurals. In the morphological adaptations to English, auxiliary dictionaries are also used for the verification of some specific cases, as well as exclusive modules for verbal treatment and treatment of plurals, in both cases using algorithms based on WordNet⁷. In this first version of the prototype, in the translation from English to ASL and from Spanish to LSE, only morphological adequacy is performed.

⁷ wordnet.princeton.edu/citing-wordnet

The post-processing step implemented in the OpenSigns prototype refines the translation in a way specific to each of the three SLs. Some examples of operations performed in this step are: substitution of words or parts of the sentence by a synonym, substitution of numbers by numerals, and identification of compound words, among others.

Fig. 10 Example of a Sentence Syntactic Tree in BP, English and Spanish

4 Final Remarks

In this work, we present the results of a research effort whose objective is the development of a multilingual platform for "text-to-sign" machine translation, i.e., a unique ecosystem that accepts several spoken languages as input and performs machine translation into several output sign languages. The proposed platform is based on several common existing components, derived from the Suíte VLibras, to which components supporting specific mechanisms of different sign languages have been added.

A prototype, based on an extension of the Suíte VLibras, was developed with the aim of verifying that the basic concepts of the proposed platform are feasible. In this prototype, additional components have been implemented to support the translation of texts in any language into LIBRAS.

The OpenSigns MT component implemented in our prototype is rule-based, since most of the related work uses a rule-based translation approach, including VLibras, and due to the difficulty in finding a large and representative bilingual corpus of several domains for all spoken language-sign language pairs. However, since this component has a restricted and well-defined function (i.e., translating a text from a spoken language into a sign language gloss), we could replace it by a neural or statistical MT component with little or no modification to the overall architecture of the system.

In Section 4, we present some tests carried out with the implemented prototype addressing the two research questions presented in Section 1. These tests show that it is possible to combine a text-to-text MT approach with a text-to-sign translation in order to support translation from several spoken languages. However, as expected, the translation of a spoken language into a sign language from another country (e.g., from English to Libras) tends to produce worse results than a translation performed from a spoken language to the sign language of the same country (e.g., from BP to Libras), but better than a direct translation. We also performed some tests to assess whether our proposed text-to-sign MT approach can be extended to support other target SLs (e.g., ASL). These tests show that our extended MT approach for ASL also achieved better WER and BLEU values than a direct translation approach.
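The paper does not state which tooling produced these WER and BLEU figures; the snippet below merely shows one common way such gloss-level scores can be computed, using the jiwer and NLTK packages as stand-ins for whatever evaluation setup was actually used.

```python
# Illustrative gloss-level evaluation with WER and BLEU.
# Tool choice (jiwer, NLTK) is ours; the paper does not specify its setup.
from jiwer import wer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "IX-3p HOUSE GREEN"          # gloss produced by a human interpreter
hypothesis = "HOUSE GREEN"               # gloss produced by the MT prototype

print("WER :", wer(reference, hypothesis))
print("BLEU:", sentence_bleu([reference.split()], hypothesis.split(),
                             smoothing_function=SmoothingFunction().method1))
```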

We believe it is fundamental to stimulate the community of researchers and developers who work with MT for sign languages to collaborate. As we are distinct groups, cooperation is only possible with the definition of standards for the architecture and some system components. One of the components that is critical for the evolution of the results in the area is the SL dictionary. For us, the dictionary must be a resource shared by the different MT systems. This would imply a more accelerated increase in the number of signs, quality, and convergence in the use of signs. It is therefore vital to accelerate the definition and expansion of the sign languages themselves.

This strategy was adopted by the Suíte VLibras authors, and we follow the same strategy in the research described in this article, with the aim of encouraging and strengthening cooperation and accelerating the ordered growth of the sign dictionaries.

As future and complementary work, a deeper investigation with deaf users of other countries will be done to reinforce these results. Thus, a major goal of this work is the dissemination of the OpenSigns project. Such disclosure is critical because full validation depends on researchers, interpreters, and users from other countries using the platform.

References

1. Araújo TMU (2012) Uma solução para geração automática de trilhas em Língua Brasileira de Sinais em conteúdos multimídia (A solution for automatic generation of Brazilian Sign Language tracks in multimedia contents). PhD Thesis, Universidade Federal do Rio Grande do Norte, Brazil.
2. Araújo TMU, Ferreira FLS, Silva DANS, et al (2014) An approach to generate and embed sign language video tracks into multimedia contents. Information Sciences. 281:762-780.
3. Corada (2016) Sign 4 Me - A Signed English Translator App. http://www.corada.com/products/sign-4-me-app. Accessed 30 Nov 2016.
4. Ethnologue: Languages of the World (2016) Deaf sign languages. http://goo.gl/EbapKu. Accessed 30 Nov 2016.
5. Freitas C, Rocha P, Bick E (2008) Floresta sintá(c)tica: bigger, thicker and easier. In: Teixeira A, de Lima VLS, Oliveira LCO, Quaresma P (eds.) Computational Processing of the Portuguese Language - Lecture Notes in Computer Science, Springer Verlag, Aveiro, pp. 216-219.
6. HandTalk (2015) Hand Talk. http://www.handtalk.me. Accessed 30 Nov 2016.
7. HETAH (2016) Fundación HETAH - Herramientas Tecnológicas para Ayuda Humanitaria (HETAH Foundation - Technological Tools for Humanitarian Aid). http://hetah.net/es. Accessed 29 Nov 2016.
8. Huenerfauth M (2004) A multi-path architecture for machine translation of English text into American Sign Language animation. Proceedings of the Student Research Workshop at HLT-NAACL. 25-30.
9. Huenerfauth M (2005a) American Sign Language generation: multimodal NLG with multiple linguistic channels. Proceedings of the ACL Student Research Workshop. 37-42.
10. Huenerfauth M (2005b) Representing coordination and non-coordination in an American Sign Language animation. Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility. 44-51.
11. Huenerfauth M (2008) Generating American Sign Language animation: overcoming misconceptions and technical challenges. Universal Access in the Information Society. 6:419-434.
12. Huenerfauth M, Zhao M, Gu E, Allbeck J (2007) Evaluating American Sign Language generation through the participation of native ASL signers. Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS). 211-218.
13. IBGE (2010) Censo demográfico brasileiro do IBGE 2010 (IBGE Brazilian Census of 2010). Brazilian Institute of Geography and Statistics. http://goo.gl/e5t6fS. Accessed 01 Dec 2017.
14. Lima MACB (2015) Tradução Automática com Adequação Sintático-Semântica para LIBRAS (Machine Translation with Syntactic-Semantic Adequacy for LIBRAS). Master Thesis, Universidade Federal da Paraíba, Brazil.
15. Lima MACB, Araújo TMU, Oliveira ES (2015) Incorporation of Syntactic-Semantic Aspects in a LIBRAS Machine Translation Service to Multimedia Platforms. Proceedings of the 21st Brazilian Symposium on Multimedia and the Web (WebMedia 2015). 133-140.
16. López-Ludeña V, González-Morcillo C, López JC, Ferreiro E, Ferreiros J, San-Segundo R (2014a) Methodology for developing an advanced communications system for the Deaf in a new domain. Knowledge-Based Systems. 52:240-252.
17. López-Ludeña V, González-Morcillo C, López JC, Ferreiro E, Ferreiros J, San-Segundo R (2014b) Translating bus information into sign language for deaf people. Engineering Applications of Artificial Intelligence. 32:258-269.
18. López-Ludeña V, San-Segundo R, Morcillo CG, López JC, Pardo Muñoz JM (2013) Increasing adaptability of a speech into sign language translation system. Expert Systems with Applications. 40:1312-1322.
19. López-Ludeña V, San-Segundo R, Martín R, Sánchez D, García A (2011) Evaluating a speech communication system for deaf people. IEEE Latin America Transactions. 9:565-570.
20. ProDeaf (2016) ProDeaf WebLibras. http://prodeaf.net. Accessed 30 Nov 2016.
21. Quadros RM, Pizzio AL, Rezende PLF (2009) Língua Brasileira de Sinais I (Brazilian Sign Language I). Coleção Letras Libras. Universidade Federal de Santa Catarina.
22. Rybená (2015) Rybená. http://portal.rybena.com.br/site-rybena. Accessed 30 Nov 2016.
23. Signslator (2016) Signslator, el primer traductor de lengua de signos (Signslator, the first sign language translator). http://www.signslator.com. Accessed 30 Nov 2016.
24. Shoaib U, Ahmad N, Prinetto P, Tiotto G (2014) Integrating MultiWordNet with Italian Sign Language lexical resources. Expert Systems with Applications. 41:2300-2308.
25. Stumpf MR (2000) Língua de Sinais: escrita dos surdos na Internet (Sign Language: Deaf Writing on the Internet). Proceedings of the V Congresso Ibero-Americano de Informática na Educação (RIBIE). Chile.
26. Su HY, Wu CH (2009) Improving Structural Statistical Machine Translation for Sign Language with Small Corpus Using Thematic Role Templates As Translation Memory. IEEE Transactions on Audio, Speech, and Language Processing. 17:1305-1315. http://dx.doi.org/10.1109/TASL.2009.2016234.
27. Tschare G (2016) The Sign Language Avatar Project. Innovative Practice 2016. http://goo.gl/5RCkAc. Accessed 30 Nov 2016.
28. van Zijl L, Barker D (2003) South African Sign Language Machine Translation System. Proceedings of the 2nd International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa. 49-52. Cape Town, South Africa.
29. van Zijl L, Combrink A (2006) The South African Sign Language Machine Translation Project: issues on non-manual sign generation. Proceedings of the 2006 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries. 127-134. Gordon's Bay, South Africa.
30. Wauters LN (2005) Reading comprehension in deaf children: The impact of the mode of acquisition of word meanings. EAC, Research Centre on Atypical Communication, Radboud University.
31. WHO (2017) Deafness and hearing loss, Fact sheet. World Health Organization. http://www.who.int/mediacentre/factsheets/fs300/en. Accessed 01 Dec 2017.