
Angelika Ribisel, BA

Deafness and Technology:
The effect of the convergence of technology on the lives of hearing impaired people

MASTER’S THESIS

Submitted in Partial Fulfillment of the Requirements for the Degree of

Master of Science

Studies of Media and Convergence Management
Alpen-Adria-Universität Klagenfurt
Faculty of Humanities

Supervisor: Ass.Prof. Mag. Dr. Marlene Margit Doris Eva Hilzensauer
Center for Sign Language and Deaf Communication

August 2015

Affidavit

I hereby declare in lieu of an oath that
- the submitted academic paper is entirely my own work and that no auxiliary materials have been used other than those indicated;
- I have fully disclosed all assistance received from third parties during the process of writing the paper, including any significant advice from supervisors;
- any contents taken from the works of third parties or my own works that have been included either literally or in spirit have been appropriately marked and the respective source of the information has been clearly identified with precise bibliographical references (e.g. in footnotes);
- to date, I have not submitted this paper to an examining authority either in Austria or abroad; and
- the digital version of the paper submitted for the purpose of plagiarism assessment is fully consistent with the printed version.

I am aware that a declaration contrary to the facts will have legal consequences.

(Signature) (Place, date)


Table of Contents
1. Introduction ...... 7
2. Disability ...... 9
2.1. Disability in Austria ...... 10
2.2. Disability in other countries ...... 12
2.2.1. Situation in the United States ...... 12
2.2.2. Situation in Germany ...... 13
2.2.3. Comparison of the United States and Germany ...... 13
3. Deafness ...... 14
3.1. Aspects of hearing loss ...... 15
3.1.1. Types of hearing loss ...... 15
3.1.2. Degrees of hearing loss ...... 16
3.1.3. Configuration of hearing loss ...... 16
3.1.4. Other descriptors associated with hearing loss ...... 17
3.2. Uppercase Deaf vs. lowercase deaf ...... 17
3.3. Deaf Ethnicity and Deafhood ...... 17
4. Sign Language ...... 18
4.1. Austrian Sign Language ...... 19
4.2. Sign Language Grammar ...... 20
4.3. Notation systems ...... 20
4.3.1. HamNoSys ...... 20
4.3.2. Movement-Hold Model ...... 22
4.3.3. Stokoe ...... 23
4.3.4. GLOSS Notation ...... 24
4.3.5. SignWriting ...... 24
5. Technology ...... 26
5.1. Technological convergence ...... 26
5.2. Assistive Technology ...... 27
5.3. Markup Languages: XML ...... 28
5.3.1. General information about XML ...... 28
5.3.2. XML Document Object Model ...... 29
5.3.3. Elements and Attributes ...... 30
5.4. Markup Languages: SiGML ...... 31
5.4.1. Manual and Non-manual Signing ...... 32
5.4.2. Gestures written in SiGML ...... 32

6. Deafness and ICT ...... 34
7. Web Accessibility ...... 37
7.1. Functionalities of web accessibility ...... 38
7.2. Sign Language Interpreter Module ...... 39
8. Smartphone and Web Applications for deaf people ...... 41
8.1. SpreadTheSign ...... 41
8.1.1. Implementation details ...... 43
8.1.2. User perspective of STS ...... 44
8.2. TapTap ...... 45
8.3. UNI by MotionSavvy ...... 46
8.4. SignMedia SMART ...... 47
9. Websites for deaf people ...... 50
9.1. Ledasila ...... 50
9.1.1. Linguistic concept of the database ...... 51
9.1.2. Technical aspects ...... 52
9.2. Gebärdenwelt ...... 52
10. Gadgets ...... 53
10.1. Conversor Pro ...... 54
10.2. ZBand ...... 55
11. Avatars ...... 55
11.1. Sign Language Animation ...... 56
11.2. Early development of Sign Language Avatars ...... 57
11.3. Recent and future developments ...... 58
11.3.1. The eSIGN Project ...... 58
11.3.2. ViSiCAST ...... 59
12. Models and Theories of Individual Acceptance ...... 61
12.1. Theory of Reasoned Action (TRA) ...... 61
12.2. Technology Acceptance Model ...... 62
12.3. Diffusion of innovation theory ...... 64
13. Qualitative Interviews ...... 65
13.1. Hypotheses ...... 65
13.2. Conduction of interviews ...... 66
13.3. Results and implications for the future ...... 67
13.3.1. First hypothesis ...... 68
13.3.2. Second hypothesis ...... 70
13.3.3. Third hypothesis ...... 72

13.3.4. Fourth hypothesis ...... 74
13.3.5. Implications for future research ...... 76
14. Conclusion ...... 77
15. References ...... 81

Table of Figures

Figure 1: Distribution of disabilities in Austria (Leitner, 2007, 1134) ...... 10
Figure 2: Distribution of disabilities in the US (United States Census Bureau, 2010) ...... 12

Figure 3: Handshapes transformed into HamNoSys (Hanke, 2004) ...... 21

Figure 4: Examples for HamNoSys notation (Wachsmuth & Sowa 2002, 148)...... 22

Figure 5: Movement-Hold model of the word "week" (Valli & Lucas, 2000, 38) ...... 23

Figure 6: Comparison of Stokoe and HamNoSys (LinkedIn Corporation, 2015) ...... 24

Figure 7: Signs written down in SignWriting (Deaf Action Committee on SignWriting, n.d.) ...... 25

Figure 8: The channels of the SLI Module (Debevc et al, 2010, 191) ...... 40

Figure 9: Search result for the word "Büro" at the website of SpreadTheSign (European Sign Language Center, 2012) ...... 42

Figure 10: Mobile app version of SpreadTheSign (European Sign Language Center, 2012) ...... 43

Figure 11: The illustration for the word "blue" (European Sign Language Center, 2012) ...... 45
Figure 12: MotionSavvy's UNI translates sign to speech and speech to text (Stemper, 2015) ...... 46

Figure 13: Description and video display of the term "Pre-Production" (Hope, n.d.) ...... 49

Figure 14: Search results for the word "Büro" in Ledasila (Alpen-Adria-Universität Klagenfurt, 2015) 51

Figure 15: Components of the Conversor Pro (Conversor Ltd., 2015) ...... 54

Figure 16: SiGML translation of the HamNoSys transcription (Verlinden et al, 2005, 4760) ...... 57

Figure 17: Synthetic signing animation (Kennaway, 2003) ...... 59

Figure 18: Theory of Reasoned Action (Terry et al, 1993, 9) ...... 62
Figure 19: TAM (Legris et al, 2003) ...... 63

Figure 20: Categories of adopters (based on Rogers, 2003) ...... 64

Tables

Table 1: Clark's table for the degree of hearing loss ...... 16

1. Introduction

During the last few years, accessibility has become a catchphrase when discussing the lives of people with disabilities. Around the world, many people have disabilities, and because of that, they are often not able to perform activities of daily living without the support of others. Frequently, these people have to rely on themselves to get things done because they are left alone by their surroundings. Thus, they have to find their own solutions to problems in order to achieve their goals. Through the emergence of technology, the disabled population gets the opportunity to increasingly take their lives into their own hands and gain partial or even full independence. Today, this is not yet achievable, but it might be in the future, if technology changes and evolves as rapidly as it has in the last few decades.

Devices and services developed for the disabled have helped to lower or completely remove barriers. Regarding hearing impaired people, devices like mobile phones have made communication easier through voice-to-text recognition, through noise recognition that gives visual alerts (for example, instead of a door bell ringing), or by enabling conversations via text messaging as well as through video telephony over the Internet. Of course, deaf people do not need around-the-clock supervision, nor are they in need of constant help. Still, there are some obstacles that have to be overcome, and many of them restrict hearing impaired people in living their lives the way they want to.

The lack of competent interpreters in Austria is concerning. A recent study showed that in the sector of secondary education alone, nearly 50 additional full-time interpreters would be required to provide the needed support for hearing impaired pupils in these schools. An estimated 2 per mille of the Austrian population is deaf, not including age-related hearing loss in adults (Hartl & Unger, 2014). Although this is only a small part of the population, the legally guaranteed rights of the deaf population are not respected. Of course, technology cannot (yet) replace human interpreters, but it is a start in helping hearing impaired people to live an easier life.

The wants and needs of disabled people are still often neglected by society. The evolution of technology has had a positive effect on disabled people: there are many more possibilities that can help them in their everyday life. Still, there is one big problem: these people have to know about the technology first, so that they can use it. Such information rarely becomes public; one needs to actively search for it.

This thesis aims to give an insight into important terms and areas related to technology suitable for deaf people, as well as to answer some pressing questions: whether and how technology improves their lives, whether they are satisfied with the technology they have right now or wish for something else in the future, and whether the communication between them and their deaf and hearing family and friends has changed since the emergence of technologies such as the smartphone.

All these questions need answers, and so far, previous research has mostly concentrated on how technology may help pupils to study and educate themselves. To give a few examples: Christine Monikowski wrote an article in 1997 about electronic media and its influence on deaf students' access to knowledge. Patrick Pillai focused on the usage of technology to educate deaf and hard of hearing children in the general education setting of rural Alaska. Jennifer Beal-Alvarez and Joanna E. Cannon published an article in 2014 about their technology intervention research with deaf and hard of hearing learners. Virginia Volterra and her colleagues focused on advanced learning technology for a bilingual education of deaf children. Adults have been neglected in this area of research for the most part, which is why this thesis presents qualitative interviews with five deaf adults on the topic of deafness and the influence of technology. This field of research has not been the subject of many papers yet because of its special focus on technology for deaf people. There is a lot of research available about technology helping physically disabled people (e.g. Crewe & Zola: Independent Living for Physically Disabled People; Theng: Assistive Technologies for Physical and Cognitive Disabilities), but the focus on applications, websites, and gadgets for deaf people and their perception among deaf users is still a broad field that has only been lightly explored.

Given these research gaps, an insight into the thoughts and wishes of deaf people is needed. This thesis aims at showing some examples of technologies that benefit children as well as adults. It combines a literature review with actual studies of apps, websites, and technology gadgets that were created especially for deaf people. Additionally, the usage of avatars to translate into sign languages is examined, giving examples of recent and possible future projects in this area and an insight into how the technology behind them works. Furthermore, as mentioned before, qualitative interviews with five deaf adult participants give an insight into their ways of communicating and their thoughts about technology. The results indicate to what extent the stated hypotheses are supported.

Chapters 2 - 5 focus on the theoretical rationale, giving definitions of important terms like disability, deafness, and technology. Chapters 6 - 11 give an insight into applications and websites relevant for deaf people. Chapter 12 explains the theoretical models that the qualitative interviews are based on. Chapter 13 describes the qualitative study, its results, and implications for the future.

2. Disability

This chapter describes what disability is and gives an insight into the situation in Austria as well as in other countries (Germany and the United States of America), including a short introduction to the education situation for deaf people in the mentioned states.

“A disability is an environmentally contextualized health-related limitation in a child’s existing or emergent capacity to perform developmentally appropriate activities and participate, as desired, in society” (Halfon, Houtrow, Larson & Newacheck, 2012).

Although Halfon and his colleagues defined this term specifically in regard to children, it is a good statement that can be applied to other age groups as well. The important point is that a disability keeps a person from participating normally in society. Though the statement of Halfon and his colleagues is understandable, there are other opinions about disability. Davis describes disability as “part of a historically constructed discourse, an ideology of thinking about the body under certain historical circumstances” (Davis, 1995, 2). Davis thinks that categorizing people as either disabled or normal is not that easy. People are automatically put in one or the other category, depending on their bodily or mental state. Many deaf people do not think of themselves as being disabled; they rather see themselves as a linguistic minority. This opinion of the deaf population contradicts Halfon’s statement about disability. Although hearing impaired people can live an autonomous life, they are limited regarding communication and participation in their everyday lives. Since being deaf is considered a disability in political and legal matters, they are officially living with a disability, even if they themselves dismiss such a thought.

2.1. Disability in Austria

There have been some articles as well as books about information technology in relation to its advantages for disabled people. Paul H. Wise wrote an article about emerging technologies and their impact on disabilities. Wise mostly concentrates on mental disabilities, for their numbers are rising, whereas the number of physically disabled people is falling (Wise, 2012, 169).

In Austria, most disabilities revolve around mobility, as well as blindness. Problems with hearing are rather rare. 2.5% of the population has hearing problems, but only 0.7% is severely hearing impaired (Leitner, 2007). 2.7% of women have problems with hearing, whereas only 2.1% of men suffer from the same problem. Altogether, over 20 per cent of men and over 20 per cent of women living in a private household have some type of disability.

Figure 1: Distribution of disabilities in Austria (Leitner, 2007, 1134)

The term disability has not been present in Austrian law for long; it was only added to Austrian laws in the second half of the 20th century. Before that, impaired people were called “versehrt” (English equivalent: maimed), which is an old term for being physically disabled. At that time, a person was considered “versehrt” if he or she was unable to fill out a ballot card without help. In other words, a disabled person was only a person with a physical handicap. Mental disabilities were not even considered in laws (Bundesministerium für Arbeit, Soziales und Konsumentenschutz [BASKM], 2009).

In Austria, the term disability first appeared in law in the context of the education system, when laws for disabled people were created. Since then, this term has been defined in several laws, including the Federal Law, the National Insurance Act, and Federal State Laws. The Federal Ordinance for Freedom from Barriers (in German: “Bundesbehindertengleichstellungsgesetz”) defines a disability as a non-temporary physical, psychic, or mental impairment, or an impairment of sensory functions, that makes participation in social life difficult. A non-temporary impairment is one that lasts for more than six months (BASKM, 2009). The study that the BASKM conducted revealed several problems that hearing impaired people have in their everyday lives. 43 per cent of the hearing impaired respondents stated that they have problems in public transport. 34.9% said that public places and buildings lack accessibility. 56 per cent stated that their disability creates problems in their free time due to communication barriers (BASKM, 2009). After this study was conducted, the federal government decided to take on such accessibility problems, planning to create barrier-free usage of public buildings by, for example, implementing signs and alarm systems for blind and deaf people. Whether this plan was successful is debatable; some changes are still ongoing, others were never implemented in the first place. The plan in 2008 was to make every public building in Austria accessible by 2015 (BASKM, 2009).

Regarding television, the ORF Act clearly states that disabled people have the right to access the same information as others. The ORF (Austrian Broadcasting Corporation) offers subtitles for some shows. In 2008, 26% of the shows on ORF 1 and ORF 2 were subtitled (BASKM, 2009). Still, this is only one quarter of all content, and deaf people often do not understand the subtitles; they need a translation into their national sign language to understand everything. The ORF translates the daily news into sign language, but the rest is still either subtitled or not accessible at all.

In terms of education, there is only one school in Austria focusing on deaf people, located in Vienna. It is called the Federal Institute of Deaf Education, featuring a kindergarten, a school, a day care center, and a boarding school. Though this institute is certainly a good idea, it neglects the fact that there are families living in federal states other than Vienna who do not want to give everything up so that a child can go to this special school. Every federal state should consider caring more for the disabled population, giving them what they need to live a normal life.

2.2. Disability in other countries

This chapter focuses on two different states and the situation for deaf people in those countries. The first is the United States of America; the second is Germany, a member state of the European Union.

2.2.1. Situation in the United States

The U.S. Census Bureau conducted an extensive study on disabilities in the United States of America (USA) in 2010 (Brault, 2012). The report divides the disabilities into different types: communicative, physical, and mental. People with a communicative disability reported that they were either blind, deaf, or hard of hearing. People with the mental type had trouble with learning, remembering, or dealing with everyday life. People with a physical disability reported that they had difficulty walking a certain distance, suffered from rheumatism or another disease that disabled them physically, or that they had to use a wheelchair, a cane, crutches, or a walker to move (Brault, 2012, 2). The report concluded that approximately 18.7 per cent (56.7 million people) of the population of the United States had a disability in 2010. About 12.6 per cent suffered from a severe disability. In contrast to that, in Austria the percentage is slightly over 20%. Figure 2 shows the distribution of disabilities by type of impairment in the USA. As one person may have more than one type of disability, the numbers are slightly higher than the overall rate of disability in the United States.

Figure 2: Distribution of disabilities in the US (United States Census Bureau, 2010)

As far as education in the United States is concerned, the country is far more advanced in terms of accessibility. The first school for deaf pupils was created as early as the beginning of the 19th century. It was named the Connecticut Asylum for the Deaf and Dumb, which would be considered discriminatory nowadays, but was normal at that time. It was a manual school, focusing on American Sign Language. The school was founded by Thomas Hopkins Gallaudet and Laurent Clerc (Gallaudet University, 2014). Today, one of the most famous institutions in the whole world is Gallaudet University. It formed out of the Columbia Institution for the Instruction of the Deaf and Dumb and Blind, which was created in 1856 by Amos Kendall. The son of Thomas Hopkins Gallaudet, Edward Miner Gallaudet, became the school’s superintendent. Some years later, the school was authorized to award college degrees; in 1894, the name of the college was changed to Gallaudet College. About 70 years later, United States President Lyndon Johnson signed the act to create the Model Secondary School for the Deaf, which was established on the Gallaudet campus. Finally, it became a university in 1986, where undergraduate students can now choose from more than 40 different majors. It also has a career center to help students get internships and, after graduating, find a job (Gallaudet University, 2014).

2.2.2. Situation in Germany

In Germany, 9.4% of the population was severely impaired in 2013 (Statistisches Bundesamt, 2015). In 2009, the overall number of disabled people added up to 9.6 million men and women (11.6% of the population). Germany faces the same problems as Austria regarding its education system. Education in German Sign Language is mostly non-existent; there are some teachers in integrated classes who know some sign language, but this knowledge is not comprehensive enough to teach a child everything that hearing pupils learn during their education. To give deaf students the possibility to learn everything the way they should, interpreters who also have teaching experience would be needed.

2.2.3. Comparison of the United States and Germany

In conclusion, one gets the impression that the situation for deaf people in the United States is much better than in Europe. Of course, one has to consider that Austria and Germany are rather small countries compared to the United States, but they still have a responsibility towards disabled people. This responsibility seems to be taken on only slowly, with an education system suitable for deaf pupils still being developed, whereas the United States has had a school for deaf children since the early 19th century.

The numbers presented here are from three different countries and show that a huge part of the worldwide population has some type of disability. Unfortunately, these disabilities often go unnoticed by non-disabled people, or there is no willingness to help those affected. There are exceptions: companies and people that develop applications, websites, and animated sign language tools to give disabled men and women a chance for a better or improved way of life. Some examples are given in Chapters 7 to 11.

3. Deafness

As was stated in the previous chapter, in a legal sense, being deaf is a disability. This chapter gives an insight into what deafness means, which types and degrees of hearing loss exist, and why one should differentiate between the lowercase deaf and the uppercase Deaf.

Deafness is the inability to hear sound. There are different opinions about how much, or how little, deaf people are able to hear, which is why it is generalized by stating that deaf people are unable to hear sound. In a more detailed definition, “[…] a child is considered deaf if hearing impairment is so great, even with good amplification, that vision becomes the child’s main link to the world and main channel of communication” (Quigley & Paul, 1984, 1). Previous studies have placed the shift from audition to vision as the main channel of communication at a hearing loss of about 90 dB for the majority of people. Subchapter 3.1 explains the various aspects of hearing loss, including types, degrees, and configuration of hearing loss. Subchapters 3.2 and 3.3 aim at describing the difference between the lowercase deaf and the uppercase Deaf as well as the terms Deafhood and Deaf Ethnicity, giving an insight into deaf culture and community.

3.1. Aspects of hearing loss

The American Speech-Language-Hearing Association describes three aspects that are important when describing hearing loss: type, degree, and configuration of hearing loss (American Speech-Language-Hearing Association, 2011). The following subchapters explain these three aspects.

3.1.1. Types of hearing loss

There are three types of hearing loss. The first one is conductive hearing loss, which appears when sound is not able to travel easily through the outer ear canal to the eardrum and the tiny bones, called ossicles, of the middle ear (American Speech-Language-Hearing Association, 2011). This type can be corrected medically or surgically in most cases. Some possible reasons for conductive hearing loss might be ear infections, holes in the eardrum, a foreign body in the ear canal, or fluid in the middle ear from colds or allergies (Debevc, Kosec & Holzinger, 2010, 183). The second type is called sensorineural hearing loss, which happens when the inner ear, called the cochlea, or the nerve pathways from the inner ear to the brain are damaged. This type is the most common one regarding permanent hearing loss and cannot be corrected medically. Sensorineural hearing loss reduces the person’s ability to hear faint sounds; even loud sounds or speech may be heard muffled or unclear. It can be caused by head trauma, exposure to loud noise, or simply aging (American Speech-Language-Hearing Association, 2011). Misuse of certain drugs such as streptomycin and gentamicin may also be a reason for this type of hearing loss (Debevc et al, 2010, 183). The third type, mixed hearing loss, is a combination of conductive and sensorineural hearing loss. This means that the damage might occur in two places, for example the outer or middle ear, and the inner ear or auditory nerve.

3.1.2. Degrees of hearing loss

The degree of hearing loss refers to the severity of the loss (American Speech-Language-Hearing Association, 2011). Clark created a table of the degrees of hearing loss, which is depicted in Table 1.

Table 1: Clark's table for the degree of hearing loss

Degree of hearing loss    Hearing loss range (dB HL)
Normal                    -10 to 15
Slight                    16 to 25
Mild                      26 to 40
Moderate                  41 to 55
Moderately severe         56 to 70
Severe                    71 to 90
Profound                  More than 91

Clark named seven different degrees of hearing loss. Normal hearing ranges from -10 to 15 dB HL (remark: dB HL = decibels Hearing Level). Slight hearing loss ranges from 16 to 25, mild from 26 to 40, moderate from 41 to 55, moderately severe from 56 to 70, and severe from 71 to 90 dB HL. Finally, everything over 91 dB HL is considered profound hearing loss.
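Expressed programmatically, Clark's categories amount to a simple threshold lookup. The following is a minimal sketch in Python; the function name and structure are assumptions for illustration, not part of Clark's work:

def degree_of_hearing_loss(db_hl: float) -> str:
    """Map a hearing level in dB HL to Clark's degree of hearing loss."""
    thresholds = [
        (15, "normal"),
        (25, "slight"),
        (40, "mild"),
        (55, "moderate"),
        (70, "moderately severe"),
        (90, "severe"),
    ]
    for upper_bound, label in thresholds:
        if db_hl <= upper_bound:
            return label
    return "profound"  # everything above 90 dB HL

print(degree_of_hearing_loss(75))  # prints "severe"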

3.1.3. Configuration of hearing loss

The shape (= configuration) of the hearing loss needs to be seen in relation to the degree and pattern of hearing loss across frequencies. To give an example, a hearing loss that only affects low tones would be described as a low-frequency loss. The configuration of this hearing loss would show that high tones are heard well, while low tones are heard poorly (American Speech-Language-Hearing Association, 2011).

3.1.4. Other descriptors associated with hearing loss

There are also other descriptors regarding hearing loss and its severity. One of those descriptors is “bilateral versus unilateral”. Unilateral hearing loss means that the impairment only affects one ear. It can occur if it runs in the family or results from an illness such as rubella. One can also distinguish between sudden and progressive hearing loss, as well as fluctuating and stable hearing loss (American Speech-Language-Hearing Association, 2011).

3.2. Uppercase Deaf vs. lowercase deaf

Being deaf does not just mean that a person is not able to hear; it also means being part of a certain community. There are variations in the cause and degree of hearing loss, the age of onset, the educational background, and the communication methods that are used, as well as in how individuals cope with the hearing loss (National Association of the Deaf, n.d.). For example, depending on the age when the hearing loss occurred, people may consider themselves “late-deafened” rather than just deaf. There are also two meanings of the word deaf: the lowercase deaf refers to the audiological condition of not hearing, whereas the uppercase Deaf refers to a particular group of deaf people who share the same language and culture, as well as history and social life (Padden, 1990; Davis, 1995). People who are part of the Deaf community have a strong deaf identity and tend to value it, for example by attending schools especially created for deaf pupils. The term Deaf culture was created in the 1970s, based on the belief that Deaf communities had their own ways of living because of their sign language use. Deaf communities value a variety of art forms, sometimes focusing on the circumstance of deafness, but also forms that are used all around the world. These communities have created their own games, jokes, poetry, drawings, and much more (Ladd, 2003).

3.3. Deaf Ethnicity and Deafhood

Deaf people define themselves as a culture with their own language and history, and have therefore constructed terms like “Deaf Ethnicity” and “Deafhood” to give this ethnic group a name (Ladd & Lane, 2014, 42). These two terms are mostly used in the United States by users of American Sign Language (ASL).

Ladd and Lane describe ethnicity as a feeling of belonging to a group. Often, the members are like family: they are able to communicate without barriers, they get access to information, and they are able to identify with something in a positive way (Ladd et al, 2014, 42). As mentioned before, culture and history are a big part of ethnicity. Certain rules were established, for example regarding the usage of sign languages. The history of the deaf population often revolves around the hard times deaf people had to endure in different centuries and decades (e.g. the Second World War), which brought the scattered deaf community together and let them bond while reminiscing about past times.

The term “Deafhood” was coined by Paddy Ladd in the 1990s. At that time, it was not clearly defined what Deafhood was supposed to be or mean. Ladd looked into the possible future of deaf culture and coined the term because he thought that it might reflect the development of deaf culture in the following decades (Ladd et al, 2014, 44). Ladd’s goal was to define a term that did not have the negative connotation that “deafness” had in some deaf communities. Deafhood is meant to newly define what it means to be deaf (Ladd et al, 2014, 48). Ladd describes Deafhood as everything that “deaf culture” ever was, is, and will be, and as a journey that every deaf individual has to take to find out and understand who he or she is (Ladd et al, 2014, 50).

4. Sign Language

The following chapter and its subchapters describe what sign language is and why it is an important communication tool for hearing impaired people. Additionally, some information about Austrian Sign Language is given, as well as essential insights into sign language grammar and important notation systems like HamNoSys and the Movement-Hold Model.

Sign language plays an important role in communication with others; it does not just consist of simple gestures without structure. A sign language has its own grammar and a vast vocabulary. In sign language, not only the sign itself, but also facial expressions, the positioning of the hands, and the lip movements are important for understanding the content of the conversation.

Every country has its own sign language, which makes conversation between hearing impaired people from different countries difficult. Additionally, not only does every country have its own sign language, every district has its own dialect, exactly as is the case in spoken languages. This makes a unified way to communicate very difficult. Sign languages established themselves because deaf people used visual communication. They are natural languages which are adapted to the visual channel (Skant, Dotter, Bergmeister, Hilzensauer, Hobel, Krammer, Okorn, Orasche, Orter, Unterberger, 2002, 9). It is important to distinguish between sign languages and other visual systems like the finger alphabet or manually coded languages, which are representations of an oral language in a gestural-visual form. American Sign Language is the most prevalent sign language worldwide, used not only in the United States of America and Canada, but also in other countries. William C. Stokoe defined the term “American Sign Language” while he was a professor at Gallaudet University, where he analysed the signs of his deaf students (Jarmer, 2011, 67). There are over 500,000 deaf native users of American Sign Language and over 17 million second-language users worldwide.

4.1. Austrian Sign Language

As mentioned before, every country has its own sign language, as well as dialects in different regions of the country. Austrian Sign Language is divided into dialects, which can even vary from town to town. This variety arose through regional deaf communities that created their own signs to communicate within their circles of trust. Although there are these local differences, people in Carinthia are able to understand Styrian or other Austrian dialects without problems, even though some signs might be produced in another way. Austrian Sign Language has been recognized as an autonomous language in Austria since September 1, 2005, when it was included in §8(3) of the Federal Constitutional Law. This means that, by law, it is an independent language, separate from spoken German.

4.2. Sign Language Grammar

In oral languages, a combination of sounds creates words that are used to communicate. In sign languages, communication takes place through a combination of the hands, the face, the upper body, body movement, and facial expressions (Skant et al, 2002, 9). Signs are made in a restricted area, called the signing space, an invisible circle that encloses the head and the upper body. Sign languages have their own grammar; there are certain rules for how signs combine into sentences (Skant et al, 2002, 10). Additionally, there are no verb forms like in a spoken language. The sign for “to buy” is always the same, whether the act lies in the past, present, or future. In other words, there is no “have/has bought” or “will buy”; instead, other signs indicate the time when the act takes place. For example, if a person informs someone else that he or she already bought something, the person uses another sign that stands for “finished” to make it clear that the act is already done (Jarmer, 2011, 68f).

4.3. Notation systems

Several notation systems allow signs to be transcribed into written form. The following subchapters give a description of various notation systems, their origins, and their structure. The models depicted are HamNoSys, the Movement-Hold Model, Stokoe, GLOSS Notation, and SignWriting.

4.3.1. HamNoSys

One of these systems is HamNoSys, short for Hamburger Notationssystem, which was developed at the University of Hamburg in order to provide a way of recording signs in any sign language and is based on Stokoe’s notation system. The first version was defined in 1984; three years later it was published for the first time (Hanke, 2004). The model was designed with several goals in mind. The transcription of sign languages around the world should be possible without having to adapt the system to every sign language. A standard alphabet was not satisfactory, so iconic glyphs were created to represent the signs. The system also aimed at keeping the notation for the signs short and at integrating it into computer transcription tools. The members of the University of Hamburg also desired a well-defined syntax and the opportunity to evolve and specialise if needed (Hanke, 2004). HamNoSys is now in its fourth version. Every version aimed at filling some gaps, introducing various shortcuts, and fixing some issues, like encoding non-manual behaviour in detail.

Every notation has a special structure. It consists of a description of the initial posture and the actions that occur when changing the posture, in sequence or in parallel. The initial posture includes non-manual features as well as handshape, location, and orientation of the hands. If a sign is two-handed, a symmetry operator defines that the non-dominant hand copies the dominant one, unless specified otherwise in the notation. Non-manual features do not have to be specified; if the location is not described in detail, a default location is assumed (Hanke, 2004).

The handshapes are composed of symbols for the basic form and so-called diacritics to define the thumb position and a possible bending of the hand. Some examples of the handshapes can be seen below (Figure 3).

Figure 3: Handshapes transformed into HamNoSys (Hanke, 2004)

If other fingers are involved or have another form, this can also be combined with one of the basic handshapes. In this way, the creators of HamNoSys aimed at covering all handshapes documented so far.

This model only describes the physical action required to produce a sign, and not its meaning. The notation system divides each sign into its components, such as hand position, hand orientation, handshape, or motion, turning it into written signs, as can be seen in Figure 4 (Wachsmuth & Sowa 2002, 148).

Figure 4: Examples for HamNoSys notation (Wachsmuth & Sowa 2002, 148).

HamNoSys uses a linear, one-dimensional system; the internal structure may not be altered freely, and the ordering does not represent the actual temporal order (Brentari, 2010, 166). One problem with this system might be that in signing, more than one thing often happens at once, and HamNoSys does not represent the temporal order of the sign. HamNoSys is nevertheless used in computer animation programs for avatars, because it seems to be the easiest way to transcribe the signs into a machine-readable form. How this works is part of Chapter 11, dealing with avatars and sign language animation.

4.3.2. Movement-Hold Model

Scott K. Liddell and Robert E. Johnson designed the Movement-Hold Model. They claimed that signs consist of hold segments and movement segments, which are produced sequentially (Valli & Lucas, 2000, 37). The information about handshape, orientation, location, and non-manual signals is represented in bundles of articulatory features. Liddell and Johnson decided that signing could be divided into a hold part and a movement part. Holds are periods of time during which all important aspects of an articulation bundle are steady, whereas movements are periods of time during which some or all aspects are in transition (Valli et al, 2000, 37). For example, a sign can change in handshape and location, but it may also change only in handshape or only in location. These changes are considered movements and thus put in the movement part of the model. Valli and Lucas give several examples of the Movement-Hold Model; one of them can be seen in Figure 5.


Figure 5: Movement-Hold model of the word "week" (Valli & Lucas, 2000, 38)

In American Sign Language, the sign “week” begins with a hold, with the right hand at the base of the left hand. The right hand moves to the tip of the left hand and ends with a hold in that specific location. Thus, the change in the sign is the location of the active hand (Valli et al, 2000, 38). One of the upsides of this model is the ability to depict sequences efficiently. Moreover, the system provides details for the description of signs, while describing and explaining the various processes that take place in sign language use (Valli et al, 2000, 38).

4.3.3. Stokoe

Stokoe’s notation was the first phonemic script used for sign languages. Latin letters and numerals are used to show the shape of the fingers; iconic glyphs transcribe movement, orientation, and position of the hands. The HamNoSys notation model was developed using the idea of Stokoe’s notation, transforming and adapting it. The system was created in 1960 by William C. Stokoe, an American professor and pioneering researcher in the field of sign languages, especially American Sign Language. He taught at Gallaudet University, a university specifically for deaf students. Before his research, signs were considered a rather poor substitute for speech. Stokoe was the first to consider signing a language in its own right with distinct elements (Maher, 1996). Stokoe also created the term “cherology”, derived from the Greek word for “hand” (cheir), which is the equivalent of phonology in spoken languages.

Figure 6 compares Stokoe’s notation with HamNoSys.

Figure 6: Comparison of Stokoe and HamNoSys (LinkedIn Corporation, 2015)

4.3.4. GLOSS Notation

GLOSS Notation is a linguistic method for translating and transcribing the cognitive equivalent of a word or phrase (Lewis, 2007, 140f). A gloss represents the word in its most common meaning and is written down to depict a sign, providing a written form of sign languages.

Some examples of the spelling of manual elements in Austrian Sign Language are (Skant et al, 2002, 2f):

• +: A + between two glosses shows that the signs are connected or that a sign needs to be repeated.
• GLOSS1: The number indicates that the sign is a variant of another sign.
• IX-I: This is an index for the first person singular.
• IX-time: This refers to a certain point in time; usually a concrete time is written down, for example IX-this week.

4.3.5. SignWriting

The Deaf Action Committee on SignWriting, set up by the Center for Sutton Movement Writing, created the writing system SignWriting. It uses visual symbols to represent the facial expressions, movements, and handshapes of signed languages (Deaf Action Committee on SignWriting, n.d.). The creators of SignWriting see their system as an alphabet: a list of basic symbols that can be used to write down any sign language in the world.


Figure 7: Signs written down in SignWriting (Deaf Action Committee on SignWriting, n.d.)

Figure 7 shows how signs are written down in SignWriting. The website of the writing system also allows users to learn SignWriting through online lessons, games, and videos, or by purchasing books (Deaf Action Committee on SignWriting, n.d.).


5. Technology

This main chapter and its subchapters define several terms revolving around technology, including technological convergence, assistive technology, and markup languages, which play a big part in creating sign language animations. First of all, a definition of the term technology is needed in order to proceed. Braun described technology as “the ways and means by which humans produce purposeful material artefacts and effects […] material artefacts used to achieve some practical human purpose and the knowledge needed to produce and operate such artefacts” (Braun, 1998, 8f). Dictionaries offer various definitions; one is that technology is “a branch of knowledge that deals with the creation and use of technical means and their interrelation with life, society, and the environment” (Dictionary.com, 2015). Both definitions state that technology needs to be useful in the context of life, society, or the environment.

5.1. Technological convergence

This chapter defines the terms convergence and technology convergence; both are becoming more and more important in our information society, where creating, using, and distributing information through information technology is significant. The word “convergence” derives from the Latin convergere, which means to approach or to merge. In recent years, this term has been used to describe a change in business, technology, and other sectors. Convergence changes the way companies do business; they have to adapt to our converging world. Thus, they might have to work together with other companies, or create convergent products that have more than one purpose in order to satisfy customers (Diehl & Karmasin, 2013, 1). The convergence of technologies affects market structures, social interaction, politics, and the production and consumption of technological products (Gathegi, 2013, 221). Neelameghan and Chester define technology convergence as “seamless access, at any time and from anywhere, to a vast and varied array of information sources” (Neelameghan & Chester, 2007).

Smartphones are a typical outcome of technology convergence. Some years ago, a mobile phone could only be used to call or text someone. Then, with the change of society and technology, new inventions emerged. The smartphone was born: a mobile phone that was more than just a device for calling or texting. Nowadays, smartphones offer Internet access and let users take pictures, play music, text, call, video chat, and a lot more. However, convergence is much more than just access to all information from one device; it is also about using any device available to access that information from anywhere (Gathegi, 2013).

These different approaches to convergence have also opened up new possibilities to help disabled people. Whether it is a smart home application for a physically handicapped person, or apps and gadgets that help deaf, blind, or otherwise disabled people, new technology gives them opportunities to take part in a normal life.

Although these applications are products of technology, there is a definition of media convergence that also applies to these apps and gadgets. Media convergence is a “phenomenon involving the interlocking of computing and information technology companies, telecommunication networks, and content providers […] brings together the “three Cs” – computing, communications, and content” (Encyclopædia Britannica Inc., 2015). This is a strong definition of convergence, which also applies to smartphones if they are used in the ways described in Chapter 8 about apps and Chapter 10 about gadgets. It is not only about the computing; communication channels and content are just as important if one wants to help disabled people live an easier life.

5.2. Assistive Technology

Taking a closer look at technology, for this thesis the term can be narrowed down to assistive technology. This term refers to the “application of scientific knowledge for practical, applied purposes, here directed toward improving health and well-being” (Wise, 2012, 170).

The Individuals with Disabilities Education Improvement Act defines assistive technology as “any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of individuals with disabilities” (Individuals with Disabilities Education Improvement Act, 2004). In other words, the definition of the Individuals with Disabilities Education Improvement Act relates to Wise’s description of assistive technology. Improving the well-being and functional capabilities of disabled people is the main goal of assistive technology. The word assistive describes quite well how the technology is used: the aim is to support humans in their everyday life by extending certain abilities, like communicating with hearing people, or by enabling deaf people who use different sign languages to communicate by translating signs from one language to another. Furthermore, this technology fosters the independence of its users by being available all the time and wherever one goes.

5.3. Markup Languages: XML

The Merriam-Webster Dictionary defines a markup language as “a system for marking or tagging a document that indicates its logical structure (as paragraphs) and gives instructions for its layout on the page especially for electronic transmission and display” (Merriam-Webster, Incorporated, 2015). In other words, markup languages add annotations to a document that are syntactically distinguishable from the original text. In the following chapters, the two markup languages XML and SiGML are explained, because they are of value when dealing with the animation of sign language.

The following subchapters are supposed to give the reader an insight into XML (eXtensible Markup Language). First, some general information about XML and its relationship to the markup language HTML is given. Then, a short introduction to the XML Document Object Model as well as to elements and attributes aims to give a basic understanding of how XML works.

5.3.1. General information about XML

The term XML is the abbreviation of eXtensible Markup Language, meaning that it can be extended. In contrast, HTML (Hypertext Markup Language) is non-extensible and therefore cannot be changed when viewed in the browser. In other words, XML is the extensible counterpart of HTML (Powell, 2007).

XML allows the storage of changeable data, so that web pages can be altered at any time regarding look and content. It lets the developer create customized tags, which makes a person more flexible.

To give a more detailed description, HTML is only allowed to use certain tags, which are predefined. Tags in HTML can be <html>, <head>, <body>, or <p>. An HTML document always has to begin with <html> and end with </html>, indicating that the tags and text in between are part of the document. Every tag describes an element of the text. <h1> means that the text in between is a heading. <body> refers to the main body of the document. <p> indicates that a paragraph is beginning or ending (</p>). Every tag should be closed by adding a tag with a slash (Powell, 2007). In contrast to this, XML is extensible and therefore allows the user to change or add features. This way, if a person creates an XML document, he or she may create tags that are unique for this specific document. The closing slash is absolutely required; without it, the document cannot work the way it is supposed to (Powell, 2007, 2).
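To illustrate, a minimal XML document with custom tags might look like this; the tag names (dictionary, sign, gloss, video) are invented for this example:

<?xml version="1.0" encoding="UTF-8"?>
<dictionary>
  <sign language="ÖGS">
    <gloss>BÜRO</gloss>
    <video>buero.mp4</video>
  </sign>
</dictionary>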

XML documents can be formatted using XSL, which is a formatting language for XML. With XSL, a person may change the look of the text or create tables. XML documents can be displayed in a web browser if a so-called data island is created, which allows the XML document to be embedded inside an HTML page (Powell, 2007, 4ff).
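As a sketch of how such formatting works, assuming the hypothetical dictionary document shown above, an XSLT stylesheet could render each gloss as an HTML paragraph:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- turn the dictionary into a simple HTML page -->
  <xsl:template match="/dictionary">
    <html>
      <body>
        <xsl:for-each select="sign">
          <p><xsl:value-of select="gloss"/></p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>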

5.3.2. XML Document Object Model

Every XML data set needs a structure, a certain hierarchy that can be accessed programmatically. This structure is called the DOM, or Document Object Model. The purpose of the DOM is that the programmer can find, read, and change something within the XML document (Powell, 2007, 11). If the developer wants to change something, there are two possible ways to achieve this. The first one is finding something in the XML document by using the tag and the name of what one is looking for; this is called explicit data access. The second option is dynamic (generic) access, where a program accesses the document via its structure. This way, the program may scroll through all tags and data available. This process is made possible through the Document Object Model. It gives a person direct access to the document within the browser; the browser displays the XML data on the screen, so that all tags can be found by looking at the received data (Powell, 2007, 11).
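A short sketch in Python contrasts the two access styles, using the standard xml.dom.minidom module and the hypothetical dictionary document from above:

from xml.dom import minidom

xml_data = """<dictionary>
  <sign language="ÖGS"><gloss>BÜRO</gloss></sign>
</dictionary>"""

doc = minidom.parseString(xml_data)

# Explicit access: ask for elements by their known tag name
for sign in doc.getElementsByTagName("sign"):
    print(sign.getAttribute("language"))

# Dynamic (generic) access: walk the tree without knowing any tag names
def walk(node, depth=0):
    for child in node.childNodes:
        if child.nodeType == child.ELEMENT_NODE:
            print("  " * depth + child.tagName)
            walk(child, depth + 1)

walk(doc)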

5.3.3. Elements and Attributes

An element is everything from the start tag to the end tag. One element might contain other elements, text, and/or attributes. An element may also be empty, which would look like this (using a hypothetical element name): <actress/>

As was mentioned before, everyone can create their own tags, but there are several rules to consider. First of all, element names are always case-sensitive. The same name can mean something in one document and something completely different in another one. This is not the case in HTML, where each tag represents a certain value. Second, an element name has to start with a letter or an underscore ( _ ). Names must not start with the letters xml, in whatever capitalization; apart from that, every word is possible. Names can contain letters, digits, periods, hyphens, and underscores, but they cannot contain spaces (Powell, 2007, 17f). Elements can also have attributes that extend the aspects of the element. In other words, attributes provide additional information about a certain element. Attributes carry values; the combination of the two is called a name-value pair. One XML element may have more than one name-value pair, and the values always have to be quoted. An example of an attribute is “gender” in this part of an XML document:

<actress gender="female">Jennifer Lawrence</actress>

In most cases, it is advisable to use elements instead of attributes, because attributes are not able to contain multiple values or tree structures, and they are not easily changeable.
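A small illustration of this limitation, again with invented names: several films cannot be expressed cleanly in a single attribute, whereas child elements handle multiple values naturally.

<!-- attribute: one quoted value, no internal structure -->
<actress gender="female" films="Film A, Film B">Jennifer Lawrence</actress>

<!-- child elements: multiple values and nesting are possible -->
<actress gender="female">
  <name>Jennifer Lawrence</name>
  <film>Film A</film>
  <film>Film B</film>
</actress>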

5.4. Markup Languages: SiGML

This chapter describes SiGML, the Signing Gesture Markup Language, an XML application that enables a person to transcribe sign language signs. The information contained in this chapter is important for the later chapter dealing with avatars (Chapter 11).

This special markup language was created during the ViSiCAST project, which was started to improve the quality of life of European deaf citizens by widening their access to various services and facilities. It served as a key component of a prototype system the project members created in order to turn natural language into signed animation (Glauert, Kennaway, Theobald & Elliott, 2004). Glauert and his colleagues describe the primary purpose of SiGML as “to support the definition of signing gestures in a manner allowing them to be animated in real-time using a computer-generated virtual human character, or avatar” (Glauert et al, 2004, 100). The word “avatar” has various definitions. In Hinduism, an avatar is a manifestation of a deity or a released soul in bodily form on Earth. In general, it can be an incarnation or embodiment of a person or idea. The third definition is the relevant one for this thesis: an avatar is “an icon or figure representing a particular person in computer games, Internet forums, etc.” (Oxford University Press, 2015).

As mentioned before, SiGML is an XML application language. Glauert and his colleagues define the so-called Gestural SiGML as the major component of SiGML, which is based on the Hamburg Notation System. SiGML also lets the user incorporate other signed data, for example motion capture data created from parameters obtained by recording the actions of a human signer (Glauert et al, 2004). Information created with HamNoSys can be translated into SiGML and vice versa, although SiGML allows the physical features of the signer’s posture to be specified more precisely than is possible with HamNoSys. As was already described in the chapter about notation systems, HamNoSys offers a possibility to represent phonetically significant features of signing like hand shape and position (Glauert et al, 2004). Gestural SiGML needs to consider manual and non-manual components, even though the manual components are more precisely specifiable, because non-manual components are often not as exactly defined as manual aspects.

5.4.1. Manual and Non-manual Signing

The manual component of the Signing Gesture Markup Language lets signs be defined in terms of the transitions between static postures involving either one or both hands of the signer. The posture of a hand is determined by its shape, its orientation, and its location in the signing space (Glauert et al, 2004). Some handshapes occur more often than others, for example “fist” or “flat hand”. This core set of shapes can be extended by applying modifications (e.g. bending one or more fingers). If a sign is two-handed, the notation gives a configuration of the two hands with regard to each other, indicating the so-called “hand constellation” (Glauert et al, 2004). Straight-line, circular, or zig-zag hand motions can be specified, and all of them can be modified and redefined in various ways. Changes of hand orientation or shape are also supported. All motions can be combined either in parallel or sequentially. Non-manual signing features of SiGML are based on the definitions of HamNoSys, some of them being body and head movements, facial expressions, and mouthings. Facial expressions, involving e.g. the eyebrows, nose, and eyelids, are depicted in order to complete the sign, without communicating attitude or emotions.

5.4.2. Gestures written in SiGML

Glauert and his colleagues give several examples of SiGML signs. One of them represents the sign “film” in British Sign Language (Glauert et al, 2004, 101):
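The paper's original listing is not reproduced here; the following is an illustrative sketch only. The element names follow the published Gestural SiGML vocabulary, but the concrete attribute values are assumptions, not Glauert and his colleagues' exact code.

<hamgestural_sign gloss="FILM">
  <sign_manual both_hands="true" lr_symm="true">
    <handconfig handshape="flat" thumbpos="out"/>
    <handconfig extfidir="u" palmor="l"/>
    <location_bodyarm location="shoulders" side="right_at"/>
    <rpt_motion repetition="several">
      <wristmotion motion="swinging"/>
    </rpt_motion>
  </sign_manual>
</hamgestural_sign>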

The sign has only manual components. Both hands are needed to sign the word, so there are specifications of both hands’ shapes, orientations, and locations with regard to each other. A code line such as <wristmotion motion="swinging"/> indicates that the dominant hand is swinging.

Another example given by Glauert and his colleagues is the sign for “tell-(the)-story” in British Sign Language; its full SiGML transcription is printed in Glauert et al (2004, 102) and is not reproduced here.

To sum up, SiGML enables the user to transcribe sign language signs. It serves as a key component in turning natural language into signed animation, adding functions to the basic XML markup language in order to adapt it to the specific needs of sign language gestures.

6. Deafness and ICT

Information and Communications Technology (ICT), with its systems and applications, has contributed to improving the status of people with disabilities (Debevc et al, 2010, 182). Every day, millions of people use the Internet to communicate or gather information. In recent years, these technologies have changed and improved significantly. Audio and video communication have evolved and offer many possibilities for deaf and hard of hearing people. Functional and accessible technology allows deaf people to fully participate in society, business, and education, and thus enables them to advance personally and professionally (Maiorana-Basas & Pagliaro, 2014).

Maiorana-Basas and Pagliaro describe the situation regarding technologies for deaf people as a rather ironic one. In 1876, Alexander Graham Bell wanted to transmit speech electronically in order to help hearing impaired people such as his mother and his wife. In the end, he created the telephone, which was of no use at all to deaf people. The hearing population started to use telephones to communicate, whereas deaf people still had to communicate face-to-face or via letters (Maiorana-Basas & Pagliaro, 2014). It was only in the 1960s that Robert Weitbrecht, a deaf scientist, invented an acoustic coupler, which let hearing impaired people communicate over the telephone via a typewriter. This device is called a Telecommunications Device for the Deaf (TDD) or teletypewriter (TTY). The invention allowed deaf people to communicate over a distance, but the other person had to have the exact same device; models from other manufacturers would not work together. Also, if the two parties used different telephone network operators, there could be problems with sending or receiving messages.

Years passed and brought new technologies with them. The mobile phone allowed everyone to communicate not only via calls, but also via short messages (SMS), which was a huge step forward for the deaf community in terms of communication, giving them a feeling of self-control and integration into society (Maiorana-Basas & Pagliaro, 2014).

Information and Communication Technologies help hearing impaired people in several different ways. First, they can be used to communicate directly or to provide captions. Second, technology gives deaf people the opportunity to teach themselves a written national language. Furthermore, hearing people can learn sign language with the help of technology (Hilzensauer, 2006, 185). New communication tools are opening up new possibilities for hearing impaired people. Many deaf men and women quickly adopted the first inventions, like the fax machine or the text telephone. Not long after, mobile phones provided another way to communicate over long distances, and short messages became very popular in the deaf communities (Hilzensauer, 2006, 185). In addition, they allowed deaf people to communicate with hearing colleagues or family members. The Internet revolutionized communication not only for the hearing population, but also for deaf people, who gained new opportunities to stay in touch with others. Live chats over programs like Skype or MSN made it possible to contact deaf companions farther away instantly. Another online feature that has been taken up by many hearing impaired people is the so-called vlog, a combination of the words “video” and “blog”. A vlog allows a person to share videos concerning their interests and experiences, as is normally done in a blog. Such interests may concern music, fashion, movies, education, exercising, and much more. Vlogs are very common on YouTube, featuring monologues or mobile commentaries recorded on the go (Snelson, 2015, 176).

In some countries, there are also institutions called “Relay Centers”, which help deaf people handle matters such as making doctor’s appointments. In Austria, the “RelayService” fills this role. A Relay Center is designed to let deaf and hearing people communicate with each other: the assistant at the call center reads e-mails or text messages sent by the deaf person out loud to the hearing recipient and writes down the answer for the hearing impaired person (RelayService, n.d.). At the beginning, the deaf person could only contact the Relay Center via written messages; the Communication Assistant (CA) at the center called the required person and then wrote the answer back to the deaf person (Hilzensauer, 2006, 185).

Now, through new technologies, it is even easier. The hearing impaired person can call the center via live chat and use sign language to explain what assistance is needed, and the CA can answer in sign language after taking the call. The advantages of such centers are independence from friends or relatives and a sense of integration into modern society.

Maiorana-Basas and Pagliaro conducted a survey in order to find out which technologies hard of hearing and deaf adult Americans use and which websites they primarily frequent. The results showed that younger people often used smartphones and tablets (Apple, Android, Blackberry) as well as computers, whereas older people preferred the computer to the smartphone. The deaf participants stated that they primarily use technology for writing and reading e-mails, text messaging, and surfing the Internet (each of these activities was reported by more than 70 per cent of participants). About 40 to 50 per cent of the participants said that they use it to communicate via video through applications like FaceTime and Skype, and to write documents or papers. Games are popular as well (Maiorana-Basas & Pagliaro, 2014).

As far as frequented websites are concerned, search sites like Google and Wikipedia were at the top of the list; news and information sites as well as social media sites were also popular. The reason for this might be that deaf people are now able to gather information by themselves. They do not have to ask anyone to assist them or to explain something to them: if they do not know a word, they can often look it up on the Internet, including a picture describing the word (if there is one). Social media sites let them connect with people all over the world and stay in contact with everyone, even if they do not see each other that often. News sites keep them informed about what is going on around the world; there are even news sites made completely in sign language.

To conclude this chapter about Deafness and ICT, the emergence of the Internet opened up a new world not only to disabled people, but to everyone else as well. Though it made accessing information in general much easier, hearing impaired men and women still have to deal with problems of web accessibility, which are described in more detail in the following chapter.

7. Web Accessibility

Before describing various applications and websites for deaf and hard of hearing people, it is necessary to give a short insight into web accessibility for the deaf population, which is done in this chapter. Additionally, the Sign Language Interpreter Module is presented as an example of how web accessibility for deaf and hard of hearing people could be implemented.

Web accessibility is still an important topic for deaf and hearing impaired people. As their preferred language is sign language, they need a translation of the existing written information into their first language (Debevc et al, 2010, 181). Written language is often difficult for deaf people to understand. To make web surfing easier for deaf and hard of hearing people, information in sign language should be available. Regarding literacy, it is known that a large percentage of the deaf population is insufficiently educated and therefore faces various problems when reading or writing information (Debevc et al, 2010, 184). Documents are often difficult to comprehend because they are written for people with a standard education, which does not apply to the majority of deaf and hard of hearing people. This problem also arises when dealing with hyperlinks or websites with many sub-terms and categories. Often, the deaf person does not understand what a term means and has to look for the needed information at random, which is time-consuming and inefficient.

One problem that goes hand in hand with the implementation of sign language is the placement of the video: the video that translates the words into sign language often obstructs the view of the rest of the screen. Debevc and his colleagues developed a system that “enables the embedding of selective interactive elements into the original text in appropriate locations, which act as triggers for the video translation into sign language” (Debevc et al, 2010, 181). When a video is finished, the window closes automatically and the original web page is visible again. Despite projects such as the one Debevc and his colleagues designed, most websites are still not accessible to everyone. In 2006, the United Nations audited global web accessibility, investigating 100 websites from 20 different countries. The study showed that with a few changes the websites would be much more accessible and user-friendly. Many web hosts are of the opinion that written information or pictures are sufficient for deaf people.

Unfortunately, the fact that their first language is sign language is often neglected, which leads to difficulty in comprehending the information given on websites (Debevc et al, 2010, 182). Debevc and his colleagues named several arguments for providing sign language videos on websites: demographic data, literacy and access to information, navigation and reading ability, and multi-language requirements. Regarding demographic data, about ten per cent of the population has some kind of disability, and deaf people in particular face challenges whenever communication is audio-based. About 80 per cent of deaf people worldwide have problems with literacy and reading and/or an insufficient education, which leaves them struggling to comprehend difficult texts. Their ability to navigate is restricted by their limited knowledge of specific words; sign language videos might help them navigate through the sites. As far as multi-language requirements go, sign language should be implemented on websites that offer several national languages. Moreover, in some countries sign language is an official national language; this is the case in Austria, where Austrian Sign Language has official status. Therefore, every Austrian website should offer information in Austrian Sign Language as well. This is evidently not the case; only a few websites try to include deaf and hard of hearing people by offering video assistance. Although there is obviously a much greater need for apps and websites with inclusive content, some apps and websites are designed to help the deaf population. Some of these applications are described in the following subchapters.

7.1. Functionalities of web accessibility

When dealing with web accessibility, one has to look at seven different functionalities: video control, subtitles, image resizing, added sound, slow motion, fast-forward, and the shifting of the video across the web page. Accessible videos obviously need controls like start, stop, and pause, so that the person viewing the video can decide what to watch and at what pace. It should also be possible to enlarge the video, in order to let the person focus on facial expressions as well as hand movements (Debevc, 2010, 188).

Adding sound should also be possible if there is a need for it. Of course, deaf people do not need sound, but hard of hearing users may hear some sound when wearing a hearing aid. This way, they can follow the signs while still getting some audio information about the subject, if they want to.

7.2. Sign Language Interpreter Module

Debevc and his colleagues created the Sign Language Interpreter Module, also called the SLI Module, using a multimodal approach in order to combine different media elements, for example video, audio, and subtitles, into a new layer (Debevc et al, 2010, 190). The SLI Module considers three different channels, also called modalities, within one video document: the sign language interpreter is the visual modality, speech is an auditory modality, and subtitles are a textual modality (Debevc et al, 2010, 190). Debevc and his colleagues realized these channels through a transparent video that can be positioned anywhere on the screen. The goal was that the user only has to click on a button for the video to appear; it can be paused, stopped, rewound, and fast-forwarded if need be. After the video has finished, the user automatically returns to the normal website. The added hyperlink element allows the user to play the video while requiring little effort from the author of the website. The only requirement is the implementation of the hyperlink, which connects to a server where the video for the specific case is located (Debevc et al, 2010, 190). To put it more simply, the module is supposed to enable videos with a transparent background to play over an existing web page without altering the page’s structure. It synchronizes video, audio, and subtitles, and is activated when the user demands it. The videos can be placed wherever the user finds them least distracting. Figure 8 depicts how the SLI Module unites the different channels to provide comfort for deaf and hard of hearing users.


Figure 8: The channels of the SLI Module (Debevc et al, 2010, 191)

A video suitable for this approach is created using a chroma key background, which is used to remove the background from the subject of the video (Debevc et al, 2010, 192). Such a procedure is necessary in order to let the user focus on finger and lip movements without a distracting background. The person signing needs to wear clothing of a different colour than the green chroma key background. Video-editing software then lets the creators remove the background, making it transparent. Debevc and his colleagues created videos in the Audio Video Interleave (AVI) format; these were imported into the editing software, the background was removed, and the edges between the subject and the background were softened. The video was then exported and converted into a Shockwave Flash video, allowing high-quality videos that present the signs and movements without blurriness (Debevc et al, 2010, 192).

After the videos were finished, the next step was their integration into websites. The creators used HTML (Hypertext Markup Language) and JavaScript code in order to integrate the Module as quickly and simply as possible. The two main requirements were compatibility across browsers and the ability to view several transparent videos on one web page. The project was successfully implemented on several websites and is still in use, though it is not used as widely as one might expect given its benefits for deaf people.
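
Since the original SLI Module was built with Flash, the following is only a minimal sketch of the same idea using today’s HTML5 video element; the element IDs, styling, and video URL are invented for illustration, and true chroma-key transparency would additionally require a video format with an alpha channel.

    <!-- Hypothetical trigger link embedded in the page text -->
    <a href="#" class="sli-trigger" data-video="https://example.org/sli/intro.webm">office</a>

    <!-- Overlay player, hidden until a trigger is clicked -->
    <video id="sli-player" style="position: fixed; right: 1em; bottom: 1em; display: none;"></video>

    <script>
      var player = document.getElementById('sli-player');
      // Each trigger loads its video from a separate server and shows the overlay.
      document.querySelectorAll('.sli-trigger').forEach(function (link) {
        link.addEventListener('click', function (event) {
          event.preventDefault();
          player.src = link.dataset.video;
          player.style.display = 'block';
          player.play();
        });
      });
      // When playback ends, the overlay disappears and the page is visible again.
      player.addEventListener('ended', function () {
        player.style.display = 'none';
      });
    </script>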

8. Smartphone and Web Applications for deaf people

After defining all important terms dealing with deafness and technology, the following chapters describe several useful smartphone and web applications, websites, and gadgets that were created especially for hearing impaired people. This chapter takes a closer look at useful applications; the apps evaluated are SpreadTheSign, TapTap, UNI by MotionSavvy, and SignMedia SMART. As described above, the introduction of ever more powerful smartphones and smart devices has fostered the development of such applications and gadgets in the last few years and has opened up new possibilities for app developers and designers to deliver needed applications to users in different ways.

8.1. SpreadTheSign

SpreadTheSign (http://www.spreadthesign.com//) is a database for sign languages around the world. It is an international “Leonardo da Vinci” project, supported by the European Commission via the Swedish International Program Office of Education and Training (European Sign Language Center, 2012). It is available as an app as well as a website. The fact that every country has its own sign language creates difficulties in conversing with people from around the world. This app can help deaf people get in contact with people worldwide and learn some important vocabulary, for example when going abroad. From a research point of view, a broad comparison of different sign languages worldwide is made possible. The project also wants to support the interaction between deaf and hearing people in the educational sector.

The website is administered by the European Sign Language Center, a non-governmental and non-profit organization whose objective is to make national sign languages available to everyone, whether hearing impaired or not (European Sign Language Center, 2012). Easy and free access to sign language signs from more than 25 countries is offered to deaf and hearing users. The project started in 2012 and runs until the fall of 2015. The Alpen-Adria-Universität Klagenfurt also takes part in this international project by providing videos in Austrian Sign Language. As stated above, the goal is to make national sign languages accessible to everyone. The SpreadTheSign project has many target groups: deaf and hard of hearing people; hearing learners of sign language; sign language interpreters; teachers who learn some signs in order to communicate with their deaf students or use the database as an educational tool; families and friends of deaf men and women; and the hearing community in general. A side effect of the app is that hearing people become aware of the existence of different sign languages and of the possibility to obtain this knowledge in an easy and convenient way. So far, the database contains thousands of words and sentences in various sign languages, including Swedish, British (BSL), American (ASL), German, French, Spanish, Portuguese, Russian, Indian, and many more. Every person can easily search for a word in his or her language and then look at the signs used for this word in other languages, as can be seen in Figure 9. The user only has to click on a flag and gets the word (written and in sign language) in return.

Figure 9: Search result for the word “Büro” at the website of SpreadTheSign (European Sign Language Center, 2012)

As depicted in Figure 10, if you use the smartphone app for searching, you can simply swipe left or right to choose another language.

Figure 10: Mobile app version of SpreadTheSign (European Sign Language Center, 2012)

Baby signs, which deaf infants can learn in order to communicate in the earliest stage of growing up, are also included in this dictionary. Some deaf colleagues who were asked about the project confirmed that it is a good asset for communicating with other deaf people around the world. One big problem remains: not only does every country have its own sign language(s), there are also dialects and special signs that cannot be understood by people who are not part of the respective subculture. It might happen that you use the signs you found in the app, and your dialogue partner still does not understand you very well. Still, it is a first step towards unifying the deaf community around the world.

8.1.1. Implementation details

The developers of the website used HTML, CSS, and JavaScript to display the site on the client side. There is a difference between a script running on the server side and one running on the client side.

“Server-side scripts execute on the Web server, feeding XHTML to the browser as a product of their computations, while client-side scripts execute on the user’s computer” (Darlington, 2005, 187). In other words, a client-side script allows the user to interact with the site (e.g. pushing a button or viewing animations). A server-side script reads data out of a database and delivers the information to the client (the browser). For example, if the user opens an online web shop, the browser connects to the responsible server, which loads all available products from the database and sends them back to the client. This means that client-side scripts cannot access server-wide information and are therefore limited compared to server-side scripts. An example of a client-side scripting language is JavaScript, which was also used in creating SpreadTheSign; it is independent of any particular browser and can therefore be used in many contexts. Java and PHP (PHP: Hypertext Preprocessor) are examples of languages used on the server side.
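
To make this distinction concrete, the following minimal sketch shows both sides in JavaScript; Node.js stands in here for the server-side languages named above, and the port and product data are invented for illustration.

    // Server side (Node.js): runs on the web server and answers requests
    // by reading data and sending it back to the client.
    const http = require('http');

    function loadProductsFromDatabase() {
      // Stand-in for a real database query.
      return [{ name: 'Example product', price: 9.99 }];
    }

    http.createServer(function (request, response) {
      response.writeHead(200, { 'Content-Type': 'application/json' });
      response.end(JSON.stringify(loadProductsFromDatabase()));
    }).listen(8080);

    // Client side (browser): runs on the user's computer and can only
    // work with what the server sends back.
    fetch('http://localhost:8080/products')
      .then(function (reply) { return reply.json(); })
      .then(function (products) { console.log(products); });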

As mentioned before, the developers of SpreadTheSign used JavaScript, CSS, and HTML to create the site. CSS (Cascading Style Sheets) is a style sheet language that lets the developer format a document and change its look. In addition, a JavaScript library was used to simplify the client-side scripting of HTML across different platforms. This library is called jQuery; it allows the developer to navigate through a document swiftly, create animations, handle events, and select DOM elements (the Document Object Model represents the objects of a document in a tree structure).
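
The kinds of tasks attributed to jQuery here look roughly like the sketch below; the selectors, data attributes, and video path are invented for illustration and are not taken from the SpreadTheSign source code.

    // Select the flag icons, react to clicks, and animate the video swap.
    $(document).ready(function () {
      $('.flag').on('click', function () {
        var language = $(this).data('lang');   // e.g. "bsl" or "ogs"
        $('#sign-video').fadeOut(200, function () {
          $(this).attr('src', '/videos/' + language + '.mp4').fadeIn(200);
        });
      });
    });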

8.1.2. User perspective of STS

At an international meeting of the project partners for “SpreadTheSign” in Klagenfurt between the 26th and 28th of February 2015, the user perspective of the app and the website was discussed. The deaf users considered the app useful because it is available everywhere, and it can also be recommended for children. Though there were many positive opinions, for example about the possibility to communicate with other nations, there were some doubts because some special functions of the app have to be purchased for $4.99 in the app store.¹ Those extensions can be used on the website for free. The deaf users also wished for an offline mode; as of today, the app can only be used when online.

¹ These extensions are not necessary to access the normal functions of the app.

As far as the homepage was concerned, there was a wish for more illustrations to support the deaf users in understanding difficult words. Currently, there are illustrations for various words, as can be seen in Figure 11, but for the most part this feature is still in development.

Figure 11: The illustration for the word "blue" (European Sign Language Center, 2012)

Furthermore, the project partners from the different nations gathered information about the users of STS. Most of the users are deaf, but there are also hearing people who try to learn sign language by using the app or the web page. In addition, teachers and students in schools use the app to communicate with deaf classmates.

8.2. TapTap

TapTap was created for Apple’s mobile platforms, i.e. iPhone, iPad, and iPod Touch. Its goal is to assist deaf people in a hearing world: it alerts the user when a loud noise occurs nearby. It can thus help hearing impaired adults and children notice when other people are trying to get their attention. Once the app is launched, the device vibrates and flashes to notify the person that there is something that needs his or her attention (Vondracek, 2010). The user is also able to adjust the sensitivity to suit the environment and personal comfort level. Examples of situations in which this app is useful are a knock on the front door, a ringing phone, or a crying baby. It is debatable how well this application really works. On the one hand, it might be of real use at home, where the app alerts the person if the baby is crying. On the other hand, it might not work that well in public, because there are many different noises and it is uncertain whether the app can distinguish between them and filter out the ones that matter.

8.3. UNI by MotionSavvy

The company MotionSavvy is currently creating a product called UNI. The company recently launched an Indiegogo crowdfunding campaign to fund the project. Via crowdfunding, private persons or companies can donate money to a project they think is useful and needed in the future. UNI lets the user communicate easily with others by translating signs into voice and vice versa, as can be seen in Figure 12.

Figure 12: MotionSavvy's UNI translates sign to speech and speech to text (Stemper, 2015)

UNI is a small and portable mobile device that tracks the movement of the hands in real time and translates the signs into speech. At the moment, this is only possible for American Sign Language, but the company intends to develop the device further so that it will one day be able to translate various sign languages worldwide. For this purpose, they created the Indiegogo campaign mentioned above. The company raised over 45,000 USD by December 2014, exceeding its goal of 40,000 USD; 334 people funded the project through the campaign (Stemper, 2015).

The device includes a visualizer, which mirrors the image of the signer’s hands and provides real time visual feedback. This way, the user can see himself or herself signing (Stemper, 2015).

To be able to use this device, the person has to purchase the tablet Dell Venue 8 Pro; plans to implement Android and iOS versions have already been made. The tablet uses a technology called Leap Motion. In other words, the device uses a sensor to register where the hands and fingers are and follows their movement with a camera. This camera is paired with the recognition technology so that a seamless translation is possible (Stemper, 2015).

Additionally, the system studies the way you sign to improve the translations, and it is accessible everywhere without needing Wi-Fi. The device is now in the last stages of development and will be purchasable by fall 2015.

UNI by MotionSavvy promises to be one of the most useful devices for deaf people in the near future. It not only allows you to understand someone better; it lets you communicate with them regardless of the ability to hear. Seeing as conversations between hearing and deaf people are still a big problem, this device could change the way we communicate with each other significantly. One point that needs to be observed is the quality of the translation. The application is not yet available, which is why no judgement about this can be made. The question is whether the device creates real sentences that fit together or just translates the words without conveying the real meaning of the sentence.

8.4. SignMedia SMART

For deaf people, learning their national spoken / written language is quite difficult, and trying to learn a foreign language is even more challenging. In Austria, there is an additional factor influencing deaf people’s willingness and ability to study another language: the educational system. Deaf Austrians rarely have the opportunity to be educated bilingually because there are no special schools for that branch of education.² Teaching deaf people another language, for example English, can be made possible through applications like SignMedia SMART (http://www.signmediasmart.com/), a multimedia approach to supporting deaf people in their goal of learning English. The SignMedia project was the predecessor of the SignMedia SMART project and was funded by the Lifelong Learning Programme, also called the Leonardo da Vinci programme, of the European Commission (Hope, n.d.). Four partners were involved in creating the project: the University of Wolverhampton, the University of Turin, the Alpen-Adria-Universität Klagenfurt (Center for Sign Language and Deaf Communication), and the deaf-led media production company Mutt&Jeff Pictures Ltd. (Gansinger & Hilzensauer, 2015).

² There are schools specializing in deaf education, but they focus on written German, not English.

The main target group of the SignMedia project are deaf media professionals, as well as graduates and students of media-related studies with an intermediate command of written English. Additionally, sign language interpreters who want to improve their knowledge of media terminology, hearing co-workers of deaf experts, and other deaf learners who want to acquire new vocabulary are targeted. Hearing media experts can look up information in various handbooks or dictionaries; deaf media experts, however, have to deal with the fact that there is little helpful information available to them, since such reference works presuppose a high level of literacy. There are almost no glossaries indexing the special terms needed in this field of work. The SignMedia project’s goal was to help these individuals and improve their working life. The final product was launched in 2012 (Gansinger & Hilzensauer, 2015).

The SignMedia SMART project is a follow-up to the SignMedia project. Its goal is to separate the learning tool from the glossary, making the latter accessible on its own. SignMedia SMART started in 2013 and ends in 2015. Like its predecessor, it is funded by the Leonardo da Vinci programme of the EU and is coordinated by the University of Wolverhampton. New partners compared to the previous project are the European Sign Language Center in Sweden and Bellyfeel Ltd. in the United Kingdom. During this project, an app was created to make the content available anywhere and anytime (as long as the user has Internet access), which is helpful for deaf experts who need the vocabulary in a studio or outdoors, for example when filming. The web app contains an extensive glossary of media terms in various sign languages, including Austrian and British Sign Language. Clicking on a word leads the user to another page, where an embedded YouTube video displays the term and additionally gives an explanation of what the word means (Figure 13).


Figure 13: Description and video display of the term "Pre-Production" (Hope, n.d.)

As was mentioned in Chapter 8.1.1., there are server-side and client-side scripts. The developers of SignMedia SMART used JavaScript as the client-side scripting language, along with HTML and CSS. The web application is free to use, and users can look up media-related terms on a smartphone or tablet as well as on a computer. The web application is currently under construction and can already be accessed via the link http://www.Signmediasmart.com/. The final app will be launched in autumn 2015 (Hope, n.d.).

9. Websites for deaf people

The previous chapter described various applications that are useful for deaf and hard of hearing people. This chapter evaluates websites that are considered helpful for them. The websites analysed are Ledasila, which may be turned into an application in the near future, and Gebärdenwelt, an online platform that offers news and information in sign language.

9.1. Ledasila

Though Ledasila (http://ledasila.uni-klu.ac.at), short for Lexical database for Sign language, is not an app yet, it might be turned into one in the future. The database has existed since 1997 and was developed within the framework of a project called “Gehörlosenserver”, which was financed by the Austrian Ministry of Science and Transport (Krammer, Bergmeister, Dotter, Hilzensauer, Okorn, Orter & Skant, 2001, 191). Ledasila is the biggest lexical database for sign language in Austria, and everyone may use it for free. A person can search for a certain word, but he or she can also choose, for example, a region, a type of sign, the positioning of the hands (location), the form of the hands (handshape), or a semantic field. If one decides to search only for a word, he or she gets all possible matches in a list. Clicking on a word leads to a video where the word is translated into sign language. The results may include standard Austrian Sign Language signs, but there are also other possibilities, for example regional vocabulary that is used only in a certain part of Austria, as depicted in Figure 14.


Figure 14: Search results for the word "Büro" in Ledasila (Alpen-Adria-Universität Klagenfurt, 2015)

The example in Figure 14 shows that it is difficult to use the right signs when talking to people who are not from the same town or region as oneself. With the launch of the website as an app for iOS and Android, learning new words would become much easier. Students of the Alpen-Adria-Universität Klagenfurt who attend the Austrian Sign Language (ÖGS) courses use the website to refresh their skills or look up new words. Additionally, the database is a resource for teachers of hearing impaired pupils; often, this database is their only way to learn some signs in order to communicate with the children. A smartphone application would make this process much easier and faster as well.

9.1.1. Linguistic concept of the database

The database module is a mixture of Liddell & Johnson’s Movement-Hold Model, HamNoSys, and the SignPhon category system, which were explained in Chapter 4.3. about notation systems. The database has two characteristics that were non-existent before: the first concerns the representation of signs, and the second concerns the openness of the sets of categories and corresponding values (Krammer et al, 2001, 192).

To represent the sign components, Krammer et al used an adapted version of the Movement-Hold Model, assuming that each sign consists of hold and movement segments that follow each other in time (Krammer et al, 2001, 192). Furthermore, handshape, location, orientation, and other manual as well as non-manual parameters appear at the same time. The second characteristic of the database is an open set of categories and category values. If an analysis shows that a certain category or category value is not yet included in the database, a user can add it at any time. This can be done easily: the user only has to provide details about the category and add its values (Krammer et al, 2001, 193).

9.1.2. Technical aspects

Two different video formats are used in the database. Compressed MPEG-1 videos with a smaller file size offer faster downloads, whereas the original AVI videos, called “master tapes”, are bigger and of better quality (Krammer et al, 2001, 193f). Ledasila started out as a web application, which means that the user does not have to install anything to use it; access to the Internet and a web browser is sufficient (Krammer, Bergmeister, Bornholdt, Dotter, Hausch, Hilzensauer, Pirker, Skant, Unterberger, 2009). The web page was created with ASP, short for Active Server Pages, a Microsoft technology for creating dynamic websites. Back then, ASP was widely used and a good choice; today, there are various other languages that are much easier to write and maintain. The system is currently still hosted on a Windows 2003 Server of the Alpen-Adria-Universität Klagenfurt, but a change to a newer server is necessary and will be made in the near future.

9.2. Gebärdenwelt

The website www.gebaerdenwelt.tv is an online platform for deaf Austrians which allows them to see daily news in Austrian Sign Language and is therefore unique in its service (Servicecenter ÖGS.barrierefrei, n.d.). The user can watch the videos and simultaneously read the news in German. This service is a project of the service center ÖGS.barrierefrei, which was established in 2005 and is funded by the Federal Ministry for Work, Social Policy, and Consumer Protection. The center wants to give deaf people the same opportunities and chances as everyone else. The Gebärdenwelt website has been online since May 2008,

and through a cooperation with the Austrian Press Agency (APA), the team is able to produce daily news from Monday to Friday (Servicecenter ÖGS.barrierefrei, n.d.). The goal of this project is to offer accessible news to everyone, to educate deaf people, and to implement the UN Convention on the Rights of Persons with Disabilities. The project has its own film studio and a team of hearing and deaf coworkers who create high-quality videos, which are signed exclusively by deaf presenters in order to make sure that it is done the right way. Additionally, the project has made several films, including a documentary about an event in China in 2009 and one about the world congress of deaf people in South Africa in 2011. In 2012, another section, called Gebärdenwelt Kinder, was added to the website; its topics are partly created by deaf children, and it offers stories for children. Other sections revolve around sports, culture, education, science, technology, nature, economy, and health. This project offers a lot of information for deaf users: they are not only able to follow the daily news without comprehension problems, they can also educate themselves further and contribute by sending the editors suggestions for interesting topics. Deaf users are invited to actively participate and not only watch.

10. Gadgets

As shown in Chapter 8, many new applications can be used on smartphones and tablets. It is not just people born deaf who need assistance; many elderly people can no longer hear well and thus have trouble understanding others. This chapter takes a closer look at gadgets: small tools that physically help hearing-impaired people cope better in everyday life. The Oxford Dictionary defines a gadget as a “small mechanical or electronic device or tool, especially an ingenious or novel one” (Oxford University Press, 2015).

The following subchapters describe two useful gadgets for deaf people. First, the Conversor Pro, which helps hearing-impaired people listen to others more easily by transmitting an amplified voice signal to the hearing aid, is discussed. Then the ZBand, a vibrating silent alarm band, is described.

10.1. Conversor Pro

The Conversor Pro is an assistive listening device which makes it easier for people with hearing difficulties to understand others. For this purpose, it uses a magnetic signal, the so-called “telecoil”, to send the amplified voice to the hearing aid. Normally, the hearing aid’s microphone picks up voices together with the surrounding noise; with this technology, the signal can come directly from a device like the Conversor (Hearing Loss Association of North Carolina, 2015).

The device consists of two components (Figure 15): the transmitter, which includes the microphone, and the pendant receiver, which either amplifies the sound with a loudspeaker or – as mentioned before – transmits it to the hearing aid via telecoil.

Figure 15: Components of the Conversor Pro (Conversor Ltd., 2015)

The microphone component can be placed on the table near a person who is speaking and thus transmits the voice to the receiver, which is worn around the user’s neck. It does not matter which source of sound is used; a TV works just as well as a group of people. Unfortunately, the Conversor Pro cannot connect to any other digital device – the sound can only be processed with the accompanying receiver (Conversor Ltd., 2015).

10.2. ZBand

The second gadget discussed in this chapter is a vibrating wristband that works with Android and Windows smartphones, alerting the user through vibrations whenever the smartphone sends a signal to it. The device looks just like a normal wristband. The developers designed it for different purposes and target groups: deaf and hard of hearing people, but also, for example, couples who get up at different times. It does not make any sound, but its vibrations can be felt by the user. This way it can help people get up in the morning as well as inform them about appointments. The wristband connects to the phone via Bluetooth, which is implemented in almost all devices nowadays, and different apps can then use the wristband for their own purposes. Settings like the strength of the vibration can be adjusted. Furthermore, the wristband has a button that the person can use to snooze or turn off the alarm. When the battery is empty, it needs just 30 minutes to charge and then lasts for another 10 days (ZBand, 2014).

These features make the wristband very interesting for deaf people, who can recognize sounds or alerts from their smartphones through vibrations. In the future, the wristband could, for example, notify the user when a fire alarm goes off. Connecting the wristband with fire alarm detectors or a doorbell opens up new opportunities for deaf and hard-of-hearing users.

11. Avatars

This chapter and its subchapters give the reader an insight into what an avatar is and how it is used to animate sign language for the deaf population. Early as well as present and future developments in the area of sign language avatars are discussed.

Avatars give deaf people the possibility to have a more interactive conversation with others. Whereas normal videos are static and cannot be changed, an avatar adapts to the users and allows them to communicate dynamically (Kipp, Heloir, Nguyen, n.d., 2). In addition, videos of a person using sign language cannot be anonymized, because the signer’s facial expressions are essential. An avatar offers the possibility to convey information without having to address the problem of anonymity.

Furthermore, videos of signers have to be of high quality and are therefore often very expensive, and every time the content changes, a new video has to be made. Further advantages of avatars are consistent video backgrounds, the smaller size of avatar videos, and controllable speed and viewing angles (Tryggvason, n.d.).

Some challenges involved in creating a sign language avatar are content representation and animation. There is no universal writing system, as was mentioned before. Furthermore, it is important that the animation is of good quality, because the avatar’s facial expressions are also very important for understanding the message.

11.1. Sign Language Animation

Probably the most common animation method is motion capture, which can also be used to animate sign language. On the one hand, motion capture leads to good comprehension and is approved of by many test subjects; on the other hand, it is quite expensive and work-intensive (Verlinden, Zwitserlood, & Frowein, 2005). Verlinden and her colleagues name an alternative to motion capture, the synthetic creation of animation: an avatar represents a person using sign language. The first step in creating such an animation is the use of a notation system; some of these have already been described in Chapter 4.3. Verlinden and her colleagues chose HamNoSys, designing a computer font especially for that system so that every component is assigned a specific value (Verlinden et al, 2005, 4760). The notations need to be translated into something a computer can read. For this reason, the University of East Anglia created a special markup language, the Signing Gesture Markup Language, also known as SiGML. SiGML is an XML encoding of HamNoSys. After the signs have been translated, the data can be sent to the avatar, which performs the requested signs (Verlinden et al, 2005, 4760). A more detailed view of SiGML has already been given in Chapter 5.4. Figure 16 depicts the transcription of HamNoSys into SiGML, which translates the word into information a computer program is able to read.


Figure 16: SiGML translation of the HamNoSys transcription (Verlinden et al, 2005, 4760)

There are several challenges in turning HamNoSys into numerical animation data. The symbolic categories need to be replaced by numerical locations, distances, and joint rotations. Locations are defined proportionally to the length of the arms, since the hands are placed somewhere within that range (Glauert et al, 2004). Some features and postures are neglected by HamNoSys or are not displayed in enough detail; this information has to be added in SiGML. Glauert and his colleagues define the significant aspects as those that would remain the same even if the sign were performed by a different avatar with other body proportions (Glauert et al, 2004, 104).
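
A purely illustrative sketch of this kind of conversion is given below: a symbolic location category is replaced by a distance scaled by the avatar’s arm length, so that the same sign description works for avatars with different body proportions. The proportion values and function names are invented, not taken from the cited systems.

    // Illustrative only: map a symbolic location to a numerical distance,
    // scaled by the arm length of the particular avatar.
    const LOCATION_PROPORTIONS = { chest: 0.45, shoulders: 0.60, chin: 0.75 }; // invented values

    function locationToDistance(location, armLengthCm) {
      return LOCATION_PROPORTIONS[location] * armLengthCm;
    }

    console.log(locationToDistance('chest', 60)); // 27 (cm) for a 60 cm arm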

11.2. Early development of Sign Language Avatars

The first projects on sign language avatars started in the 1990s. All the early projects had considerable difficulty creating an avatar that depicted the movements and facial expressions as it should. The technical possibilities were not the same as today, and designing an avatar took a lot of time and effort.

One example of an early avatar system is Simon the Signer. Through this system, television subtitles were turned into signs in Sign Supported English, which appeared as an optional commentary on the TV screen (Glauert et al, 2004). Sign Supported English borrows signs from British Sign Language but uses them in English word order (there are different gradations); such sign-supported variants also exist for other sign languages. Glauert and his colleagues describe the project as successful in terms of its technological implementation, but it was criticized by deaf users who were not content with Sign Supported English instead of proper British Sign Language (Glauert et al, 2004).

Another one of the first avatars was TESSA. In the United Kingdom, the University of East Anglia, Televirtual, Consignia, and the Royal National Institute for Deaf People created an avatar called “TESSA”, short for Text and Sign Support Assistant. It was developed to assist deaf or hard of hearing individuals with their daily transactions at a post office (Cox, Lincoln, Tryggvason, n.d.). Through technology and the virtual animation of a human body, it enabled customers and post office workers to communicate with each other. The system recognized certain fixed phrases as well as variable values such as days or amounts of money (Glauert et al, 2004). The post office assistant spoke into a microphone; the sound was transferred to a computer with a speech recognition system that converted the words into British Sign Language, and the avatar signed the words to the customer. English text could also be displayed, for example for hard of hearing people who do not use sign language. The movements of the avatar mimicked those produced by a native signer beforehand; the signer’s hand, body, and mouth movements had been captured with the help of various electronic sensors.

11.3. Recent and future developments

In this chapter, some examples of recent projects in the area of animated sign language avatars are given, together with an outlook on what could be possible in the future.

11.3.1. The eSIGN Project

eSIGN was an EU-funded project whose purpose was to provide information in sign language using software technology and avatars. The project members created tools to help others implement signed versions of their information on websites (Tryggvason, n.d.).

To use the software, a person has to download it or get the CD-ROM from the developers. After installing the product, the deaf user can see virtual signing wherever a website provider has implemented it; the avatar may already be part of the website, or become visible after clicking a button. The creation of signed content consists of two basic elements: first, every sign has to be created individually, for example using SiGML descriptions of the signs; second, complete signed sentences have to be formed. Website owners can ask for an implementation of this software on their website; the costs depend on the amount of content that needs translation (Tryggvason, n.d.).

11.3.2. ViSiCAST

Another project dealing with sign language avatars is ViSiCAST, developed by members of the University of East Anglia in Norwich, United Kingdom. During this project, a piece of software called Anigmen was developed in order to synthesize animation data from descriptions given via HamNoSys. A description of the geometry of a particular avatar is also part of the software; it stores the shape of the avatar in order to create signs suitable for that type of avatar (Kennaway, 2003). Kennaway summarized the steps involved in creating the animation, which can be seen in Figure 17.

Figure 17: Synthetic signing animation (Kennaway, 2003)

First, the original text is translated into a Discourse Representation Structure, from which a sequence of signs is generated; the HamNoSys notation system is then used to represent these signs in written form. Kennaway describes the HamNoSys notation system as the only possible option because it is the only system that can be used worldwide, in contrast to other systems like Stokoe, which was specifically designed for one sign language (Kennaway, 2003).

One problem with using HamNoSys for computer programs is that the notation system omits information that is obvious to a human reader, but not to the program. When designing a program, you have to consider every angle and define everything clearly and in detail; otherwise the program will not work the way it is supposed to.

To design a HamNoSys version suitable for computer animation, the project members created the aforementioned Signing Gesture Markup Language. The SiGML representation does not differ from the information gathered by HamNoSys, but it is more suitable for computer processing (Kennaway, 2003). The second step involves the Anigmen animation synthesizer, the system that creates animation from the data acquired through SiGML. The system uses “expat, an off-the-shelf XML parser to read SiGML into its own internal data structure” (Kennaway, 2003, 2). The system then tries to add details that may have been left unspecified in the SiGML transcription, like default locations or the duration in seconds of each movement. After Anigmen has finished this step, a more detailed SiGML transcription can be written out, featuring details that are not available in HamNoSys (Kennaway, 2003). The next step involves the calculation of the avatar’s movements every 1/25 of a second.

“On current desktop machines, Anigmen requires about 1 millisecond to calculate each frame of data, i.e. about 2.5% of the available milliseconds for animation at 25 frames per second” (Kennaway, 2003, 2).
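
Kennaway’s 2.5 per cent figure can be verified with one line of arithmetic; the snippet below is just that calculation, not code from Anigmen.

    // At 25 frames per second, each frame has 1000 / 25 = 40 ms available;
    // ~1 ms of computation per frame is 1 / 40 = 0.025, i.e. about 2.5%.
    const msPerFrame = 1000 / 25;               // 40 ms per frame
    const load = 1 / msPerFrame;                // 0.025
    console.log((load * 100).toFixed(1) + '%'); // "2.5%"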

Finally, the avatar is rendered, which means that the avatar appears on the screen in specified postures at the required times.

The projects eSIGN and ViSiCAST already started to turn the available technology into something helpful for the hearing impaired community. Until now, however, avatars have been rather static: they are able to produce signs, but do not display the corresponding facial expressions. This problem needs to be addressed. As has been mentioned before, sign language does not consist only of hand movements; it also features facial expressions. It would be a great help for deaf users if the avatar could also display emotion. This way, they would feel like they are talking to a real person and not just a machine.

Some projects already deal with this problem, for example the project SiMAX (SignTime GmbH, n.d.). Since 2011, the lead partner Signtime TV and several others (matrixx IT-Services, IBM, etc.) have been developing an avatar that is able to express emotions and body language. The user can choose when to add these expressions to the translations and for how long. With the rapid evolution of technology, an avatar that mirrors a real person might be possible in only a few years.

12. Models and Theories of Individual Acceptance

The preceding chapters offered an insight into deafness and into valuable technological tools that might help deaf people in their everyday lives. Still, it has not yet been determined how deaf men and women perceive these applications and websites. This chapter describes several technology acceptance models, including the Technology Acceptance Model and the Theory of Reasoned Action, which might be suitable for evaluating the value of these technologies. Furthermore, two models are chosen on which the qualitative interviews rely.

12.1. Theory of Reasoned Action (TRA)

This theory is used to predict a wide range of behaviors and negative as well as positive feelings toward a target or product (Venkatesh, Morris, Davis & Davis, 2003). It posits that people “make behavioural decisions on the basis of a reasoned consideration of the available information” (Terry, Gallois & McCamish, 1993). An example of such a decision process would be someone thinking about joining a club: he or she will evaluate whether the time and effort (and possibly money) that need to be put in are worth the result. Most likely, the person will determine whether there will be a positive outcome and decide according to that consideration. In more detail, behavioral and normative beliefs, together with the evaluation of the outcome and the motivation for the task, lead to a certain attitude towards a subject, which creates an intention to do something, in the end leading to a specific behavior toward it (Figure 18).


Figure 18: Theory of Reasoned Action (Terry et al, 1993, 9)

12.2. Technology Acceptance Model

Fred Davis (1989) was the first scientist to define the term Technology Acceptance Model (TAM). This model describes how users come to accept and react to technology. Davis identified different factors that influence the decision to use a product, which led to various extended versions of the TAM. The factors that Chuttur (2009) uses for his extended Technology Acceptance Model, also displayed in Figure 19, are:

- perceived usefulness
- ease of use
- intention to use
- usage behavior
- voluntariness
- experience
- subjective norm
- image
- job relevance
- output quality
- result demonstrability

Perceived usefulness is the degree to which a person believes that using a particular system would enhance his or her job performance. Perceived ease of use is the degree to which a person believes that using a particular system would require no additional effort (Davis, 1989).

The subjective norm is a person’s perception of whether he or she should do something; people who are important to the person may influence this factor by recommending or advising against a certain product. The image is the perceived gain in status within a social system if the product is acquired. Job relevance describes the capability of a system to enhance a person’s job performance. The output quality refers to the perception of how well a system performs the tasks that match the goals of one’s job (Chuttur, 2009). Result demonstrability is the degree to which the results can be observed and communicated to others. Voluntariness reflects the willingness to adopt the product without outside pressure. Experience refers to previous experiences with a specific technology product (Chuttur, 2009). If the technology is perceived as useful and easy to use, the person develops a certain usage behavior, which leads to technology acceptance.

Figure 19: TAM (Legris et al, 2003)

12.3. Diffusion of innovation theory

European sociologists and anthropologists first studied the diffusion of innovation in the 19th century. In 1962, Everett Rogers published his work “Diffusion of Innovations”, which explained how, why, and at what rate new ideas and technology spread through cultures (Rogers, 2003). Rogers found that the diffusion process relies on human capital such as knowledge, experience, skills, and talents. He identified four elements that influence the spread of an innovation: the innovation itself, the communication channels used, time, and the prevailing social system. The innovation has to reach a critical mass. Adopters (people who accept and embrace a technology) are put into different categories, as can be seen in Figure 20.

Figure 20: Categories of adopters (based on Rogers, 2003)

A technology innovation has to reach about 16 per cent of the population (the innovators plus the early adopters) before it becomes widely adopted. The first ones to adopt the innovation are the innovators: they like to explore new things and are enthusiastic about innovations. The second group to adopt are the early adopters; they like change and want to be among the first people to try and buy new products. The early majority makes up the biggest percentage of users: they are mainstream adopters who accept change and go along with it. The late majority is skeptical and often adopts products out of necessity rather than desire. The last group consists of the laggards, who value tradition and wait to purchase products until there is no other way.
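Rogers derives these categories from the cumulative share of the population that has already adopted: 2.5 % innovators, 13.5 % early adopters, 34 % early majority, 34 % late majority, and 16 % laggards. This is also where the 16-per-cent threshold mentioned above comes from (innovators plus early adopters). The following Python sketch simply applies these standard cut-offs; the function name and interface are invented here for illustration.

def adopter_category(adoption_percentile):
    """adoption_percentile: percentage (0..100) of the population
    that adopted the innovation before this person did."""
    if adoption_percentile < 2.5:
        return "innovator"
    if adoption_percentile < 16.0:    # 2.5 + 13.5
        return "early adopter"
    if adoption_percentile < 50.0:    # + 34
        return "early majority"
    if adoption_percentile < 84.0:    # + 34
        return "late majority"
    return "laggard"                  # final 16 per cent

print(adopter_category(10.0))  # -> early adopter

Applied to the interviews in Chapter 13, a person who buys a new device as soon as it appears would fall below the 16 % mark and count as an early adopter, while one who waits until there is no alternative would fall into the final 16 %.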

13. Qualitative Interviews

In this chapter, hypotheses are formulated so that the interview questions can be asked accordingly. Four hypotheses are developed and explained in Chapter 13.1. The following subchapter explains how the interviews were conducted. Chapter 13.3 presents the results of the interviews in relation to the hypotheses. Finally, based on the results, some implications for the future are drawn.

13.1. Hypotheses

For the qualitative interviews with several deaf participants, one or more hypotheses first had to be formulated.

Given how the ways of communicating have changed in the last decades, technology appears to play a big part in our lives and to have opened up many previously barred opportunities, especially for disabled people. Considering this, the first hypothesis is:

H1: The convergence of technologies, for example the emergence of the smartphone, has improved the lives of hearing-impaired people.

Furthermore, the ways of communicating are manifold and let hearing-impaired men and women use different devices and applications to transmit messages. This leads to the question of whether the ways deaf people communicate have changed.

H2: The communication within the deaf community has changed in such a way that there is more communication overall, but less direct communication (for example, weekly meetings of an association).

In relation to the Technology Acceptance Model and the diffusion of innovation theory, the following hypothesis is formulated:

H3: Deaf people tend to be early adopters because of the opportunities technology offers them. Furthermore, they purchase products because of their usefulness and ease of use (two central aspects of the Technology Acceptance Model).

The importance of ease and speed of use further suggests the following hypothesis:

H4: Deaf people wish for more technology suitable for them.

13.2. Conduction of interviews

The qualitative interviews were conducted with five deaf participants, one male and four female. All five of them have a stable work and living environment. Their ages range from 30 to 51 years. An interpreter translated between Austrian Sign Language and spoken German during the interviews, as this made it easier for the interpreter as well as the deaf participants to understand what was asked of them. Every interview lasted between 12 and 17 minutes. The interviews were conducted in a room without any disturbances, so that the interviewer and, above all, the interpreter could focus on the task at hand. The interviews were audio-recorded with the Apple app "Voice Memos", which makes it possible to record conversations, save them, and upload or e-mail them. This way of recording the interviews was very efficient, and the voice quality was good.

The interviews dealt with the following topics (the detailed questionnaire can be found in the appendix): First, the subjects were asked whether they have a smartphone, whether they use it frequently, and what they use it for. The time when they first got a smartphone is also of importance, as it allows one to conclude whether they are late or early adopters. Second, they were asked whether they think that technology has changed their way of communicating, and whether it has affected their membership in the deaf community. Here it is important to ask about the communication channels (digital or direct) and the frequency of the communication, as well as participation in events and meetings of the deaf associations.

Third, they were asked whether they think that new technology like computers or smartphones helps them in their everyday lives and, if so, to give some examples. They might also use other technologies, such as vibrating alerts and dawn simulators. Another point that needs to be considered is what is important to them when buying a new technology. These questions were followed by the part about websites and apps. Questions like "Do you know some websites that offer translation into Austrian Sign Language?" or "What do you think about SpreadTheSign / UNI / Ledasila?" aim to determine whether the subjects are happy with these applications or whether there are ways to improve them. Finally, the five participants were asked whether they use or know other apps for deaf people and to express some wishes they have for future websites and applications.

As far as the first hypothesis is concerned, it is expected that the deaf users are of the opinion that new technology has made their lives easier and more comfortable. As far as the communication between deaf people in their communities is concerned, the results might point to a trend towards online communication via video chats and less face-to-face communication.

As was described in the previous chapters, technology for deaf people has been available for some years. Still, a lot more could be done, starting with building upon the existing projects and creating new ideas and applications that help hearing-impaired people.

13.3. Results and implications for the future

This chapter presents the results of the interviews conducted with the five deaf participants. Each of the five participants is given an alias and an abbreviation: one person is Subject 1, shortened to S1, the next person is S2, and so on. The following subchapters focus on the hypotheses and whether or not they are supported by the results of the interviews.

13.3.1. First hypothesis

The first hypothesis dealt with new technologies and their impact on the quality of life of deaf people:

H1: The convergence of technologies, for example the emergence of the smartphone, has improved the lives of hearing-impaired people.

This hypothesis is mostly supported by the results of the interviews. The emergence of convergent technology has improved the lives of hearing-impaired people in many ways. For example, the smartphone has made communication a lot easier.

S1 (female) describes the smartphone as practical for conversing with others, especially through video telephony. The possibility to record a video and send it to someone is also seen as a huge advantage. Although S1 thinks that the smartphone and other technology is somewhat practical, she is not too fond of it, because it is distracting and leaves little time for anything else. Other technology, like a visual alarm for the doorbell, is not used by S1 because she finds it too expensive. The subject uses some applications, mainly SpreadTheSign and Ledasila. She considers Ledasila a very good database, in which one can look at all the dialects and variations in Austrian Sign Language.

S2 (female) has a different view on this matter, being of the opinion that the smartphone and other technology is really helpful and supportive. S2 also uses the smartphone mainly to communicate with family and friends. A huge advantage for her is that she can converse with her boyfriend, who does not live in Austria. By using the application WhatsApp (remark: an instant messaging app for smartphones, one of the most downloaded and used apps worldwide), they can easily communicate daily without having to pay for it. Another reason for her to use the smartphone is the speed of information transfer: information arrives within seconds. Another positive remark from S2 concerned the usage of the computer. The Internet allows her to look up new, unfamiliar words quickly and easily. Additionally, she can use a program to translate written German into another language, for example English, in order to communicate with some colleagues. In general, S2 has had good experiences with new technology.

S2 uses various applications like SpreadTheSign, Ledasila, Signme, and IsignIT. IsignIT was created by Helene Jarmer and is an app that deals with health topics. Signme offers animated drawings of signs.

S3 (female) uses the smartphone to write short messages via text messaging or WhatsApp, look something up on the Internet, write e-mails, and browse Facebook. S3 thinks that especially the smartphone has helped her in her everyday life. Getting information fast and without problems is an advantage. S3 adds that she uses Facebook not only to get in touch with other people, but also to get information, for example cooking tips or new signs that are emerging. Additionally, if she sees some interesting news, she starts to look into it online. All this information she gathers through her smartphone. One thing that S3 wishes to have is a doorbell that connects to the smartphone, which would be really helpful in everyday life. S3 knows and uses the same applications as the second subject. Additionally, she uses an app featuring a database for German Sign Language in order to compare Austrian and German Sign Language. S3 uses SpreadTheSign to learn new words in other sign languages. For example, if she went to Spain, she could look up some new words in Spanish Sign Language to be able to communicate with deaf people she might meet there.

For the fourth subject, the smartphone is also an important and integrated part of her life. S4 (female) uses the smartphone foremost to look at Facebook, but she has also downloaded WhatsApp and SpreadTheSign. She thinks that SpreadTheSign is constantly spreading within Europe and is by now known by a lot of deaf colleagues. She also often takes pictures and conducts bank transactions online. S4 uses messaging services to make appointments with friends or family. She also thinks that the smartphone helps her in her everyday life; the computer is not that important to her.

S5 (male) is fond of his smartphone as well. He uses it daily to communicate via WhatsApp and SMS, and of course via video telephony. He thinks that the new technology has improved the communication between him and other deaf and hearing people. Before the smartphone, he could not just send a short message to someone that easily. He uses text messaging to communicate with his sister and other relatives, and video telephony to converse with deaf colleagues and friends. This has made it possible to talk to each other for a longer time, just as hearing people can talk to each other on the phone for hours. S5 states that he appreciates the opportunity to communicate with others via video for as long as he wants to.

S5 uses the computer to look up information on websites that feature sign language. He prefers to watch the news either on TV at a certain time, when it is translated into Austrian Sign Language, or online at the ORF TVThek (the programs made in Austrian Sign Language can be watched online for one week after the TV broadcast). S5 states that the online news service in sign language is one of his favorites, because he is no longer under pressure to be home at a certain time to watch the news, as he was before such an online service was offered. Now he has the opportunity to watch the news with an integrated overlay of an interpreter. Additionally, he likes to visit the website Gebärdenwelt, where he can gather information in Austrian Sign Language. S5 also uses the applications SpreadTheSign and Ledasila. He does not have any other apps on his phone, because most of the interesting ones are in English, which he does not understand. A part of his family also uses SpreadTheSign to learn some new signs, and he is really surprised and pleased at how fast they are able to learn with the app.

Summarizing the results of the interviews, it is clear that the emergence of the smartphone and other technologies has improved the lives of hearing-impaired people, mostly with regard to the way they communicate. Services like WhatsApp and video telephony offer them the opportunity to talk to others efficiently and in their native language. S1 is the only one who thinks technology is distracting and is therefore not too fond of it, but she also appreciates the possibility to use the smartphone to communicate with others.

13.3.2. Second hypothesis

After these results had been evaluated, the second hypothesis was examined:

H2: The communication within the deaf community has changed in such a way that there is more communication overall, but less direct communication (for example, weekly meetings of an association).

S1 takes part in meetings at the deaf association. She prefers to have direct contact with her colleagues from the association and barely uses her smartphone to interact with them. She states that the membership of the association is declining: two to three people leave the association every year, with almost no young deaf people joining. S1 does not think that new technology has anything to do with this decline; in her view, it cannot replace direct communication.

S2 states that she uses WhatsApp to exchange information with other hearing-impaired associates quickly and free of charge. Short messages are written on the smartphone, but if she wants to have a longer conversation with someone, she prefers to speak to him or her directly. S2 feels that information is conveyed better in real life; meeting someone in person is preferable, because technology can also have problems, for example the video stopping all of a sudden, which cannot happen in real life. S2 visits the deaf association once a month because she is part of the advisory board. S2 also thinks that fewer people are attending the meetings, but does not attribute this to technology; rather, she attributes it to the fact that younger people often have a cochlear implant and go to integrated schools.

S3 also attends the meetings of the deaf association and uses short messages or video telephony to transfer information quickly, but prefers to hold longer conversations in person. S3 confirms what the other subjects have said so far about the diminishing numbers in the deaf associations. She adds that it is now possible to cancel an appointment at the last minute, which was not possible before, and that this could also be a reason for people not showing up. Previously, people came to the meetings even if something else came up.

S4 does not attend meetings at a deaf association. She was a member once, but gave it up and now prefers to meet deaf friends privately. She uses technology to make appointments with them, but considers it better to talk to them in person.

S5 attends meetings at the deaf association regularly. He sees a possible connection between technology and the diminishing numbers of participants. S5 states that before such technical products existed, the associations were attended by a lot of people, and now attendance is decreasing. In addition, like S2, he thinks that young adults are integrated into mainstream schools and do not want to be members of an association. The associations have to search actively for possible members. In contrast, S5 states that the situation is different in Styria, because there are a few deaf pupils in one school who have contact with each other and also visit the meetings together, whereas in Carinthia there are a lot of young adults who have no contact with other deaf people.

Considering these answers, the second hypothesis is not supported. Indirect communication is not replacing direct communication; rather, it complements it. Indirect tools are used to make appointments or send short pieces of information, but direct communication is preferred because deaf people can converse more easily and fluently that way. Indirect communication can therefore be seen as an addition to direct conversation, and not as a replacement.

13.3.3. Third hypothesis

The third hypothesis looks at the relation between deafness and technology acceptance:

H3: Deaf people tend to be early adopters because of the opportunities technology offers them. Furthermore, they purchase products because of their usefulness and ease of use (two central aspects of the Technology Acceptance Model).

This hypothesis was developed on the basis of two models of acceptance. As was mentioned before, the Technology Acceptance Model by Fred Davis names factors that influence the acceptance of new technology. Most important in this case are the perceived usefulness and the ease of use of the product. Other factors are the subjective norm and people influencing a decision. The change in public image and the job relevance can also be considered; in the case of technology for the deaf, however, job relevance and public image can be neglected. The subjects interviewed named several factors that are relevant to them when purchasing a new product, in this case a smartphone. S1 and S5 state that the communication services have to work properly. S2 is of the opinion that it has to be efficient to use. S3 emphasizes the importance of quality, especially for video telephony, and of reachability (in the case of the smartphone: network reception). S4 thinks that the smartphone needs to be practical, so that you can reach someone or something quickly and easily.

Looking at these results, it can be said that the deaf subjects buy products that are useful to them as well as easy to use. The usefulness for them certainly lies in communicating with others through video telephony and messaging services. S1 also mentioned that she bought her phone because her husband recommended it to her. The others mainly focused on the quality of the video telephony in their purchase decision.

The third hypothesis also states that deaf people tend to be early adopters. This term was taken from the diffusion of innovation theory by Rogers. The early adopters are the second group to adopt a new technology; they like change and want to be among the first people to try and buy new products. This part of the hypothesis was tested by asking the subjects whether they would buy the UNI device if it were available in Austria.

S1 answers that she probably would not purchase it. She mentions that she won a tablet which she never uses, and she does not think she needs the UNI device. Due to this statement and several others she made during the interview, S1 can be considered as belonging to the group of laggards, who value tradition and wait to purchase products until there is no other way. S2 hesitates about buying the product, stating that if others used it and recommended it to her, she would probably buy it. This makes her part of the early majority: mainstream adopters who accept change and go along with it. S3 is enthusiastic and declares that she would buy it immediately if it were available, because of the opportunities such a device could offer regarding communication with hearing people. S3 clearly belongs to the early adopters. S4 cannot imagine how the product would work and whether she would use it. S4 might belong either to the early or the late majority, depending on how she values the opinions of others and whether she accepts change or is skeptical towards it. S5 states that he would buy it if it entered the Austrian market; therefore, like S3, he belongs to the early adopters.

The results clearly differed vastly. The hypothesis is not supported at all, seeing that only two out of five participants fall into the category of early adopters. This makes it clear that deafness is not in itself a factor that makes a person more open to new technology and more likely to embrace it.

13.3.4. Fourth hypothesis

Now, the fourth hypothesis is considered:

H4: Deaf people wish for more technology suitable for them.

During the interviews, the subjects were asked about the project called UNI, which was explained in detail in Chapter 8.3. Since the product is not yet finished at the present time, it was interesting to see whether they felt that such a device could be useful to them. S1 is somewhat skeptical of the idea. She is curious whether and how it will work in real life as intended, but she considers it a good idea and a device that could help hearing-impaired people in their everyday lives. S2 is more appreciative of the idea. She states that if it really works well, it is a very good step in the right direction. Still, she is skeptical regarding the functioning of the device. As an example, she mentions signing with two hands: one hand might be needed to hold the tablet, because it is not always possible to put the tablet down on a table or another surface. The question arises whether the device would still detect the right sign. If she signed, for example, the word "baby" or "work" ("Arbeit"), she doubts that the device's sensors would really detect the signs. Additionally, she prefers to have her hands free while signing and does not want to hold the tablet all the time. S2 is unsure whether she would buy the product herself, but might purchase it if others reported positive experiences.

S3 is the most enthusiastic about this new technology. She states she would buy it immediately, because she would no longer need an interpreter. She could go to the doctor or a public office and communicate with hearing people easily. Although she would appreciate this device, she also has some doubts about it. S3 mentions that the facial expressions of a person are also important when signing something, not just the gestures made with the hands. Therefore she is unsure whether the device could really convey all the information behind a sign.

S4 finds the idea of this kind of translation interesting. Until now, she has used the relay center in Austria to make appointments with doctors.

S5 thinks that this project is good and could ease the communication between deaf and hearing people. He states that his smartphone has the ability to convert spoken words into text, which he uses to understand what hearing people are telling him, but he regrets that he does not have the possibility to have his signs translated into text or spoken words. Therefore, UNI would be an interesting approach, and he would also buy it if it were available in Austria.

Going back to the hypothesis and the question of whether they wish for more technology suitable for them, the answers differed a lot. S1 is not too fond of new technology, which might be the reason that she also has no visions of what could be helpful to her in the future. S2 wishes for existing applications and websites to be improved. To give an example, SpreadTheSign should offer some kind of categories indicating to which area a sign belongs (note from the author: there are categories like colours or religion; what she might miss are categories e.g. for handshapes). Also, Ledasila should be available on all browsers and as an application for Apple and Android smartphones.

S3 has a more detailed notion of what she wishes for in the future. She would like a translation via an avatar for something like newspapers, so that an avatar would sign the news for her. This way, she could understand more complicated texts and websites. Even if the avatars only showed static facial expressions, it would be a help for her. Ledasila for all browsers and as an application is also important to her; all five subjects expressed this wish in their interviews. S3 mentions a need for a relay center that offers translations around the clock. Right now, the center is only open until 12 o'clock; no one is available in the afternoon or at night, which would be a problem in an emergency. Additionally, one big wish is the ability to communicate with hearing people via technology. S3 expresses the wish to talk to her mother whenever she wants to, and not to have to be satisfied with sending short text messages.

S4 does not express any wishes for the future, except, like all the other subjects, turning Ledasila into an app. S5 states that the UNI project is something that he would like to have in the future to enable communication between hearing and deaf people. He is not too fond of the idea of using avatars, because he thinks that it is important to see a real person. As an example, the news program "Zeit im Bild" is presented by real people, and therefore he wishes that the news in Austrian Sign Language were also made by real people and not by an avatar.

It cannot be stated outright that the fourth hypothesis is supported. The difficulty with this topic is that deaf people often do not seem to have a clear idea of what could be possible for them. If they are shown a new technology, as they were with UNI, they embrace it and are open to it if it helps them. Additionally, they wish for existing applications to be extended and improved. Of course, this is also an important issue: available applications should not be forgotten but improved. Still, it seems that deaf people need to see the possibility first before they can start imagining what they could do with it.

Issues like a 24-hour relay center and natural communication between deaf and hearing people were raised. Therefore it can be said that the hypothesis is partly supported.

13.3.5. Implications for future research

This chapter aims at identifying gaps in the research that might be filled by later research projects. Although it has nothing to do with technology, it might be interesting to look further into the topic of the decreasing numbers at deaf associations. As one subject stated, the situation differs between Carinthia and Styria. It might be interesting to make an Austria-wide comparison of memberships in deaf associations and to interview various members from different cities to get an idea of what really lies behind the diminishing numbers. Regarding technology, it would be of interest to also ask about other devices and apps. New applications are emerging almost daily, so it is always possible to ask deaf participants about new apps and what they think about them. Although only one of the subjects of this study mentioned that the price played a role in the purchase decision, it might be of interest to compare the prices of "normal" and "deaf" technology, as well as the standard of living of deaf people. To give an example: a normal smoke alarm costs about 30€, while a smoke alarm for deaf people costs over 260€, because it needs an additional radio module and an alert module for the hearing impaired (Stiftung Warentest, 2015). Additionally, further interviews with deaf Austrians, or even deaf people around the world, regarding their view on technology would be interesting and worth following up on. Another field with research potential is the usage of avatars. The results of the interviews showed that some deaf users would welcome avatars, while others prefer real persons and interpreters. A study could determine what deaf users want from an avatar and where they could imagine avatars being used.

14. Conclusion

This thesis aimed to present important aspects of deafness and technology. At first, detailed explanations of all essential terms had to be given. In this case, this was especially important because the topic "Deafness and Technology" has not been the focus of many studies or papers. Of course, there are papers about technology and hearing-impaired people, but they mostly focus on children, not on adults. Also, for a reader who does not know much about deafness and technology, it was essential to provide these definitions.

Regarding websites and applications, more and more of them are becoming suitable for deaf users. This thesis gave some insight into the websites Ledasila and Gebärdenwelt. Gebärdenwelt is an online platform created in Austria that allows deaf people to see daily news in Austrian Sign Language. For deaf Austrians, this is a unique service, developed by the service center ÖGS.barrierefrei and funded by the Federal Ministry of Labour, Social Affairs and Consumer Protection. The website offers deaf people the same opportunities hearing people have: to inform themselves about important topics and news happening worldwide. The goal of this project is to offer accessible news to everyone, educate deaf people, and implement the UN Convention on the Rights of Persons with Disabilities.

Ledasila, the Lexical database for Sign language, is unfortunately not an app yet, but as became clear during the qualitative interviews with the deaf subjects, they all wish for it to be available on all browsers and especially as an application for smartphones. This way, deaf and also hearing users could have access to the biggest lexical database for sign language in Austria anytime, anywhere. The advantage of Ledasila is the possibility to search by different categories, such as region or semantic field, and to get different variations of a word in the various Austrian dialects. Every entry includes a video in which the word is signed in a variation of Austrian Sign Language.

This paper also presented valuable applications for deaf users, including TapTap, SpreadTheSign, SignMedia SMART, and UNI. TapTap was created for Apple's mobile platforms (iPhone, iPad and iPod Touch) and tries to assist deaf smartphone users in a hearing world. The application alerts the user via vibration or a flash when a loud noise occurs near them.

SpreadTheSign was mentioned during every qualitative interview; all participants had it on their phones. This shows that it is really spreading across Austria and the world. SpreadTheSign is an app as well as a website, but the app is clearly preferred by the users, because they can access the database anywhere at any time through the smartphone application. This was named as a huge advantage of SpreadTheSign in contrast to Ledasila. The subjects stated that the fact that the application works everywhere when needed is the best thing about it. It was developed during an international project whose goal was to create an international database for sign languages around the world. Over 25 different sign languages can be accessed through the application, which allows deaf users to learn new vocabulary in other sign languages as well. This way, if they visit another country, they can communicate with deaf people they meet along the way. Additionally, they can learn different written languages as well. Hearing people also use the app to learn new signs in order to be able to communicate with deaf relatives or others. The project is ending in the fall of 2015, but a follow-up project is being planned. This would certainly be a great opportunity to extend the app and add even more words and languages. As became clear in the interviews, this app is popular among all the deaf participants because it is convenient and easy to use.

SignMedia SMART developed out of its predecessor project, SignMedia, which focused on teaching deaf people another language through a multimedia approach. The European partners teamed up to create a glossary of special terms needed in the media sector, targeting deaf media professionals, interpreters, and students of media-related studies. The SignMedia SMART project focused on separating the learning tool from the glossary to make the latter accessible on its own. During this project, a web application was created to make the content available anywhere and anytime (as long as the user has Internet access), which is helpful for deaf experts who need the vocabulary in a studio or outside, for example when filming something. This project and the web application could be really helpful to deaf media experts, but other users can also learn a lot of vocabulary and extend their knowledge further.

The fourth application that was evaluated was UNI by MotionSavvy. UNI is a small and portable mobile device that tracks the movement of the hands in real time and translates the signs into speech. For now, it will only be purchasable in the United States of America, where the company is registered, but the company wishes to extend it worldwide. The subjects in the qualitative interviews embraced this idea, thinking that it would be a useful device for every deaf user. The device includes a visualizer, which mirrors the image of the signer's hands and provides real-time visual feedback. In order to be able to use this device, the person has to purchase the tablet Dell Venue 8 Pro. This might be a drawback, because some deaf users might reject the idea of having to purchase a particular product in order to use UNI. However, plans for Android and iOS versions have already been made. Additionally, the system studies the way you sign in order to improve the translations, and it is accessible everywhere without needing Wi-Fi. The device is now in the last stages of development and will be purchasable by fall 2015. As one subject stated in the interview, this device is exactly what some deaf people wish for: a device that helps deaf people communicate with hearing family, friends, and strangers, and vice versa. In terms of communication between deaf and hearing men and women, this project is certainly the most promising one.

Another big part of this thesis explained how avatars work and why they could be useful. The interviews revealed that not every deaf person wishes for avatars to translate content; some prefer real people or interpreters. Still, avatars could at least give deaf users the opportunity to understand matters better than without any animation at all. Avatars give deaf people the possibility to have a more interactive conversation with others. Whereas normal videos are static and cannot be changed, the avatar adapts to the users and allows them to communicate dynamically. Additionally, an avatar offers the possibility to present information without raising the problem of anonymity that a human signer faces. Furthermore, videos of signers have to be of high quality and are therefore often very expensive, also considering that the signer has to be paid for the work of translating every detail. In addition, every time the content changes, a new video has to be made. Of course, if an avatar is used, the images need to be of high quality as well. However, once a really good avatar has been created, it can be used on different occasions. Even if it is a lot of work, an avatar could also mimic facial expressions, not only the movements of the hands. Although some deaf users are skeptical about using avatars to transmit information, it is certainly preferable to understanding nothing at all.

Also, with technology changing rapidly, in a few years it might be possible to create an avatar that looks and behaves like a real person, giving deaf users an experience as if they were talking to a real person.

In conclusion, it can be said that there are already some good applications and technologies out there for deaf users. Still, there is a lot of room for improvement. Especially the implementation of web accessibility and avatars are topics that need to be addressed in the future. Deaf people have the same right to information as everyone else. They should not be discriminated against because of their hearing disability. Technology is already improving the lives of hearing-impaired people through applications and websites that allow them to gather information on their own. Years ago, this did not seem possible; they had to rely on family and friends in order to stay informed. Now, the Internet offers opportunities to get in touch with people far away, for example through Facebook, and also to acquire information independently. The standard of technology is constantly changing, and it is going to be interesting to see where it is headed, for deaf as well as for hearing people.

15. References

Acs, Zoltan J.; Audretsch, David B. (1991): Innovation and Technological Change: An International Comparison. USA: The University of Michigan Press.

Ajzen, Icek (1991): The theory of planned behavior. Organizational Behavior and Human Decision Processes 50 (2): 179–211. doi:10.1016/0749-5978(91)90020-T.

American Speech-Language-Hearing Association (2011): Type, Degree, and Configuration of Hearing Loss http://www.asha.org/uploadedFiles/AIS-Hearing-Loss-Types-Degree-Configuration.pdf [18.02.2015]

Austen, Sally; McGrath, Melissa (2006): Telemental Health Technology in Deaf and General Mental-Health Services: Access and Use. American Annals of the Deaf, Volume 151, Number 3, Summer 2006, pp. 311-317. doi: 10.1353/aad.2006.0033

Beal-Alvarez, Jennifer; Cannon, Joanna E. (2014): Technology Intervention Research With Deaf and Hard of Hearing Learners: Levels of Evidence. American Annals of the Deaf, Volume 158, Number 5, Winter 2014, pp. 486-505. doi: 10.1353/aad.2014.0002

Boothroyd, Arthur (1987): Technology and Science in the Management of Deafness. American Annals of the Deaf, Volume 132, Number 5, November 1987, pp. 326-329. doi: 10.1353/aad.2012.1595

Brault, Matthew W. (2012): Americans With Disabilities: 2010. Current Population Reports. Washington: U.S. Department of Commerce, Economics and Statistics Administration, U.S. Census Bureau.

Braun, Ernest (1998): Technology in Context: Technology Assessment for Managers. London: Routledge.

Bravin, Philip W. (1981): Utilization of Technology in the Education of the Deaf-Blind. American Annals of the Deaf, Volume 126, Number 6, September 1981, pp. 707-714. doi: 10.1353/aad.2012.1306

Brentari, Diane (ed.) (2010): Sign Languages. Cambridge: Cambridge University Press.

Brophy, Peter; Craven, Jenny (2007): Web Accessibility. Library Trends, Volume 55, Number 4, Spring 2007, pp. 950-972. doi: 10.1353/lib.2007.0029

Bundesministerium für Arbeit, Soziales und Konsumentenschutz (2009): Behindertenbericht 2008, Bericht der Bundesregierung über die Lage von Menschen mit Behinderungen in Österreich 2008. Wien: Büro Service Stelle A des BMASK.

Carroll, James (1982): The Instructional Technologist and Hearing-Impaired Learner: Determining a Need for Professional Development Support. American Annals of the Deaf, Volume 127, Number 3, June 1982, pp. 365-368. doi: 10.1353/aad.2012.1025

Chuttur, Mohammad Y. (2009): Overview of the Technology Acceptance Model: Origins, Developments and Future Directions. Indiana University, USA. Sprouts: Working Papers on Information Systems, 9(37). http://sprouts.aisnet.org/9-37

Clark, J. G. (1981): Uses and abuses of hearing loss classification. Asha, 23, 493-500.

Clymer, E. William; McKee, Barbara G. (1997): The Promise of the World Wide Web and Other Telecommunication Technologies within Deaf Education. American Annals of the Deaf, Volume 142, Number 2, April 1997, pp. 104-106. doi: 10.1353/aad.2012.0700

Conversor Ltd. (2015) http://www.conversorproducts.com/ [04.02.2015]

Cox, Stephen; Lincoln, Mike; Tryggvason, Judy (n.d.): The TESSA Project. School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK.

Darlington, Keith (2005): Effective Website Development: Tools and Techniques. UK: Addison-Wesley Educational Publishers Inc.

Davis, Fred D. (1989): Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13 (3): 319–340, doi:10.2307/249008

Davis, Lennard J. (1995): Enforcing Normalcy: Disability, Deafness, and the Body. London: Verso.

Deaf Action Committee on SignWriting (n.d.): What is SignWriting? http://www.signwriting.org/about/ [23.07.2015]

Debevc, Matjaž; Kosec, Primož; Holzinger, Andreas (2010): Improving multimodal web accessibility for deaf people: sign language interpreter module. Multimed Tools Appl (2011) 54:181–199. Springer Science+Business Media. DOI 10.1007/s11042-010-0529-8

Dictionary.com (2015) http://dictionary.reference.com/browse/technology [19.02.2015]

Diehl, Sandra; Karmasin, Matthias (2013): Introduction. In: Diehl, Sandra (ed.): Media and convergence management, Berlin (u.a.): Springer Verlag

Dwyer, Tim (2010): Media Convergence. Berkshire: Open University Press.

Encyclopædia Britannica Inc. (2015): Encyclopædia Britannica Online, Entry “Media Convergence” http://www.britannica.com/EBchecked/topic/1425043/media-convergence [19.02.2015]

European Sign Language Center (2012) http://www.SpreadTheSign.com/at/ [04.02.2015]

Gallaudet University (2014): History of Gallaudet University. Washington, DC. http://www.gallaudet.edu/history.html [26.05.2015]

Gansinger, Luzia (2009): Multimediale Übersetzungen von schriftlichen Tests in Gebärdensprache. Das Zeichen, Zeitschrift für Sprache und Kultur Gehörloser, Volume 23, Number 81, pp. 116-126.

Gansinger, Luzia & Hilzensauer, Marlene (to appear in 2015): Teaching ESP to deaf sign language users with the SignMedia learning resources. In: Hofer, Christian & Unger-Ullmann, Daniela (eds.): Sprachenlernen mit Erwachsenen. Fachdidaktische Arbeitsergebnisse.

Gathegi, John N. (2013): Technology, Convergence, and the Internet of Things. In: Diehl, Sandra (ed.): Media and convergence management, Berlin (u.a.): Springer Verlag

Gentry, Mary Marshal; Chinn, Kathleen M.; Moulton, Robert D. (2004): Effectiveness of Multimedia Reading Materials When Used With Children Who Are Deaf. American Annals of the Deaf, Volume 149, Number 5, Winter 2004/2005, pp. 394-403. doi: 10.1353/aad.2005.0012

Glauert, John; Kennaway, Richard; Theobald, Barry-John; Elliott, Ralph (2004): Virtual Human Signing as Expressive Animation. Norwich: University of East Anglia, School of Computing Sciences.

Gonsalves, Chris; Pichora-Fuller, Margaret Kathleen (2008): The Effect of Hearing Loss and Hearing Aids on the Use of Information and Communication Technologies by Community-Living Older Adults. Canadian Journal on Aging, Volume 27, Number 2, Summer 2008, pp. 145-157. doi: 10.1353/cja.0.0022

Hacklin, Fredrik (2007): Management of Convergence in Innovation: Strategies and Capabilities for Value Creation Beyond Blurring Industry Boundaries. Zurich: Springer Science & Business Media.

Halfon, Neal; Houtrow, Amy; Larson, Kandyce; Newachek, Paul W. (2012): The Changing Landscape of Disability in Childhood. The Future of Children, Volume 22, Number 1, Spring 2012, pp. 13-42.

Han, Jin K.; Woong Chung, She; Seok Sohn, Yong (2009): When Do Consumers Prefer Converged Products to Dedicated Products?. Journal Of Marketing, Volume 73, Number 4, pp. 97-108. doi: 10.1509/jmkg.73.4.97

Hanke, Thomas (2004): HamNoSys - representing sign language data in language resources and language processing contexts. In: Streiter, Oliver; Vettori, Chiara (eds): LREC 2004, Workshop proceedings: Representation and processing of sign languages. Paris: ELRA, 2004, pp. 1-6.

Hardonk, Stefan; Daniel, Sarah; Desnerck, Greetje; Loots, Gerrit; Van Hove, Geert; Van Kerschaver, Erwin; Sigurjónsdóttir, Hanna Björg; Vanroelen, Christophe; Louckx, Fred (2011): Deaf Parents and Pediatric Cochlear Implantation: An Exploration of the Decision-Making Process. American Annals of the Deaf, Volume 156, Number 3, Summer 2011, pp. 290-304. doi: 10.1353/aad.2011.0027

Hartl, Jakob; Unger, Martin (2014): Abschätzung der Bedarfslage an ÖGS-DolmetscherInnen in Primär-, Sekundär- und Tertiärbildung sowie in Bereichen des täglichen Lebens. IHS Wien: Projektbericht/Studie im Auftrag der Bundesministerien für Wissenschaft, Forschung und Wirtschaft, Bildung und Frauen, Arbeit, Soziales und Konsumentenschutz, September 2014.

Hearing Loss Association of North Carolina, a State Association of HLA of America (2015): Telecoil http://www.nchearingloss.org/telecoil.htm?fromncshhh [04.02.2015]

Hersh, Marion A.; Johnson, Michael A. (2003): Assistive Technology for the Hearing-impaired, Deaf and Deafblind, London: Springer Verlag.

Hilzensauer, Marlene (2006): Information Technology for Deaf People. In Ichalkaranje, N; Ichalkaranje, A; Jain, L.C. (eds.): Intelligent Paradigms for Assistive and Preventive Healthcare. Studies in Computational Intelligence, Volume 19, New York: Springer-Verlag Berlin Heidelberg.

Hintermaier, Manfred (2000): Hearing Impairment, Social Networks, and Coping: The Need for Families with Hearing-Impaired Children to Relate to Other Parents and to Hearing-Impaired Adults. American Annals of the Deaf, Volume 154, Number 1, March 2000, pp. 41-53. doi: 10.1353/aad.2012.0244

Hollauf, Marina (2014): Spread the Sign, Online-Lexikon für Gebärdensprachen. GebärdenSache, Number 3 / 2014, pp. 20-21.

Hope, Sam (n.d.): SignMedia SMART http://Signmediasmart.aau.at/en/home [29.04.2015]

Individuals with Disabilities Education Improvement Act (2004): Pub. L. No. P.L. 108–446 § 602, 20 USC 1401, 300.5.

Jarmer, Helene (2011): Schreien nützt nichts, mittendrin statt still dabei. Munich, Südwest Verlag.

Joo, Jihyuk; Sang, Yoonmo (2013): Exploring Koreans' smartphone usage: An integrated model of the technology acceptance model and uses and gratifications theory. Computers in Human Behavior, Volume 29, pp. 2512-2518.

Kennaway, Richard (2003): Experience with and requirements for a gesture description language for synthetic animation. Norwich: University of East Anglia, School of Information Systems.

Kipp, Michael; Heloir, Alexis; Nguyen, Quan (n.d.): Sign Language Avatars: Animation and Comprehensibility. Saarbrücken, Germany: DFKI Embodied Agents Research Group.

Krammer, Klaudia; Bergmeister, Elisabeth; Dotter, Franz; Hilzensauer, Marlene; Okorn, Ingeborg; Orter, Reinhold; Skant, Andrea (2001): The Klagenfurt database for sign language lexicons. In Wilbur, Ronnie B. (ed.): Sign Language & Linguistics 4:1/2. Amsterdam: John Benjamins Publishing Company.

Krammer, Klaudia; Bergmeister, Elisabeth; Bornholdt, Silke; Dotter, Franz; Hausch, Christian; Hilzensauer, Marlene; Pirker, Anita; Skant, Andrea; Unterberger, Natalie (2009): Ledasila – eine kostenlose Online-Datenbank für Gebärdensprachen. Das Zeichen, Zeitschrift für Sprache und Kultur Gehörloser, Volume 23, Number 81, pp. 106-115.

Kurbanoglu, Serap; Al, Umut; Erdogan, Phyllis Lepon; Tonta, Yasar; Ucak, Nazan (2010): Technological Convergence and Social Networks in Information Management: Second International Symposium on Information Management in a Changing World, IMCW 2010. Ankara, Turkey: Springer Science & Business Media

Ladd, Paddy (2003): Understanding Deaf Culture: In Search of Deafhood. Clevedon et al: Multilingual Matters Ltd.

Ladd, Paddy; Lane, Harlan (2014): “Deaf Ethnicity” und “Deafhood” – Klärung zweier Konzepte und ihrer Beziehung zueinander. Das Zeichen, Zeitschrift für Sprache und Kultur Gehörloser, Volume 28, Number 96, pp. 42-53.

Legris, Paul; Ingham, John; Collerette, Pierre (2003): Why do people use information technology? A critical review of the technology acceptance model. Information & Management, Volume 40, pp. 191-204.

Leitner, Barbara (2007): Menschen mit Beeinträchtigungen, Ergebnisse der Mikrozensus-Zusatzfragen im 4. Quartal 2007. Statistische Nachrichten 12/2008.

Lewis, Finley R. (2007): Focus on Nonverbal Communication Research. New York: Nova Science Publishers.

LinkedIn Corporation (2015): SIGNWRITING SYMPOSIUM PRESENTATION 32: Relevance of SignWriting as a Way of Transcribing the Phonology of Sign Languages by Roberto Costa and Madson Barreto http://de.slideshare.net/SignWriting/signwriting-symposium-presentation-32-relevance-of-signwriting-for-phonologysignlanguagesmadsonbarretorobertocosta [18.02.2015]

Lipton, Douglas S.; Goldstein, Marjorie F.; Fahnbulleh, Wellington F.; Gertz, Eugenie N. (1996): The Interactive Video-Questionnaire: A New Technology for Interviewing Deaf Persons. American Annals of the Deaf, Volume 141, Number 5, December 1996, pp. 370-378. doi: 10.1353/aad.2012.0228

Lu, Pengfei; Huenerfauth, Matt (2013): Collecting and evaluating the CUNY ASL corpus for research on American Sign Language animation. Computer Speech and Language, Volume 28.

Luft, Pamela; Bonello, Mary; Zirzow, Nichole K. (2009): Technology Skills Assessment for Deaf and Hard of Hearing Students in Secondary School. American Annals of the Deaf, Volume 154, Number 4, Fall 2009, pp. 389-399. doi: 10.1353/aad.0.0106

Maher, Jane (1996): Seeing language in sign: The work of William C. Stokoe. Washington DC: Gallaudet University Press.

Maiorana-Basas, Michella; Pagliaro, Claudia M. (2014): Technology Use Among Adults Who Are Deaf and Hard of Hearing: A National Survey. Journal of Deaf Studies and Deaf Education, Oxford University Press. doi:10.1093/deafed/enu005

Merriam-Webster, Incorporated (2015): definition of markup language http://www.merriam-webster.com/dictionary/markup%20language [12.05.2015]

Miesenberger, Klaus; Karshmer, Arthur; Penaz, Petr; Zagler, Wolfgang (2012): Computers Helping People with Special Needs, 13th International Conference ICCHP 2012, New York: Springer-Verlag Berlin Heidelberg.

Miesenberger, Klaus; Klaus, Joachim; Zagler, Wolfgang; Karshmer, Arthur (2010): Computers Helping People with Special Needs, 12th International Conference ICCHP 2010, New York: Springer-Verlag Berlin Heidelberg.

Miesenberger, Klaus; Klaus, Joachim; Zagler, Wolfgang; Karshmer, Arthur (2004): Computers Helping People with Special Needs, 9th International Conference ICCHP 2004, New York: Springer-Verlag Berlin Heidelberg.

Monikowski, Christine (1997): Electronic Media: Broadening Deaf Students’ Access to Knowledge. American Annals of the Deaf, Volume 142, Number 2, April 1997, pp. 101-104. doi: 10.1353/aad.2012.0680

MotionSavvy (n.d.) http://www.motionsavvy.com/ [04.02.2015]

Nadel, Brian (2010): 14 tech tools that enhance computing for the disabled. http://www.computerworld.com/article/2522955/computer-hardware/14-tech-tools-that-enhance-computing-for-the-disabled [11.11.2014]

National Association of the Deaf (n.d.) http://nad.org/issues/american-sign-language/community-and-culture-faq [11.11.2014]

Neelameghan, Arashanipalai; Chester, Greg (2007): Knowledge management in relation to indigenous and marginalized communities in the digital era. Information Studies, 13 (2), 73 – 106.

Oxford University Press (2015): Oxford Dictionaries, Entry “Avatar” http://www.oxforddictionaries.com/de/definition/englisch_usa/avatar [30.03.2015]

Oxford University Press (2015): Oxford Dictionaries, Entry “gadget” http://www.oxforddictionaries.com/de/definition/englisch_usa/gadget [21.05.2015]

Padden, Carol A. (1990): Deaf in America, Voices from a culture. USA: Harvard University Press

Passig, David; Eden, Sigal (2000): Improving Flexible Thinking in Deaf and Hard of Hearing Children with Virtual Reality Technology. American Annals of the Deaf, Volume 145, Number 3, July 2000, pp. 286-291. doi: 10.1353/aad.2012.0102

Paul, Peter V. (2013): The Digital Generation: The Good, the Bad, and the Ugly. American Annals of the Deaf, Volume 157, Number 5, Winter 2013, pp. 407-411. doi: 10.1353/aad.2013.0000

Pillai, Patrick (1999): Using Technology to Educate Deaf and Hard of Hearing Children in Rural Alaskan General Education Settings. American Annals of the Deaf, Volume 144, Number 5, December 1999, pp. 373-378. doi: 10.1353/aad.2012.0145

Porta, Jordi; López-Colino, Fernando; Tejedor, Javier; Colás, José (2013): A rule-based translation from written Spanish to Spanish Sign Language glosses. Computer Speech and Language, Volume 28.

Powell, Gavin (2007): Beginning XML Databases. Indianapolis: Wiley Publishing, Inc.

Power, Desmond J.; Power, Mary R.; Rehling, Bernd (2007): German Deaf People Using Text Communication Message Service, TTY, Relay Services, Fax, and E-Mail. American Annals of the Deaf, Volume 152, Number 3, Summer 2007, pp. 291-301. doi: 10.1353/aad.2007.0030

Quigley, Stephen P.; Paul, Peter V. (1984): Language and Deafness. San Diego: College-Hill Press.

Rabbitt, Sarah M.; Kazdin, Alan E.; Scassellati, Brian (2014): Integrating socially assistive robotics into mental healthcare interventions: Applications and recommendations for expanded use. Clinical Psychology Review, Volume 35, pp. 35-46.

RelayService (n.d.): Was ist das RelayService? http://www.relayservice.at/index.html [20.07.2015]

Roberson, Len (2001): Integration of Computers and Related Technologies into Deaf Education Teacher Preparation Programs. American Annals of the Deaf, Volume 146, Number 1, March 2001, pp. 60-66. doi: 10.1353/aad.2012.0061

Rogers, Everett (2003): Diffusion of Innovations, 5th Edition. New York: Simon & Schuster Inc. ISBN 978-0-7432-5823-4.

Schlesinger, Izchak. M.; Namir, Lila (1987): Sign Language of the Deaf: Psychological, Linguistic and Sociological Perspectives. New York: Academic Press, Inc.

Servicecenter ÖGS.barrierefrei (n.d.): Gebärdenwelt.tv http://www.gebaerdenwelt.tv/ [20.05.2015]

Shanker, Stuart (2000): I See a Voice: Deafness, Language, and the Senses; A Philosophical History. Sign Language Studies, Number 1, Fall 2000, pp. 93-102.

SignTime GmbH (n.d.): SiMAX – Avatar für Gebärdensprache http://www.signtime.tv/simax/ [23.07.2015]

Singh, Rajendra; Raja, Siddhartha (2010): Convergence in Information and Communication Technology: Strategic and Regulatory Considerations. Washington D.C.: World Bank Publications

Skant, Andrea; Dotter, Franz; Bergmeister, Elisabeth; Hilzensauer, Marlene; Hobel, Manuela; Krammer, Klaudia; Okorn, Ingeborg; Orasche, Christian; Orter, Reinhold; Unterberger, Natalie (2002): Grammatik der Österreichischen GebärdenSprache; Veröffentlichungen des Forschungszentrums für Gebärdensprache und Hörgeschädigtenkommunikation, Band 4. Klagenfurt.

Snelson, Chareen (2015): Integrating Visual and Media Literacy in YouTube Video Projects. In: Baylen, Danilo M.; D'Alba, Adriana (eds.): Essentials of Teaching and Integrating Visual and Media Literacy, Visualizing Learning. Switzerland: Springer International Publishing, pp. 165-184.

Snipview (n.d.): Hamburg Notation System http://www.snipview.com/q/Hamburg%20Notation%20System [18.02.2015]

Statistisches Bundesamt (2015): Behinderte Menschen https://www.destatis.de/DE/ZahlenFakten/GesellschaftStaat/Gesundheit/Behinderte/SchwerbehinderteMenschen.html [08.04.2015]

Stemper, Jordan (2015): MotionSavvy UNI: 1st Sign Language to voice system https://www.indiegogo.com/projects/motionsavvy-uni-1st-sign-language-to-voice-system [04.02.2015]

Stiftung Warentest (2015): Rauchmelder für Hörgeschädigte: Alarm mit Blitz und Rüttelkissen https://www.test.de/Rauchmelder-fuer-Hoergeschaedigte-Alarm-mit-Blitz-und-Ruettelkissen-4651190-0/ [23.07.2015]

Stokoe, William C. (2001): Language in Hand: Why Sign Came Before Speech. Washington DC: Gallaudet University Press.

Stuckless, E. Ross; Carroll, James K. (1994): National Priorities on Educational Applications of Technology for Deaf and Hard of Hearing Students. American Annals of the Deaf, Volume 139, Special Issues 1994, pp. 62-63. doi: 10.1353/aad.2012.0950

Terry, Deborah J.; Gallois, Cynthia; McCamish, Malcolm (1993): The Theory of Reasoned Action: Its Application to Aids-preventive Behavior. Oxford: Pergamon Press Ltd.

Tryggvason, Judy (n.d.): Introduction to eSIGN http://www.visicast.cmp.uea.ac.uk/eSIGN/Introduction.htm [12.05.2015]

Uhlig, Anne C. (2012): Ethnographie der Gehörlosen, Kultur – Kommunikation – Gemeinschaft. Bielefeld: transcript Verlag.

Un Jan, Alberto; Contreras, Vilma (2010): Technology acceptance model for the use of information technology in universities. Computers in Human Behavior, Volume 27, pp. 845-851.

United States Census Bureau, U.S. Department of Commerce; Economics and Statistics Administration (2010): How Common are Specific Disabilities? http://www.census.gov/content/dam/Census/newsroom/facts-for-features/2013/disability_how-common-7.jpg [08.04.2015]

Alpen-Adria-Universität Klagenfurt (2015): Ledasila http://Ledasila.uni-klu.ac.at/TPM/public/public_main.asp?sid= [19.02.2015]

Valli, Clayton; Lucas, Ceil (2000): Linguistics of American Sign Language: An Introduction. Washington DC: Gallaudet University Press.

Venkatesh, Viswanath; Morris, Michael G.; Davis, Gordon B.; Davis, Fred D. (2003): User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, Volume 27, Number 3, pp. 425-478.

Verlinden, Margriet; Zwitserlood, Inge; Frowein, Han (2005): Multimedia with Animated Sign Language for Deaf Learners. Sint-Michielsgestel. ED-Media 2005.

Vondracek (2010): TapTap http://taptap.biz/ [04.02.2015]

Volterra, Virginia; Pace, Claudia; Pennacchi, Barbara; Corazza, Serena (1995): Advanced Learning Technology for a Bilingual Education of Deaf Children. American Annals of the Deaf, Volume 140, Number 5, December 1995, pp. 402-409. doi: 10.1353/aad.2012.0310

Wachsmuth, Ipke; Sowa, Timo (2002): Gesture and Sign Language in Human-Computer Interaction. International Gesture Workshop, GW 2001, London, UK, April 2001. Berlin: Springer-Verlag.

Wheatley, Mark; Pabsch, Annika (2010): Sign Language Legislation in the European Union. Brussels: European Union of the Deaf.

Wise, Paul H. (2012): Emerging Technologies and Their Impact on Disability. The Future of Children, Volume 22, Number 1, Spring 2012, pp. 169-191. doi: 10.1353/foc.2012.0002

ZBand (2014): About the ZBand http://www.zband.biz/about-zband.php [08.02.2015]

Appendix

A1: Questionnaire

1) How old are you?
2) Do you have a smartphone?
   a. Approximately when did you buy your first smartphone?
   b. Do you use the smartphone often?
   c. What do you use it for?
3) Since owning the smartphone, do you have more contact with other deaf people?
   a. What kind of contact: direct, or indirect via technologies, for example Skype, SMS, e-mail?
   b. Are there still many meetings among deaf people, or do you now mainly talk via the smartphone / the Internet?
4) Do you think new technologies such as the smartphone and the computer help you in your daily life?
   a. Please give an example of something technologies help you with.
   b. Do you also have other technologies at home or at work that support you? (Examples: light alarm clock, vibrating alarm…)
   c. What do you think of these technologies? What experiences have you had with them?
   d. What plays a big role for you when you buy a new technology? (Usefulness, ease of use)
5) Do you know some websites that provide translation into sign language? Can you give a few examples?
   a. Do you use such sites to get information or to watch news?
   b. Where do you get most of your information? (Internet, TV, friends…)
6) What do you think of UNI?
   a. Good or bad?
   b. What do you find good about it?
   c. At the moment UNI is only available in America. Will you buy it once UNI is also available here?

7) You have already tried the app SpreadTheSign. How do you like it?
   a. Do you find it good? What do you find good about it?
   b. For whom is this app useful?
   c. There are many variants in sign language. Do you think all deaf people in Austria understand the Austrian signs?
8) Ledasila: Do you use this website as well?
   a. What is good about it?
   b. What is bad about it?
   c. For whom is it useful?
   d. At present, Ledasila only works with Internet Explorer. Do you think that is sufficient? Should Ledasila also be offered for other browsers and as an app?
9) Do you know any other websites or apps for deaf people?
10) Which apps or websites would you wish for?

Remark of the author: The interviews were conducted in German / Austrian Sign Language. The simplest mode of transcription was chosen because factors such as pauses and hesitation could not be taken into account: the interpretation between German and Austrian Sign Language inevitably produces longer pauses in between.

A2: Interview with Subject 1 (S1)

I: So, first of all the question: How old are you? S1: I am 51 years old.

I: Do you have a smartphone? - S1: I have an iPhone.

I: So not one with buttons, but with a touchscreen. S1: Well, you have to keep up with the times and learn new things.

I: Yes, exactly! And when did you buy your first smartphone? S1: That was last August; that was when I got my first smartphone, engaged with it, and slowly learned how to use it.

I: And how do you like it? Are you enthusiastic about it? S1: It is much better, yes! And a smartphone offers many visual things, similar to a computer.

I: And do you use it often? S1: Actually relatively little; I have little time for it.

I: But it is practical for communicating with friends or family? S1: Yes, it is very practical, that is true. You can also make video calls with a smartphone.

I: So that is what you actually use it for most often? S1: I do that most with my son. For someone who does not have an iPhone, I can also record a video, save it and send it on. So that is another possibility.

I: And do you also have a lot of contact with other deaf people via the smartphone, not just with your son? S1: Hardly, actually.

I: So with others, with friends? - S1: I do not really like that.

I: I see, OK. So with friends from the deaf club you tend to have direct contact. S1: Yes, I prefer to meet people in person; that is enough for me.

I: And do you have the feeling that just as many people still come to the club meetings, or are there fewer and fewer? S1: It is slowly, gradually becoming fewer, yes. About one or two, maybe three, people fewer per year. So it is slowly declining. There are hardly any new young members.

I: But you do not think that has anything to do with the new technologies, that people perhaps prefer to chat via Skype or video? S1: I hardly use that so far. Let me think. My sister perhaps uses Skype, but I hardly use it. My husband, as a board member of the sports club, does use it. No, meetings still take place anyway; Skype cannot replace them.

I: And do you think that technologies like the smartphone or the computer help you in your daily life? - S1: Yes!

I: Perhaps you can think of an example of the influence the computer has on your work or your life, at home or at work? S1: Yes, it is devilish; you easily get stuck on it. You keep seeing something new and then you get hooked. That part I do not find so good.

I: And do you have any technology at home that supports you, such as a light alarm clock or a special doorbell? S1: I do not have a doorbell system; that is too expensive, I have no money for that. But I do use an alarm clock with a special technique so that I can wake up. And I do not have a fire alarm that alerts me visually either; that would have to be installed together with the doorbell system, and that is also too expensive for me.

I: And anything else at home that helps you? S1: I only have a connection to my mother-in-law, who lives on the ground floor: if she has an emergency, a light flashes at my place so that I can go down and check on her. But the flashing is not particularly strong; during the day I can hardly see it, and I would really have to be very close by. In the evening it is easier to see.

I: What matters to you when you buy something new, for example the phone you have now? S1: There is nothing in particular; what is important is that communication works well, but otherwise I have no special preferences. What really matters to me is meeting in person; new media cannot replace that.

I: And how did you come by your new phone? Did someone recommend it to you? S1: My husband kept pushing me to buy it.

I: But now you are happy with it? S1: Yes!

I: OK, then: Do you know a few websites that provide translation into sign language? S1: No, I do not know any.

I: Do you watch ZIB 20, for example? That news programme is translated into sign language. S1: I do not know about that.

I: And where do you get most of your information, for example news? Television or the Internet, or from friends? S1: Most of it I learn from colleagues and friends.

I: I sent you an e-mail with a link last week. S1: Yes!

I: Briefly again, to explain: this is the UNI project. A company in America has invented a new device that is supposed to help deaf people communicate more easily with hearing people, and vice versa. S1: How is that supposed to work?

I: There is a tablet, and in the tablet there is a camera and a microphone, and with these they are supposed to be able to communicate with each other. The camera ... S1: But I do not understand what someone is saying. How is that supposed to work?

I: It works like this: when someone speaks, a hearing person, the microphone records it and translates it into written text, which the deaf person can then read. And the other way round: when the deaf person signs something, the camera records it and converts the gestures into text. S1: This link can manage that, really converting it into sign language, or rather ..?

I: The technology is already quite advanced; in America it works very well, and they plan to release it worldwide. This autumn it can be bought in America. S1: I am really very curious. And this tablet, do you really have to buy the same brand that this company offers?

I: It is one specific tablet, yes. And do you find the idea good or bad, provided it really works the way it is supposed to? S1: Yes, I think it is a good idea. Certainly a good thing for deaf people.

I: And would you buy it too, if it were available here? S1: No, I do not think it is for me. I did get a tablet, I won it, but I would not buy one of my own. But I am sure my husband will get something like that, and then I can use it as well.

I: OK, and of course you also know the app SpreadTheSign? How do you like it; do you find it good? S1: Yes, I know it. It is a good project. But I have seen that hearing people use these ÖGS signs, which often do not correspond to the Carinthian dialect, and then use different signs from the ones we have here. It would be good if different variants or dialects from Austria, from Vienna or Salzburg, were also available. So it is not quite ideal that a standard variant is used there. But never mind.

I: So especially for the hearing students, right? They also use this app to look up words, and then it is actually not the right variant. S1: Yes, then it does not fit.

I: And now a few questions about Ledasila. Do you use Ledasila as well? S1: Yes, I use it.

I: And what for, for example? S1: It really contains all the dialects; I can see a Viennese dialect there, for example. That is really very good, and for me it is also very important that I can see all the dialects of Austria.

I: Is there anything about it that is not so good? S1: Yes, one could renew some videos, for example. I saw a video where the handshape was not quite correct; I would replace that. One would really use another person who performs the correct handshape, so that it can be seen very precisely and hearing people do not copy it incorrectly when it is hard to see.

I: At the moment Ledasila only works properly with Internet Explorer. Do you think it is important that it works in all browsers? S1: Yes, that is a pity. It would be a good thing if other programs could use it too. It should also be made usable for the iPhone; unfortunately it is not available there, and you cannot use it on a tablet either, so really only in Internet Explorer. Firefox or Chrome, it would be a good thing if it were usable there.

I: Do you know other apps for deaf people that you perhaps also use? S1: Yes, there are some: SpreadTheSign. Many people are interested in particular signs and are pleased when they can also see the signs in this app; those are the people I am in contact with who like to look at it.

I: Is there anything you would wish for the future, for apps or websites? S1: (...) Please repeat the question.

I: Well, for example, now there are SpreadTheSign or SignMedia SMART and Ledasila; can you think of anything you would like to have, something you could look at in the future with an app or a website, or something that would help you? S1: You mean for my everyday life?

I: Yes, exactly. S1: Well, if I were to travel somewhere, for example? (....) No idea, nothing comes to mind.

A3: Interview with Subject 2 (S2)

I: First the question: how old are you? S2: I am 30 years old.

I: And do you have a smartphone? S2: Yes, I have one.

I: And approximately when did you buy your first smartphone? S2: Two years ago. So in 2013.

I: Did someone recommend a smartphone to you, or how did you come to buy one? S2: I heard about it from deaf friends.

I: Do you use the smartphone often? S2: Yes, of course, I use it daily.

I: And what for, for example? S2: Very important for me is keeping in contact with my boyfriend in Germany, and I do that mainly via WhatsApp. That is really an advantage, because I can use it every day, since we cannot see each other every day. So that is really an advantage the smartphone gives me.

I: Since you have owned this smartphone, do you have more contact with deaf people? S2: That is a good question. Basically it balances out; it always depends on the situation. Today, with WhatsApp, I really have the possibility of exchanging information faster, simply because it is free.

I: So for a short piece of information, rather via the smartphone, and for a longer conversation, rather face to face. S2: Yes, exactly, that is right. Face to face it is simply clearer, because then I can converse in sign language. There is also the possibility of sending a video, or communicating directly via video, but that does not always run smoothly; the video often freezes, and when I meet someone in person, it is simply clear and better for me.

I: Are there still many meetings, for example at the deaf club; do you go there regularly? S2: I visit the club once a month, because I am also active on the advisory board, so I do come to the club regularly.

I: Do you have the feeling that just as many deaf people still attend the club, or is it slowly decreasing? S2: Do you mean in connection with the new technologies?

I: Yes. S2: I could not really say; it does not have to be that way. It always depends on the situation. I do not believe the new technologies are the reason; rather, young deaf people often have a cochlear implant, attend mainstream schools and no longer visit the deaf club.

I: Do you think that new technologies like the smartphone and the computer help you in your daily life? S2: Yes, I think so.

I: Can you give an example of how that helps you? S2: For example, when there are new words that I do not know yet. Of course you can look them up in a dictionary, but that is always more laborious, and on the Internet I really have the information quickly at my disposal. And when I communicate with deaf people in English, for example, I can quickly have it translated by a translation engine; that is convenient for me.

I: And do you have other technologies, at home for example, like a light alarm clock or a vibrating alarm, or similar? S2: Yes, I do. I have a visual alarm for the doorbell, for example, and for the alarm clock, and I also use a laptop at home.

I: And what do you think of these technologies? What experiences have you had with them; do they always work? S2: Do you mean in general, or a smartphone or tablet?

I: For example the visual alarm at home, or the light alarm clock. S2: Yes, good experiences, actually.

I: What plays a role for you when you buy a new technology, for example the smartphone; what is important for you? S2: What is important is that it works well and that it can be used easily; it always depends on what it is.

I: Do you know some websites that provide translation into sign language? S2: That provide a translation (...) not really.

I: For example a website that uses an avatar? S2: I have heard that such a thing exists, but I have never tried it myself.

I: Where do you get most of your information, for example news? Is it rather from the Internet, from television, from friends? S2: That depends on the situation. I can say that I get a great deal from the Internet; I also learn a lot from my colleagues here, and from my parents and friends. But it really always depends on the situation.

I: Last week I sent you a link to the UNI project by e-mail; did you look at the link? S2: Yes.

I: Briefly again, to explain: in America a company has invented a new device that is supposed to help deaf people communicate more easily with hearing people, and vice versa. A tablet with a particular camera and a microphone is used for this, and that is supposed to improve communication. S2: Yes, right, I saw that.

I: And what do you think of this project? S2: It is a good idea, a good step, actually a great step. You have to imagine: if I were to use it myself, there might also be some difficulties; I do not know exactly how this technology works, what it looks like. If I sign with two hands, for example, but have to hold a tablet, I can only sign with one hand. Whether that is then translated correctly? For the signs WORK or BABY, for example, I need both hands. If I am holding the tablet and perform such a sign with only one hand, I do not know whether the tablet will understand it. And holding this tablet is partly not so comfortable for me when I want to sign, because when I sign with both hands and have my hands free, it is simply more comfortable for me than holding the tablet.

I: So if one had both hands free it would actually be a good idea, but if you have to hold it with one hand it is rather awkward. S2: Yes, exactly; that is how I imagine it would be for me.

I: And would you buy it if it were also available here? S2: Hmm, that is a good question. If you are satisfied with the smartphone you own, I ask myself whether it is necessary to buy an additional tablet that offers this service. Of course, if many deaf people used it and reported positively about it, then perhaps I would buy it too, but at the moment I cannot really say.

I: And you also know the app SpreadTheSign? S2: Yes, of course I know it; I work on it, after all.

I: How do you like the app? What do you find good about it? S2: In my opinion it is a good app; if you are out and about a lot, you can always make use of it.

I: And for whom is this app useful? Do you think it is more for deaf people, or for students, or for people who want to learn sign language? - S2: For everyone; everyone can use this app.

I: There are also many variants in sign language. Do you think all deaf people will then understand the signs on SpreadTheSign, since it is in part not a variant but one general form for Austria? S2: That is also a good question. I do not believe that everyone can understand it. In part it is understood; it depends on whether people are aware that there is a standard and that there are variants, but it also really depends on the people watching. That is the thing with the videos you watch. But when you meet someone in person who uses a different variant, it is never a problem. In this app you cannot see which region a sign belongs to, whether Carinthia, Styria or wherever. You only see an Austrian standard there. But perhaps hearing people then think that this is the only sign used in Austria.

I: And in contrast to that there is Ledasila, where these variants do exist. S2: Yes, that is true.

I: And what do you find good about Ledasila? S2: Ledasila can only be used on the Internet, but I find it very positive, because it contains a rich vocabulary; there are very, very many signs and you can make comparisons between the different variants, which is a great thing. When you look at the videos, there are two options, a large and a small video, and I think only the large one is still needed. I do not think the small video is needed, but perhaps that could also be technically unified in the future.

I: At the moment it only works properly with Internet Explorer. Do you think it is important that it also works in other browsers and, for example, on the iPhone or on other smartphones? S2: Both would be very good. On the Internet, for example, Firefox and Chrome would not be bad, and it would also be a good thing for Apple users; there is really a wide range of ways it could be offered on the Internet. And it would really be optimal if it were also available as an app. That would be very useful, because it is simply the technology that is used everywhere today, so it would be highly advisable if it also existed as an app.

I: Do you know other apps, or do you use other apps for deaf people? S2: Yes, there are two that I use. I have already given this some thought. SignMe, where you can see signs as drawn animations. And the second belongs to the area of health, "IsignIT", which deals with the topic of health. So I use these two apps, and the third is SpreadTheSign.

I: And do you wish for any apps or websites for the future that could help you further? S2: I have been thinking about it for the last few days, but nothing really occurred to me. I have only one wish, which I mentioned before: Ledasila should also be available as an app. And for SpreadTheSign, which I find very good, what is perhaps still missing is an indication of which area a sign belongs to; that would be another good thing, which does not exist at the moment. But what else I am missing I cannot really say; nothing comes to mind. There are very many things I use: Gebärdenwelt, I also use Signtime, the homepages of the Deaf Association and of the regional associations. But what I am missing I cannot really say; perhaps something will come to mind, but at the moment there is nothing.

A4: Interview with Subject 3 (S3)

I: The first question: How old are you? S3: I am turning 36.

I: And do you have a smartphone? S3: Yes, I have the iPhone 5.

I: When did you buy your first smartphone? S3: That was a Samsung, five years ago.

I: And do you use the smartphone often? S3: Yes, daily.

I: And what for? S3: Well, I write SMS, I use WhatsApp, e-mail, the Internet and Facebook. Sometimes I also use the calculator, and I have used the games as well.

I: Since you have had a smartphone, do you also have more contact with deaf people? S3: That is a good question. I use it to arrange appointments, and it is also practical that I can send photos via WhatsApp, but whether I have more contact because of it is hard for me to answer; I do not know.

I: So it is rather the case that the smartphone is used for short messages, but a longer conversation tends to happen face to face. S3: Yes, that is how it is, right.

I: Do you also go to meetings of the deaf club? S3: Yes, of course.

I: Does it seem to you that fewer people have been at the deaf club in recent years? Could that have something to do with these new technologies, that they lead to more short messages being sent and fewer meetings taking place? S3: That is possible; it looks that way. But it is hard to answer. It is true that there are fewer people in the deaf clubs, but whether new technologies are to blame I cannot say. Nowadays there is the possibility of cancelling an appointment at short notice; that was not the case in the past, which is why people came to the meetings anyway, even if something else had come up.

I: Do you think these new technologies, like the smartphone or the computer, help you in your daily life? S3: Yes, I think so. With the computer less, but I see the phone as more important for daily life, because I can simply get information quickly. I can use Facebook for cooking tips, for example, or I also see signs that are newly in use today; I get all of that via the phone, via the smartphone.

I: Do you also have other technologies at home that support you, for example a light alarm clock or a vibrating alarm? S3: Once more, please; I did not understand.

I: Do you have other technologies at home that help you, for example a light alarm clock or a vibrating alarm? S3: Yes, I use that at home. I have an alarm clock that wakes me in the morning. I have moved to a new flat, and the light alarm does not work there yet. It would be my wish for it to work via the phone; that might be something we could use in the future.

I: What plays a role for you when you buy a new technology? What is important to you? S3: What is important for me is the video quality. FaceTime, for example, the program I use for video calls, really has to be optimal. It always depends on where you use it: if the reception is good, the video quality is also very good. So I need (..) I have a good data allowance; I need 3 gigabytes, that is what I pay attention to.

I: Do you know some websites that provide translation into sign language, for example with an avatar? S3: Something like that, German text being translated into signs, does not really exist. I have only seen it with SignOnOne, for example, where a text is translated, for instance the Kronen Zeitung being translated into sign language. Apart from that I have not seen anything like it. It would be nice if, in the future, that were also translated by an avatar. Whether the facial expressions would then be as good, or whether it would just be stiff standard signs rather than a really beautiful, lively sign language, I do not know, but it would be a good idea if something like that existed. I myself often do not understand complicated texts in the newspaper, so I would really welcome it if something like that existed. There are also many websites that are very complicated and hard to understand; one could use something like that there.

I: Where do you get most of your information, for example news: via the Internet, or television, or friends? S3: Which news do you mean?

I: For example the daily news, what is happening in the world, or information in general, when you are looking for something or want to know something. S3: Sometimes I do my own research, often deaf people tell me things, and on Facebook I also have many opportunities to learn about news. I do not have a newspaper subscription; that is less of an option for me. On Facebook I see, for example, that there was a plane crash, and if it interests me, I simply look into it myself.

I: Last week I sent an e-mail with a link to the UNI project. Did you look at the link? S3: Yes, I looked at it. I like it very much; I would want it immediately, I would get it right away. You could always use it quickly at a doctor's, for example, or at a public office, and you really get it served as text when someone speaks; and when I sign, it is made available to the hearing as voice output. So that is really a great technology, which should also exist in Austria.

I: Then you would buy it? S3: Yes, of course, I would buy something like that immediately. Then you no longer need an interpreter; you have someone, or something, available 24 hours a day, so to speak, that takes over the translation; that is great, yes. But then there would probably also be fewer interpreters, if the technology developed that way. But whether it really works fully, whether it really translates my signs completely, I do not know. Whether all the subtleties are really conveyed, or only what is signed with the hands, or really also the nuances that can be conveyed with facial expressions; I need those myself, I also need the mouth gestures and the facial expressions in addition. We will have to see, but I find the technical possibility great.

I: You also know the app SpreadTheSign? S3: Yes.

I: What do you find good about it? S3: It is a good thing. There are many new words in it, for example technical signs that I do not know, so I can look them up there, and there is also the possibility of comparing the signs of different countries. If I were going to Spain, for example, I could look at a few words beforehand, and if I came across someone there who is also deaf, I could already talk with him in Spanish Sign Language.

I: And do you think that Austrian deaf people also understand all the words in Austrian Sign Language that can be viewed on SpreadTheSign, since there are variants after all? S3: I do not know. When deaf people see an unfamiliar sign, perhaps they adopt it tacitly and simply sign it that way. But it could well be that when they then use it in daily life, other deaf people find this sign harder to understand. It could be that dialects are lost as it spreads further and the standard then prevails; that could be.

I: On Ledasila these variants do exist. S3: Yes, that is true; that is a good thing, yes. And the Carinthian dialect should actually stay; it should not disappear. There are different dialects in German too, and the way people speak in the Lavanttal should stay as well.

I: So that is something positive about Ledasila, that the variants exist. S3: Yes, it is very good that these variants are presented in Ledasila.

I: At the moment Ledasila only works with Internet Explorer. Do you think it is important that it also works with other browsers and as an app? S3: That would be my wish, that Ledasila also works as an app, sometime in the future. At the moment you can only view it in Internet Explorer; on Apple devices or an iPhone you cannot view it. That is a pity, yes.

I: Do you use, or do you know, other apps for deaf people as well? S3: Yes, there is SignMe, which is an app, and a second one for the health sector, produced by Ms Helene Jarmer, called "IsignIT"; that has already been produced, an app for Austria. And there are other apps from America and Germany, but I have not downloaded those. I do have one on my phone that comes from Germany; it is a sign language lexicon by Karen Kästner, who has her own app for German Sign Language in use, a large database for German Sign Language, and I sometimes look into it so that I can compare it with the Austrian signs.

I: And for the future, would you wish, for example, for avatars for newspaper articles, or for Ledasila as an app? S3: Yes, exactly, that would be nice. And it would also be good if interpreters could be reached via a tablet, for example. At present there is no 24-hour telephone relay service in Austria; it is limited to certain hours and ends at 12 noon, so for the rest of the day you cannot use an interpreting centre or telephone relay centre anymore. I would wish for it to be available 24 hours a day, for example if you have to go to the doctor. In Austria there is this telephone relay service, and in Germany there is "verbal voice"; one would have to check what the name for it is in Austria. There is a certain company in Vienna where interpreters are employed, a telephone relay service, but the interpreters are only available during a certain period. And I would also like to phone my mother some time and tell her what is new; at the moment I have to limit myself to short messages, that is, SMS. It would be very pleasant for me if, some evening when I felt like calling my mother, I could do so in sign language and it were translated into German for her; then I could also talk with her in more detail and would not have to restrict myself to the short messages I can send via SMS. And with this telephone relay service in Vienna I can really only arrange a doctor's appointment, for example; I cannot have longer conversations through it either.

A5: Interview with Subject 4 (S4)

I: First of all the question: How old are you? S4: I am 44 years old.

I: Do you have a smartphone? S4: Yes, exactly, I have one.

I: Approximately when did you buy your first smartphone? S4: This is my second one now. That was in 2013; before that I had an old smartphone, but it was not of good quality. Since 2013 I have really had a good phone.

I: Do you use the smartphone often? S4: Yes, very often.

I: What for, for example? S4: Mostly I use Facebook, I also use WhatsApp; let me check (...) yes, mainly Facebook. I also use SpreadTheSign, I take photos with the smartphone, and then the banking: I have an online bank account, which I also use via the smartphone, and then the information I get from A1. Not much more, actually.

I: Do you also have a lot of contact with deaf people via the phone? S4: Mostly via WhatsApp.

I: Are you also a member of the deaf club? S4: Actually not anymore; I am no longer a member myself. I hardly visit the club; mostly I meet deaf people privately.

I: But you used to be in the club at some point? S4: Yes, I used to be a club member, that is right.

I: And now no longer; why is that? No more time, or? S4: I used to be a club member in Villach, but there were problems. I played volleyball in the sports club, and later I got problems with my hands because of my rheumatism. After that I was only a supporting member, practically never went back or visited the club again, and then I left, and now I prefer to meet deaf people privately.

I: But privately you still have a lot of contact with other deaf people. S4: Yes, I do. Mostly it is my family, of course; I could not say that I have daily or constant contact. I arrange things via WhatsApp, for instance that we visit each other, go swimming together or meet for a coffee; that is what I do.

I: Do you think new technologies like the smartphone and the computer help you in your daily life? S4: The smartphone most of all; the computer actually less.

I: The smartphone for communication, for example? S4: Yes.

I: Do you have other technologies at home, for example a light alarm clock or a vibrating alarm? S4: I have a flashing light for the doorbell. For the fax, when one arrives, I have no flashing light, but I hardly use the fax anyway. Mostly my mother sends me a fax, but I use SMS and WhatsApp on the smartphone, and for sleeping I use an alarm clock that wakes me with light. I use the fax with my mother, and when I no longer need it, the fax will go too. And I can also send an e-mail with the smartphone; that is somehow very practical as well.

I: What is important to you when you buy a new technology, for example the smartphone; what do you look out for? S4: Can you give me an example? I am not quite sure what you mean.

I: For example how much it costs, or whether someone recommended it to you, or whether it performs well, that is, whether the smartphone is fast or has a lot of battery. S4: What is important is that it is practical, that you can reach someone quickly, that an e-mail can be read quickly. I cannot always switch on the computer for an e-mail, so it is important for me to have quick access.

I: Do you know some websites that provide translation into sign language, for example with an avatar? S4: I have heard that such a thing exists, but I do not use anything like that.

I: Where do you get most of your information, for example news, the daily world news: from the Internet, via television, friends? S4: Mostly via Facebook; really a lot of information comes from deaf people there. There are various groups and forums where deaf people exchange information; I am very interested in that. And of course from television as well. (.....)

I: Last week (...) the UNI project? (.....) the camera records the gestures, and then it is rendered as text and as speech. S4: Something like that exists? I have never heard of that, signs being recorded and translated into text; I have never heard of it. That must really be a great piece of work, but what if an error occurs in the translation; how are those handled? I can imagine speech really being translated into text, but signs into text, that is something completely new to me.

I: At the moment it exists only in America with American Sign Language, but there is also the possibility of storing specific signs or variants, so that the device adapts itself to the deaf person, so to speak. That is how it is supposed to work and thus ease communication between deaf and hearing people. S4: Very interesting.

I: What do you think of this idea; do you think it is good or bad? S4: I have never seen it, so I would have to try it out first. At the moment I find it hard to imagine. But I myself use the telephone relay service on the computer; I do use that, for example when I have a doctor's appointment or a complaint, then I contact the relay service. It is just a pity that this video window is very small and does not have particularly good quality either. I would really need a larger window with a sharp image; it would be really good if it were transmitted in HD. I have heard that in America there really are relay services that also employ interpreters and where the image quality is very, very good, really like on television, not this small image as we have it in Austria. There you have to pay very close attention; people who have problems with their eyes cannot use it for very long.

I: So that is something you would wish for the future, for example? S4: Yes, exactly: really a full-screen image where you can see very well, so that there are no misunderstandings.

I: You also know the app SpreadTheSign. S4: Yes, yes, I use it.

I: And how do you like it? S4: Yes, it is a good app. Some words are signed identically; there are different categories, and deaf people ask why signs look similar. For example in the categories baby, education and the general category, there are identical words in this app, and I am often asked why that is. But it is good. And most deaf people like it too.

I: But there are also variants in sign language. Do you think all Austrians understand the Austrian Sign Language used on SpreadTheSign? S4: I do think they are understood. Most deaf people only complain when they believe wrong signs are included. On Facebook, for example, I see a deaf group consisting of Germans, Italians and French, and it is good publicity when you can point them to the signs in SpreadTheSign. That is how this app is spreading more and more across Europe.

I: The website Ledasila, you know that as well, of course. S4: Yes, of course I know it. It is only for Austria; it contains only Austrian signs.

I: And what do you think of it? S4: Yes, it is okay. For me the second video makes no sense. There is a second video there that nobody actually needs; that was for the old, slow computers in the past, which nobody uses anymore. Everyone has powerful computers today, so the small image would no longer be needed; nobody uses it anymore, only the large videos are used. And a great many hearing people certainly need it.

I: At the moment Ledasila only works properly with Internet Explorer. Do you think it is important that it also works with other browsers or on the smartphone? S4: Yes, it would be important that it can also be used with Apple; Mozilla and Chrome as well, it does not run there. It would be good if it ran everywhere.

I: Do you use other apps? S4: Ledasila does not exist as an app.

I: But any other app besides SpreadTheSign that you use more often? S4: You mean with signs?

I: For sign language, for example. S4: I actually only use SpreadTheSign. I do not want to use too many apps on my smartphone, only selected ones; I do not download everything. That would perhaps be better on a tablet.

I: And do you have any wish for the future, any app that you could imagine helping deaf people? S4: Yes, perhaps there would be something, but I have no idea what.

A6: Interview with Subject 5 (S5)

I: First the question: how old are you? S5: 45, well, my 45th birthday is coming up soon; at the moment I am 44 years old.

I: And do you have a smartphone? S5: Yes, I use one.

I: And when did you buy your first smartphone? S5: I used an old mobile phone before, not yet a smartphone, and I have had the smartphone for four years? Yes, I think my first smartphone is four years old.

I: Do you use the smartphone often? S5: Yes, I use it often. I need it for communication, WhatsApp for example, and I send SMS, and it is also well suited for video calls.

I: Since you have had the smartphone, do you also have more contact with deaf people? S5: Yes, I have contact with many deaf people via WhatsApp, SMS and video calls. And with hearing people too, for example my sister or relatives; I can now also exchange SMS with them. I could not do that before; with the new technology, sending SMS back and forth really works much better now.

I: Do you also go to meetings at the deaf club? S5: Yes, I do go there.

I: Do you think fewer and fewer people have come to the meetings in recent years, or are there still just as many? S5: In the past these technical means did not exist, and the deaf clubs were always full. Twenty-five years ago, for example, the club meetings were always very, very well attended, but now it is actually declining.

I: Do you think that has something to do with these new technologies? That people perhaps rather send short messages and talk briefly that way? S5: It would be possible, yes. The young people come less often to the deaf meetings, to the clubs. They are also no longer involved in the sports clubs; they now take a different, integrated path through school. It is rather the case that deaf clubs have to look for deaf people and bring them into the clubs. And I see that in Graz, where I am also a member of the sports club, many young people take part, and I asked what the reason for that is. They said there are schools that have three or four deaf pupils, and these then also visit the clubs; they maintain contact with deaf people. In Carinthia there are young deaf people, or pupils, who have hardly any contact with other deaf people.

I: And do you think that new technologies like the computer and the smartphone help you in your everyday life? S5: Well, I use the computer, the smartphone and so on daily, yes.

I: The computer, for example: what for? S5: For example, I look at Gebärdenwelt on the computer, where I get information in sign language, or in the ORF TVthek I can watch the news with the interpreter inset. When I have a lot to do at home and miss the Zeit im Bild at half past seven and have no time to watch it, from half past eight I have the possibility of watching it again on the laptop in the TVthek. There I have the Zeit im Bild with the interpreter inset; that is an advantage for me. In the past it was always stressful to watch the news at half past seven, and now it is stored in the TVthek and I also have the possibility of watching it there with the interpreter inset. Or, for example, Konkret is also interpreted; I watch that too.

I: Do you know some websites that use avatars for translation? S5: (shakes head)

I: You do not know any. But would that be useful for you; would you wish for it? S5: No, I do not think so. My opinion: I do not consider it good. I want to see a real person signing. On the Zeit im Bild, too, a real person is placed there who reads the news; there are no avatars there either. So I do not like it. There are many homepages in Europe, and mostly you really see real people and not avatars.

I: Do you also have other technologies at home that support you, for example a light alarm clock or a vibrating alarm? S5: Yes, I do. I have an alarm clock with a visual alarm, and I have that for the doorbell as well.

I: And what plays a role for you when you buy a new technology, for example your smartphone? S5: What is important is that it can be used for signing. SMS was not bad, WhatsApp is not bad either, but video calling is simply absolutely essential for me; you can really hold a long conversation with someone. Just as hearing people can also talk on the phone for a long time, I can now do that in sign language too; that is very interesting, because a short message as SMS or WhatsApp is always only very brief when I have to write it in German. But when I can sign, I can really exchange with someone for a very, very long time.

I: Last week I sent a link by e-mail. Did you look at the link to the UNI project? S5: Yes, I looked at it. It is very astonishing, this project, very new; I do not think it is bad. It is a good thing, producing the translations with this tablet, and also that when hearing people speak, you can even read it; then the communication works well. I could imagine that. My smartphone also has a microphone, and when a hearing person says something, it is also displayed as text; I do have that function. But the other way round, when I sign, the hearing person does not get it displayed as text.

I: And would you buy it if it were also available here? S5: Yes, I would buy it.

I: You also know the app SpreadTheSign. How do you like it; what do you find good about it? S5: Yes, it is a good app. I like it very much as well. Hearing people can learn signs with it and find them very quickly; they can search for different words quickly. When you pass on the information that this app exists, hearing people download it quickly, like my sister, for example. And she then learns a few signs almost in passing. I am quite amazed at how quickly that goes. And I also told her that Ledasila exists; she does not use it, because it is only available on the Internet and you need more time for it. And at the university of applied sciences I also told the students that this app exists; SpreadTheSign is used the most, Ledasila less, because SpreadTheSign, as an app, is simply available more quickly. One simply has to accept that.

I: On SpreadTheSign, however, there is only one variant per language, so in Austria only Austrian Sign Language, and there are no dialects, no variants. Do you think deaf people will understand it anyway? S5: That is a good question; I am well aware of it. In SpreadTheSign one variant was selected, but many deaf people say they sign it differently. How can that be solved? One has to explain that a variant that received the majority was adopted, but the dialects still remain nevertheless. In Vienna, for example, the interpreters also sign in a particular way, and I do not complain about them signing differently from me.

I: On Ledasila: How do you like Ledasila? S5: Yes, it is a good thing. Ledasila is very good; a great many videos are stored there, and a lot is described as well, what the result looks like, what the semantics are, and there are also example sentences giving the context of a sign, which you can look at. There are also gloss transcriptions; I actually find that very good.

I: You already said that Ledasila can only be viewed in Internet Explorer. Do you think it is important that it also works in other browsers and as an app? S5: Yes, that would be good. Mainly, I think, it should be usable as an app. If you had Ledasila as an app, you could, for example, save the Carinthian signs from certain word fields and then look at them. I could imagine that; it would be a good thing.

I: Do you have other apps on your smartphone, similar to SpreadTheSign, that are for deaf people? S5: No, I have none. The others are mostly in English; they do not interest me. I do not understand anything there, so I delete them again right away.

I: Do you have any wishes for the future, any app or website that you would like, so that you can perhaps understand things better or simply communicate more easily? S5: I find this tablet with the UNI project quite especially remarkable; you could really use this application for short meetings; it would be very good for a short conversation, yes.
