STUDY OF THE USE OF WEARABLE DEVICES FOR PEOPLE WITH DIFFERENT TYPES OF CAPABILITY

Alejandro Rigueira Cabrera

Master’s Thesis presented to the Telecommunications Engineering School, Master’s Degree in Telecommunications Engineering

Supervisors: Prof. Enrique Costa Montenegro, Prof. Milagros Fernández Gavilanes

2017

What we think, we become. — Buddha

To my family and friends.

Abstract

Nowadays, in some territories, a large number of people possess a smartphone, and in some cases even more than one. Not only do they own one, but they also use it several times a day for different activities and applications. The smartphone penetration rate is really high, but other devices catalogued as intelligent have also appeared, as in the case of the smartwatch. The goal is to use one of these wearable devices, which can be purchased from any regular electronic devices distributor on the market, as an input element that allows interaction with the smartphone, much like a joystick or other external appliance. Our target audience is mainly people with possible conditions in hands and fingers that imply a lack of mobility, or limited mobility, in many of them. The aim is to facilitate access to anyone, anywhere, anytime. To bring this about, an Android app is developed as the proposed solution, able to detect arm movements using a smartwatch which has the necessary sensors for it. Although not all apps and aspects of the smartphone are reachable due to certain limitations, the proposed solution fulfills the proposed objective to a large degree, including easy access to a custom keyboard.

Key words: Accessibility service, Android, Android wear, Gesture sensing, Input device, Smartwatch.


Contents

Abstract i

List of Figures v

List of Tables vii

1 Introduction 1
1.1 Introduction 1
1.2 Objective 2
1.3 Target audiences 3
1.4 State of the art 3
1.5 Universal Design 6
1.5.1 What is it? 6
1.5.2 Principles and considerations 7
1.5.3 User interface, usability 8
1.6 Proposal 8

2 Technologies 11
2.1 Android 11
2.1.1 Why Android 11
2.1.2 Android fragmentation and devices 12
2.1.3 Accessibility on Android 14
2.1.4 Android limitations 15
2.2 Wearable device 17
2.3 Android wear 18
2.4 Development tools 19

3 Development 21
3.1 Mobile 21
3.1.1 Keyboard 21
3.1.2 Mouse cursor 25
3.1.3 Communication service 25
3.1.4 Preferences 26
3.2 Wear 26
3.2.1 Movements detection 26
3.2.2 Movements assignment 28
3.2.3 User interface 30
3.3 Tests and debugging 30

4 Use manual 33
4.1 Mobile 33
4.1.1 Keyboard 36
4.1.2 Mouse cursor 38
4.1.3 Settings preferences 39
4.2 SmartWatch 43

5 Conclusions 47
5.1 Possible improvements and future study lines 47
5.2 Conclusions 48

Bibliography 53

List of Figures

1.1 Worldwide smartphone sales. Units sold, in thousands 1
1.2 Smartwatch sales forecast according to Gartner. Millions of unit sales 2
1.3 Wristband-type input device 5
1.4 Sensing arm-shape change based on capacitive sensing 6

2.1 Android fragmentation 12
2.2 Screen sizes and densities 14
2.3 Asus ZenWatch 2 17
2.4 Smartwatches using Android Wear 19

3.1 Application keyboard 22
3.2 Activity lifecycle 23
3.3 IME lifecycle 24
3.4 Mouse pointer 25
3.5 Reference smartwatch 29
3.6 Smartwatch on a left hand (left figure) and on a right hand (right figure) 29
3.7 Interaction modes on smartwatch 30

4.1 App icon 33
4.2 Application main screen 33
4.3 About application message 34
4.4 Enabling keyboard 34
4.5 Selecting keyboard 35
4.6 Notification application running 35
4.7 Usage test screen 36
4.8 Application keyboard 36
4.9 Keyboard rightward movement selection 37
4.10 Keyboard leftward movement selection 37
4.11 Keyboard upward movement selection 37
4.12 Keyboard downward movement selection 37
4.13 Enabling mouse cursor 38
4.14 Mouse cursor 39
4.15 Application settings preferences 39
4.16 Application hand settings 40


4.17 Application leftward sensors sensibility preference 41
4.18 Application back button preference 41
4.19 Select directions on smartwatch 42
4.20 Show directions assigned on smartwatch 42
4.21 Application keyboard scanning time settings 43
4.22 Main, buttons, swipe and sensors screens on smartwatch app 44
4.23 Get out of swipping screen on smartwatch app 44
4.24 Select and back buttons activation using sensors 45
4.25 Sensor activation screen 45

List of Tables

2.1 Worldwide smartphone sales to end users by operating system in 4Q16 (thousands of units) 11
2.2 Platform versions 12
2.3 Screen sizes and densities 14
2.4 Android wear versions 18


1 Introduction

1.1 Introduction

Nowadays, in some territories, a large number of people possess a smartphone, and in some cases even more than one. Not only do they own one, but they also use it several times a day for different activities and applications. According to the Deloitte Mobile Consumer Report 2015 [1], 88% of Spaniards have a smartphone. Only Singapore has a higher penetration rate of smartphone devices, at 92%. The average penetration rate in the EU countries participating in the study is 78%.

The report shows that more than 50% of respondents claimed to check their mobile within the first 15 minutes of the day. They also claim to check it an average of 41 times a day. This, according to Deloitte, gives "a very clear idea of the high degree of digitalization of Spanish society at present".

Figure 1.1: Worldwide smartphone sales. Units sold, in thousands


The great penetration of smartphones in society and their wide deployment is clear. They are used more and more, as can be seen in Figure 1.1. There are more than 700,000 apps on Google Play [2], Google's official Android app store: applications for messaging, management, buying and selling, education, games, photography, information... There are also movies, music, books, magazines and more.

In addition to the well-known smartphones, other devices have also appeared that are catalogued as intelligent. Some of these, in turn, fall within the category of wearable devices; that is, they can be worn on the body as implants or accessories. Among these devices are those known as smartwatches, which have grown in popularity and sales. In Figure 1.2 you can see the sales forecast for smartwatches according to Gartner [3, 4] (a consulting and research company for information technologies).

Figure 1.2: Smartwatch sales forecast according to Gartner. Millions of unit sales

1.2 Objective

Being able to access all these services and products can be very useful for any user. However not everyone can achieve it for several reasons. We will talk later about people who have difficulties making use of some of these devices. We will also talk about a design model that aims to get the environments, products, services and systems to be employed by as many people as possible.

Currently, the typical application of a smartwatch is as a notifying device, although there are many applications that are usable directly on it and do more than just notify. Examples of this are simple actions that interact with the mobile device, or the acquisition of data such as heart rate.

The goal is to use these wearable devices, such as smartwatches, as an input element that allows interaction with the smartphone, much like a joystick or other external appliance. Although their use is possible for a large number of people, and so it is intended, the target audience on which the study is mainly focused is people with motor conditions that make normal use of a smartphone or tablet difficult.


1.3 Target audiences

Alterations in the regular operation of hands and fingers may be congenital or caused by external reasons as a result of diseases, accidents or other events. The incorrect and unusual development of hands can occur sporadically or be the result of an inheritable genetic alteration. There are other less common causes such as environmental factors, diet and infections, among others. A clear and unfortunate example of this was the use of thalidomide in pregnant women [5, 6].

Congenital anomalies of hands are common, but most of these are minor [7, 8]. Causes are very diverse and can occur sporadically or have a genetic origin. Some alterations can significantly affect the function of the hands and can have a significant psychological impact on the patient and their environment. This type of disturbance affects approximately 1% to 2% of all live births, and alterations in the upper limbs are observed in approximately 10% of these patients. It is estimated that the current incidence of malformations or congenital abnormalities in hands is 2.3 cases per 1,000 live births.

Some alterations are agenesis and/or malformations in the phalanges. Agenesis is the partial or total absence of an organ or tissue of the organism. Failure to develop normally and correctly can be caused by a variety of reasons. Other alterations may lead to a lack of independence and digital control due to conditions at the metacarpal level. This can result in a lack of precision and fine motor control while still allowing gross motor control.

In addition to possible conditions in hands and fingers, there are several others that imply a lack of mobility, or limited mobility, in many people. They can also cause erratic movements, as in Parkinson's Disease or Cerebral Palsy [9, 10].

In any case, for the reasons previously explained, these people have difficulties making use of smartphones and other devices that require certain precision. We want to avoid depriving a person of being able to use what are nowadays high-value tools, with all the possibilities offered by these devices and their computational ubiquity. The aim is to facilitate access to anyone, anywhere, anytime.

It should be noted that, in addition to the above, the aim is for anyone to be able to make use of the accessibility tools that will be presented.

1.4 State of the art

There is a multitude of wearable devices with several purposes and utilities. A wide range of prototypes and proposals exists, and products are also available for sale. There are glasses, bracelets, watches, rings, clothes, etc. The most popular are probably bracelets and smartwatches, able to display notifications and interact with the mobile phone, measure pulse, sleep time and many more things.


Some of the main uses and areas are:

– Health
– Sports and wellness
– Entertainment
– Industrial
– Military

Some considerations need to be taken into account [11]. One of them is the target audience, more concretely, people with some kind of motor disability or any disease that affects motor skills. Another important aspect is the scope and objective of possible solutions that facilitate access for the target group. The goal is to use some wearable device as an input mechanism. Its operation can be very wide, allowing another device (or several) to be handled completely, or very specific, focusing on a task such as the use of a keyboard specially designed for some circumstances. These solutions can work in isolation or collaborate with other inputs such as voice recognition or electromyography¹ [12], in addition to more “traditional” methods.

Once these elements are fixed according to the context, the operation must be adjusted to the purpose, so the complexity of using the proposed solution and its possible configuration must be taken into account. In the case of complex solutions, more training could be required for the operator to use them correctly, as well as a more arduous and difficult configuration. This can be very tiring and cumbersome, with everything that entails. The user may not understand how to use the solution correctly or how to configure it, possibly doing it wrong and not fulfilling its purpose.

There are no apps on the Play Store related to the target group. At least none were found after several searches in Spanish and English. This means that, although they may exist, it is not easy for any user to find them. There are many tools and applications to use a smartwatch as an output device or notifier, but few that use it as an input device, much less beyond the interaction with a particular application, such as an instant messaging service or alarms.

There are no products on the market aimed specifically at the accessibility of people with disabilities through the use of wearables. There are some products that can be used for this purpose, such as a ring [13, 14, 15] that, through gestures, allows some tasks to be performed, like launching applications or writing. The rest of the elements found are ideas and prototypes that make use of custom hardware or that are currently not commercialized among wearable devices. One of them makes use of the margin or edge of a smartwatch to extend its possibilities [16]. Another is able to use the position of the hand as an input device for cross-device applications [17], which is very interesting for the case study.

¹ Electromyography or EMG is a procedure to detect the electrical signals of the nerve cells that control muscles (motor neurons) using tiny devices called electrodes. An EMG translates these signals into values able to be interpreted.


One would have to study whether the ring requires much precision, something that a person with some motor disability might not have for its correct use. Its operation makes use of gyroscopes and accelerometers to detect the movements (elements existing in diverse smartwatches), as well as sensors in the body of the ring itself that detect the user's pulse. The price is no greater than that of a smartwatch, so it can be an interesting product. It should be noted that although the device is worn on a finger, it actually depends more on movements involving the whole hand and not just the finger individually. This is important because, speaking of possible reduced mobility, it would not be surprising if the user were limited in their joints, and the precision of their movements can in any case be very variable. There are ideas and solutions that directly involve the movement of the fingers, like the use of a camera to capture their placement and movement [18, 19], or gloves that capture the movement of all elements of the hand [20, 21].

A very interesting idea is the use of sensors around the wrist to detect the shape of the hand at a given instant, for example whether it is a clenched fist, an open hand or two straight fingers [22]. If this idea and technology were added to smartwatch straps, it would expand their input possibilities.

Figure 1.3: Wristband-type input device.

In Figures 1.3 and 1.4, the technology mentioned as an idea for a prototype is shown. It would multiply the possibilities of detecting different movements, being able to differentiate, for example, moving the hand upwards with the palm open or with a closed fist. Although for certain users this might not be possible, there is no doubt about the great number of possibilities offered by this idea.


Figure 1.4: Sensing arm-shape change based on capacitive sensing.

Another idea worth mentioning is to detect the user blowing on the watch [23] through the microphone, if the smartwatch has one. However, the large amount of noise that the microphone can pick up, and the discrimination of the desired sounds, can make this complicated in certain environments and situations, or for people who have difficulties performing this action.

1.5 Universal Design

1.5.1 What is it?

The term Universal Design (also known as “Design for all”) [24] was coined and defined by the American architect Ronald L. Mace (1941-1998) and was ratified and qualified in the Stockholm Declaration of 2004, in which its objective was defined as: “. . . make it possible for all people to have equal opportunities to participate in every aspect of society. . . [for which] the built environment, everyday objects, services, culture and information [. . . ] must be accessible and useful for all members of society and consistent with the continuous evolution of human diversity” [25].

Design for all is a design philosophy that aims to ensure that environments, products, services and systems can be used by as many people as possible. It is a design model based on human diversity, social inclusion and equality. It is applicable, and in fact employed, in architecture, engineering, web pages and software development, among other fields of application.

Some other definitions are as follows:

– “The design of products and environments that can be used by all people, to the greatest extent possible, without the need for adaptations or specialized designs” [26]

– “Design of products, services and environments that can be used by as many people as possible, regardless of age and physical characteristics (e.g. weight, visual or auditory capacity, and arm mobility).”[27]

– “Design for All in the Information Society is the conscious and systematic effort to apply principles, methods and tools proactively, with the aim of developing products and services of Telecommunications and Information Technology (T&T) that are accessible and usable by all citizens, avoiding the need for later adaptations or specialized designs”[28]

1.5.2 Principles and considerations

Some general principles are:

• Equitable use: the design is useful and marketable to people with diverse abilities.

• Flexible use: the design accommodates a wide range of individual preferences and abilities.

• Simple and intuitive: the use of the design is easy to understand, taking into account the experience, knowledge, language skills or degree of current concentration of the user.

• Perceptible information: the design effectively communicates the necessary information to the user, taking into account the environmental conditions or the user's sensory capacities.

• Tolerance to error: the design minimizes the risks and adverse consequences of involuntary or accidental actions.

• To require little physical effort: the design can be used effectively and comfortably with minimal fatigue.

• Size and space for access and use.

Some principles and considerations regarding the mobile interface design [29, 30] are as follows:

• Screen size: mobile screens are, or can be, small, even the bigger ones.

• Make navigation simple: keypads and touch screens do not make for precise navigation.

• Keep content to a minimum: make content short and to the point, remembering the small screen space and adapting to it.


• Reduce the inputs required from users: the less users have to fiddle with their mobile phone, the more they are going to enjoy using it.

1.5.3 User interface, usability

The screen of a smartwatch is small for certain functionalities. If you need or want to include many functionalities or a lot of input interaction in these devices, it is very important to plan it well, designing how it will be done. What a smartphone could do with several buttons cannot always be translated to screens of such small dimensions as those of a smartwatch. Some buttons on the smartwatch screen may be acceptable for certain applications or for certain people. People who have no problem using the on-screen keyboard on a mobile device will not have much trouble pressing buttons of a similar size on these screens. However, someone without much precision in their movements, or who for whatever reason finds it difficult to get their interaction right, will appreciate handling of on-screen elements that requires less fine control.

In addition to the size, the issue of motion detection appears. It is not only important to be able to distinguish different movements with acceptable precision; it is equally crucial to take into account what these movements are. Not all movements are possible, or just as easy, for everyone. Again, it is important to remember the different capacities of the users and their possible limitations.

It is not only important to know what happens on the wearable device, but also on the device with which it communicates. Using an input device such as a joystick, gamepad or keyboard, you can quickly see that most applications are not designed for it. It is not easy, and sometimes impossible, to scroll through the interface and access all its elements. Although there are design principles that take into account human diversity, social inclusion and equality, they are not always followed for various reasons. Further information on design for all or universal design can be found in the corresponding annex.

1.6 Proposal

As we can see in the state of the art, most of the proposals aimed at our target are prototypes and ideas which are not on the market. If one wanted to obtain optimal results, it would be necessary to design custom hardware for the needs of users. However, we intend to use common commercial devices so the solution can be easily adopted. We also observe that there are currently no solutions for the described objective.

Our proposal is to use a smartwatch with the required features to build a prototype. It can be purchased from any regular electronic devices distributor. It will need to have sensors that allow it to distinguish movements made by the hand. The implemented solution will have to be usable satisfactorily by the target public. Given the variety

of this group, it is necessary to provide some configuration freedom, like being able to choose the hand used or the movement sensitivity.


2 Technologies

2.1 Android

We have focused on the study of the use of wearable devices with Android. Both smartphones (or tablets) and wearable devices run it. The proposal explained below, as well as some considerations carried out, are under this operating system. Today there are other devices with Android, such as TVs, to which the study could be extended.

2.1.1 Why Android

Android is a free and open-source operating system. Both the operating system and the tools developed for it are free, although a lot of the software bundled with it (including Google apps and vendor-installed software) is not open source. There are versions for phones, tablets, TVs, media centers, game consoles, etc. The devices that use it are very varied in functionality, features and price, widening the range of possible users. It is a widely used operating system, as we can see in Table 2.1.

Operating System   4Q16 Units   4Q16 Market Share (%)   4Q15 Units   4Q15 Market Share (%)
Android            352,669.9    81.7                    325,394.4    80.7
iOS                 77,038.9    17.9                     71,525.9    17.7
Windows              1,092.2     0.3                      4,395.0     1.1
Blackberry             207.9     0.0                        906.9     0.2
Other OS               530.4     0.1                        887.3     0.2
Total              431,539.3   100                      403,109.4   100

Table 2.1: Worldwide smartphone sales to end users by operating system in 4Q16 (thousands of units)

Although the numbers may or may not correspond with total accuracy to reality, they

serve to give an idea of their magnitude. There is a clear dominance of Android smartphones, followed, although at a distance, by those that use iOS. This means that an application designed for Android could potentially be used by 80% of users with a smartphone.

2.1.2 Android fragmentation and devices

Despite the advantages offered by the number of devices that use this operating system, their variety poses a challenge for the correct development of applications. There are multiple versions of Android with certain differences between them, some more important than others: from obsolete or new methods and services to varying security restrictions, with new ones being added or existing ones hardened.

Version        Code Name            API   Distribution (%)
2.3.3-2.3.7    Gingerbread          10    1.0
4.0.3-4.0.4    Ice Cream Sandwich   15    0.8
4.1.x          Jelly Bean           16    3.2
4.2.x          Jelly Bean           17    4.6
4.3            Jelly Bean           18    1.3
4.4            KitKat               19    18.8
5.0            Lollipop             21    8.7
5.1            Lollipop             22    23.3
6.0            Marshmallow          23    31.2
7.0            Nougat               24    6.6
7.1            Nougat               25    0.5

Table 2.2: Platform versions

Figure 2.1: Android fragmentation

The data reflected in Table 2.2 and Figure 2.1 were collected over a period of 7 days ending May 2, 2017. Any versions with less than 0.1% distribution are not shown.

As it can be seen, there is a great fragmentation. It is a problem in the Android ecosystem that makes it difficult to create applications that work on all devices or perform just as

well. 81.9% of devices use a version greater than or equal to Android 4.4 (KitKat) and 70.3% use a version greater than or equal to Android 5.0 (Lollipop). Developing an application capable of working correctly from one of these versions onwards ensures that most users will be able to use it.

Not only are there multiple versions being updated, but it is also very important to point out the various distributions that exist. The first one is probably the most common of all: Android as Google ships it on its Nexus and Pixel devices, the version that is normally used as a synonym for Android. This version has the peculiarity of being closely linked to Google applications, and many of its services have been introduced to integrate with functions as basic as the phone, contacts or messages. The second most important form of Android is the Android Open Source Project, better known as AOSP. This is the equivalent of the Android for Nexus and Pixel but totally free, so the closed code for Google apps is not included. This version is probably the most important of all, even more than Google's own: it is the origin of the success of the system and the form of Android on which all the others are based to make their own. Google is the main contributor to AOSP, but its development is not intrinsically linked to it.

On the other hand, we can find differences depending on the manufacturers. The versions created by Samsung, HTC, Sony, LG and others are not only different in appearance; there are a lot of lines of code inserted in their cores that considerably increase and change the possibilities of Android. However, it seems that Google wants to get rid of these forms of Android. Perhaps precisely for this reason it is now urging manufacturers to upload the different parts of their customization layers to Google Play, leaving the Nexus form of Android underneath, with special apps and launchers on top of it.

Another form of Android is ROMs, which could be considered an intermediate case between AOSP and the manufacturers' Android. Basically they are modifications of AOSP, but instead of being created by multinationals, they are created by groups of developers whose main characteristic is that they also release the code. A feedback loop with AOSP occurs that does not exist in the case of the manufacturers' ROMs. There are a lot of ROMs, from the commercial Cyanogen to Paranoid, AOKP... All of them can be easily installed and provide a fairly comprehensive overview of how important it is for AOSP to continue to compete in quality with Android (Google).

Although it may seem merely anecdotal, the variety of versions that exist has major implications. Some manufacturers limit certain possibilities even if they exist in other versions. Below we will see some examples applied in the development of the proposal presented to fulfill the proposed objective.

Not only are there multiple versions of the operating system, but there is also a wide range of devices. As previously seen, around 80% of smartphones sold have Android as their operating system. Among them, we find a great variety of trademarks with various models on sale: not only smartphones but also tablets and other devices that use this system, although we will focus on the first two. Due to this, there are devices with great variety in memory, RAM or processor, among other technical details. We should not forget another important detail, the screen size and its density, as can be seen in Table 2.3 and Figure 2.2. What for the developer may be an interface design problem may result, for some users, in drawbacks for usability (a term explained in detail below). On a very large screen it can be difficult to access all of its points. On a small one, we could find problems with little precision in the movements.


          ldpi    mdpi    tvdpi   hdpi    xhdpi   xxhdpi   Total
Small     1.0%    -       -       -       -       -        1.0%
Normal    -       2.1%    0.2%    34.6%   34.8%   17.6%    89.3%
Large     0.1%    3.5%    1.8%    0.4%    0.4%    -        6.2%
Xlarge    -       2.4%    -       0.5%    0.6%    -        3.5%
Total     1.1%    8.0%    2.0%    35.5%   35.8%   17.6%    -

Table 2.3: Screen sizes and densities

Figure 2.2: Screen sizes and densities

Size:

• xlarge: at least 960dp x 720dp
• large: at least 640dp x 480dp
• normal: at least 470dp x 320dp
• small: at least 426dp x 320dp

Density:

• ldpi (low): ∼120dpi
• mdpi (medium): ∼160dpi
• hdpi (high): ∼240dpi
• xhdpi (extra-high): ∼320dpi
• xxhdpi (extra-extra-high): ∼480dpi

2.1.3 Accessibility on Android

Android implements a series of services and aids for accessibility. Among them are the following.


• Receive voice messages.
  – TalkBack: allows you to interact with the device via voice messages and touch options. TalkBack describes actions and reports alerts and notifications.
  – Listen selection: allows you to listen to voice messages only on certain occasions.
• Use a switch, keyboard or mouse: use a switch or keyboard to control the device.
• Use voice commands: allows you to use the voice to open applications, scroll and edit text without using hands. This service is a limited beta and is only available in English.
• Use a braille display.
• Adapt the screen.
  – Screen content and font size.
  – Expansion gestures.
  – Color and contrast options.
• Subtitles.

However, some of these options are only available in the more modern versions of Android. As we have seen above, the high fragmentation of this operating system is a problem when it comes to accessing these accessibility facilities. Also, some of the above options may be slow and not very fluid; an example of this is using screen sweeping to select elements on it. Others, due to their characteristics, may be uncomfortable at times. Using the voice, for example, can be complicated in very noisy environments, or its use can be inconsistent.

2.1.4 Android limitations

In developing this project we found several limitations, some of them due to security aspects associated with Android. Others are due to the design of the applications, since not all of them are navigable or easy to interact with using joysticks or directional pads. Smartwatches, in general, are not intended for this type of solution but mostly for tactile use only, in addition to the possibility of using voice commands (in current versions).

Some problems arise due to the aforementioned fragmentation of Android. It is difficult to create applications and services that work well for all devices that use this operating system because of it. There are old versions that we cannot disregard in order not to leave a considerable number of people out. More current versions, like Android 6 (Android Marshmallow), require explicitly asking the user for permission to perform some actions. In the same way, some actions are treated differently depending on the version, or with different security restrictions, which have been increasing lately. However, despite being good news for the security of user data and their terminal, this can also be non-transparent at times, or it can even be annoying to be authorizing permissions constantly. The

less the user is asked, or interrupted while using the functionality, the better the experience provided.

Not only does the version of Android matter, but also whether it is AOSP, the Google version, a ROM like Cyanogen or a customization from some manufacturer. For example, Xiaomi [31], a popular Chinese company dedicated to the design, development and sale of smartphones, apps and other electronic products, includes an operating system customization on its devices known as MIUI [32]. It modifies the usual appearance of any Android device, including features and limits. Among these modifications is that it does not allow the authorization of accessibility services beyond its own. This implies that certain functionalities that could be carried out on other devices will not be available on those of this company. The same can happen with other versions of Android implemented by some companies.

This solution proposes using a smartwatch and gestures with the arm as if it were a joystick or a directional pad. This implies that the events corresponding to these buttons or movements should be replicated in response to the desired gestures, as if they came from a joystick. Any application has the ability to launch events, but only within itself: security restrictions prevent them from being injected outside the context of each application. Only a keyboard is capable of launching events into the application currently in focus. One solution to carry out the above is therefore to create our own keyboard responsible for emulating the desired interactions, as sketched below.
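As an illustration of this point, the following is a minimal sketch of how an IME can inject directional-pad events into whichever application holds the input focus. It relies on the standard InputMethodService API; the class name and the gesture hooks are invented for the example and are not the thesis code.

import android.inputmethodservice.InputMethodService;
import android.view.KeyEvent;

public class GestureInputService extends InputMethodService {

    // Emulate one press of a directional-pad key in the focused application.
    // sendDownUpKeyEvents() delivers a DOWN/UP pair through the current
    // InputConnection, i.e. only into the app that currently owns the input
    // focus, which is why the solution has to be packaged as a keyboard.
    private void emulateDpad(int keyCode) {
        sendDownUpKeyEvents(keyCode);
    }

    // Hypothetical hook called when the smartwatch reports a rightward gesture.
    void onGestureRight() {
        emulateDpad(KeyEvent.KEYCODE_DPAD_RIGHT);
    }

    // Hypothetical hook called when the smartwatch reports a selection gesture.
    void onGestureSelect() {
        emulateDpad(KeyEvent.KEYCODE_DPAD_CENTER);
    }
}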

A hardware device connected via USB is not recognized in the same way as a generic device adapted for this use. Although the events launched are the same, such as pressing the downward-pointing button on a directional pad, a device connected via USB is able to access more screen elements than an event launched from another service or application, including an event launched from ADB¹ [33]. For example, when accessing a drop-down list, a hardware device is able to scroll through it while a software element is unable to.

The accessibility services created cannot access the device interaction buttons, whether hardware or software: from buttons such as raising or lowering the volume to the back or home buttons. Pressing some of them can be simulated with events. There is, however, the aforementioned problem of the size limitations of the smartwatch and the movements that users could make more or less comfortably (or at all). That is, it would be possible to add interaction with any of these buttons, but there is no space available for it on the smartwatch due to the screen size. Nor can these services access the notification bar, where configuration shortcuts are also available.

¹ Android Debug Bridge (ADB) is a versatile command-line tool that lets you communicate with a device (an emulator or a connected Android device). It facilitates a variety of device actions, such as installing and debugging apps, and it provides access to a Unix shell that can be used to run a variety of commands on a device.


The official information provided to developers on some elements of accessibility should be broad, with numerous examples and applications. Unfortunately, its absence is surprising.

2.2 Wearable device

As has been shown, there are many ideas and prototypes. However, for this development the intention is to use a commercial device that anyone can access. Given the proposed idea, a device with sensors capable of detecting movements is necessary. We chose the Asus ZenWatch 2 smartwatch [34].

Figure 2.3: Asus ZenWatch 2

Existing sensors in the smartwatch are:

• android.sensor.accelerometer
• android.sensor.gyroscope
• android.sensor.gravity
• android.sensor.linear_acceleration
• android.sensor.game_rotation_vector
• android.sensor.gyroscope_uncalibrated
• android.sensor.significant_motion
• android.sensor.step_detector
• android.sensor.step_counter
• android.sensor.tilt_detector
• android.sensor.wrist_tilt_gesture
• com.pnicorp.sensor.sleep
• com.pnicorp.sensor.coach
• com.pnicorp.sensor.inactivity_alarm
• com.pnicorp.sensor.activity

The accelerometer sensor indicates the acceleration of a movement on the three axes X, Y and Z. This sensor, unlike the linear acceleration one, includes the acceleration of gravity. In

other words, at rest it will detect a value of approximately 9.8 m/s² (the acceleration of gravity), whereas the linear_acceleration sensor removes this gravity-induced component. The gyroscope sensor measures the rotation of the smartwatch. Although the smartwatch reports that all of these sensors exist, not all of them are accessible; this is the case for android.sensor.tilt_detector and android.sensor.wrist_tilt_gesture.
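As a brief illustration, the sketch below registers the two sensors most relevant to the proposal and shows where the readings arrive. It assumes a plain Activity on the wear side; the class name is invented, and the null checks reflect the fact that not every advertised sensor is actually usable.

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class SensorProbeActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);

        // TYPE_LINEAR_ACCELERATION already has gravity removed, so at rest its
        // values stay near zero, unlike TYPE_ACCELEROMETER.
        Sensor linear = sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION);
        Sensor gyroscope = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);

        if (linear != null) {
            sensorManager.registerListener(this, linear, SensorManager.SENSOR_DELAY_GAME);
        }
        if (gyroscope != null) {
            sensorManager.registerListener(this, gyroscope, SensorManager.SENSOR_DELAY_GAME);
        }
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values[0..2] hold the X, Y and Z components for both sensor types.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }
}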

2.3 Android wear

This operating system was announced on March 18, 2014 [35]. Since its announcement and appearance until today there have been several versions with modifications that added functions, as can be seen in Table 2.4.

Version   Android base system   Release date
4.4W1     4.4                   June 2014
4.4W2     4.4                   October 2014
1.0       5.0.1                 December 2014
1.1       5.1.1                 May 2015
1.3       5.1.1                 August 2015
1.4       6.0.1                 February 2016
1.5       6.0.1                 June 2016
2.0       7.1.1                 February 2017

Table 2.4: Android wear versions

When the system was announced, it was indicated that manufacturers would have no option to incorporate their own personalization layer. However, Google backtracked and allowed customization by both manufacturers and developers. In this way the same problems may arise, like the fragmentation of Android mentioned above. However, to carry out this proposal no special services are required on the smartwatch. It is only necessary to have suitable sensors to be able to detect the movements, and communication with the device to which it is connected. This communication is carried out using Bluetooth; a Wi-Fi connection may also be used. Figure 2.4 shows smartwatches using Android Wear.

Android Wear allows you to use the device with a series of gestures. For example, turning the wrist outward moves downwards through the interface of the wear device. Although this may be interesting, it should be deactivated in order to use the proposed solution correctly.


Figure 2.4: Smartwatches using Android Wear

Voice commands can be used to perform tasks with the Android Wear watch. Users can use it to control their phones: for example, they can query, accept and reject important information such as incoming calls and messages directly from the watch, and track events, tasks, calendar and notes, plus various applications with versions for it. These are usually simple tasks; the emphasis is on receiving and displaying notification information.

2.4 Development tools

To build a prototype of the proposed solution, several tools have been used. Android Studio was used for the Android development. It is the official integrated development environment (IDE) for the Android platform. It is based on JetBrains' IntelliJ IDEA software and designed specifically for Android development. It contains the Android software development kit (SDK) and provides features like Gradle-based build support and an Android Virtual Device (emulator) to run and debug apps.

To manage the source code and all the resources, Git [36] was employed. It is a free and open-source distributed version control system (VCS) for tracking changes in computer files and coordinating work on those files. There are several popular services that provide Git hosting, like GitHub [37]. We used Bitbucket [38] because it provides free private Git repositories.

We also needed physical devices to test real use of the implemented solution. The smartwatch was already mentioned and shown. As for the phone side, we used a BQ Aquaris M4.5 smartphone and a Xiaomi Mi Pad tablet. Using various distinct devices lets us study the performance on different operating system versions and screen sizes. The smartphone screen is 4.5 inches and the tablet's is 7.9 inches. The first device runs Android 6.0 Marshmallow and the second one runs MIUI 7 [32], Xiaomi's customized Android 4.4 KitKat.


3 Development

Details on the development of the previous proposal are explained below, indicating the solution implemented, the problems encountered and the solutions applied to them. There are two differentiated parts: mobile and wear. They respectively correspond to the smartphone and smartwatch applications. Both parts communicate with each other to exchange information.

3.1 Mobile

3.1.1 Keyboard

A lot of applications require the use of a keyboard or input method editor (IME). One of the most popular applications, used several times a day by the vast majority of smartphone users, is an instant messaging application. Although it allows sending diverse elements like images, videos or voice notes, it was originally designed around using the keyboard to transmit a message. It is true that there is the possibility of using voice commands, but only in the most modern versions of Android and, as seen earlier, it is not always a possible option. In the same way, a user may want to send a message with some privacy, without the need for whoever is around to hear it.

An input device, such as a joystick, cannot access the keyboard that appears on the screen. This is a major limitation, so it is necessary to create our own keyboard, able to launch events outside the application, to provide a solution for accessing it: a keyboard on which the user can scroll and select what he or she wants, and which can be adapted as desired.

A keyboard can be viewed as a matrix of alphanumeric characters. The most common keyboard configuration is QWERTY, but there are other standard configurations like AZERTY, QWERTZ or HCESAR, in addition to the possibility of configurations adapted to specific needs. It should include a series of rows and columns through which the user should

be able to scroll. The scrolling could even be performed automatically, so that the user only has to indicate the desired key when it is the currently selected position. In Figure 3.1 we can observe the QWERTY configuration of the proposed application keyboard. There are also other alphanumeric characters, which the user can reach by switching the keyboard layout with the corresponding key.

Figure 3.1: Application keyboard

Each one of the previous keyboard screens corresponds to a specific XML file which describes it. Each of these contains a matrix indicating the different rows with the codes and labels of each key. The keyboard is also composed of two layouts, which implement the custom keyboard and the popup layout shown when the user presses a key. We also need the drawables for those keys which have icons, like the return key.

We also require a custom keyboard view to create and draw the desired matrix. There is a “KeyboardView” class which we have extended to fulfill our needs, including the different matrices. Using this custom keyboard view we can highlight and select keys as in some existing solutions; the one employed by Stephen Hawking is a good example [39]. In that example a cursor scans across the keyboard by row or by column and Hawking can select a character by moving his cheek to stop the cursor. Instead of selecting row and column, we have created a cyclic keyboard: when a row finishes being scanned, the cursor goes to the start of the next row. Another approach is to move the cursor with each command, key by key. A sketch of this scanning logic is shown below.
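The following is a minimal sketch of that cyclic scan, assuming a KeyboardView subclass that can redraw a highlighted key. The class, field and interface names are illustrative, and the scan period stands in for the user-configurable scanning time.

import android.os.Handler;
import android.os.Looper;

public class KeyScanner {

    // The view redraws the given key index as the highlighted one.
    public interface HighlightListener {
        void onHighlight(int keyIndex);
    }

    private final Handler handler = new Handler(Looper.getMainLooper());
    private final HighlightListener listener;
    private final int keyCount;        // total number of keys in the current layout
    private long scanPeriodMs = 1000;  // user-configurable scanning time
    private int highlighted = 0;       // index of the currently highlighted key
    private boolean running = false;

    public KeyScanner(int keyCount, HighlightListener listener) {
        this.keyCount = keyCount;
        this.listener = listener;
    }

    private final Runnable step = new Runnable() {
        @Override
        public void run() {
            // Cyclic scan: after the last key of the last row, wrap to the first key.
            highlighted = (highlighted + 1) % keyCount;
            listener.onHighlight(highlighted);
            if (running) {
                handler.postDelayed(this, scanPeriodMs);
            }
        }
    };

    public void start() {
        running = true;
        handler.postDelayed(step, scanPeriodMs);
    }

    public void stop() {
        running = false;
        handler.removeCallbacks(step);
    }

    // The key committed when the "select" command arrives from the smartwatch.
    public int currentKey() {
        return highlighted;
    }
}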

Android IMEs make use of a special service called “InputMethodService”, so we need to extend it and implement the keyboard listener “KeyboardView.OnKeyboardActionListener”. This service listens to all the keyboard interactions and handles them, for instance changing the view of the keyboard as can be seen in Figure 3.1. It uses a broadcast receiver which gets the information sent from the communication service, i.e. the movement and selection events indicated by the smartwatch.
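A hedged sketch of that receiving side follows. The broadcast action and extra names are placeholders invented for the example (not the project's real identifiers), and the reactions are left as comments.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.inputmethodservice.InputMethodService;

public class WatchKeyboardService extends InputMethodService {

    // Hypothetical identifiers; the real project defines its own constants.
    static final String ACTION_WATCH_COMMAND = "com.example.accesswatch.COMMAND";
    static final String EXTRA_COMMAND = "command";

    private final BroadcastReceiver watchReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            String command = intent.getStringExtra(EXTRA_COMMAND);
            if ("RIGHT".equals(command)) {
                // advance the scanning cursor or move the selection to the right
            } else if ("SELECT".equals(command)) {
                // commit the highlighted key through the current InputConnection
            }
        }
    };

    @Override
    public void onCreate() {
        super.onCreate();
        registerReceiver(watchReceiver, new IntentFilter(ACTION_WATCH_COMMAND));
    }

    @Override
    public void onDestroy() {
        unregisterReceiver(watchReceiver);
        super.onDestroy();
    }
}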

The IME lifecycle is different from the Android activity lifecycle, as can be seen in Figures 3.2 and 3.3.


Figure 3.2: Activity lifecycle



Figure 3.3: IME lifecycle

3.1.2 Mouse cursor

It may be easy to think of using something similar to the usual and well-known mouse cursor, see Figure 3.4. Not only can it be more intuitive for some users to use this icon to scroll through the screen and select what they want, but it can also make things easier in some applications or screens where accessibility is somewhat limited. For example, as already mentioned, some elements such as lists are not accessible through software events; using the mouse cursor it may be possible to access them. On the other hand, the screen gets loaded with elements foreign to those of the applications, so the mouse cursor should be optional: the user should be able to enable this option and to show or hide the cursor whenever he wants.

Figure 3.4: Mouse pointer

It is necessary to superimpose the known and characteristic image of the cursor on the rest of the elements that exist on the screen. This image should exist in different sizes so that the correct one is drawn according to the screen span, and it should not be focusable or touchable by users. To select something, the analogue of a mouse click, it is necessary to look for the element the cursor tip is pointing at, the nearest one, and to emulate a screen touch on that point. All these actions are managed by an accessibility service: it draws the pointer, it manages the click action and the movements on screen, updating the mouse cursor position. This service uses, as in the keyboard case, a broadcast receiver to detect the movement and click commands from the communication service. A sketch of the click handling appears below.
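Below is a rough sketch of the click emulation part, assuming the AccessibilityNodeInfo APIs (API level 16 and above). "Nearest element" is simplified here to the deepest clickable node whose bounds contain the pointer tip, and the class name is illustrative.

import android.accessibilityservice.AccessibilityService;
import android.graphics.Rect;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;

public class PointerAccessibilityService extends AccessibilityService {

    // Emulate a "mouse click": act on the clickable node under the pointer tip.
    void clickAt(int x, int y) {
        AccessibilityNodeInfo root = getRootInActiveWindow();
        if (root == null) {
            return;
        }
        AccessibilityNodeInfo target = findClickable(root, x, y);
        if (target != null) {
            target.performAction(AccessibilityNodeInfo.ACTION_CLICK);
        }
    }

    private AccessibilityNodeInfo findClickable(AccessibilityNodeInfo node, int x, int y) {
        Rect bounds = new Rect();
        node.getBoundsInScreen(bounds);
        if (!bounds.contains(x, y)) {
            return null;
        }
        // Depth-first search: prefer the deepest clickable node containing the point.
        for (int i = 0; i < node.getChildCount(); i++) {
            AccessibilityNodeInfo child = node.getChild(i);
            if (child == null) {
                continue;
            }
            AccessibilityNodeInfo hit = findClickable(child, x, y);
            if (hit != null) {
                return hit;
            }
        }
        return node.isClickable() ? node : null;
    }

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) { }

    @Override
    public void onInterrupt() { }
}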

When the custom keyboard view appears on screen, the mouse cursor hides, and it will appear again when the keyboard disappears. This action is managed by the keyboard service. If instead of the application keyboard we use another one, the mouse cursor will not hide and we can use it to click on that keyboard.

3.1.3 Communication service

The communication between the smartphone and the smartwatch is managed by a service which extends the Android WearableListenerService, and by a class which implements the GoogleApiClient.ConnectionCallbacks and GoogleApiClient.OnConnectionFailedListener interfaces.

The communication class acts as a communication interface to be used by the rest of the activities. It allows connecting to the node (smartphone to smartwatch and vice versa), it provides a method to send messages to the corresponding node and another one

to send the user preferences to the smartwatch (this will be explained later). This class also contains all the string constants and values used throughout the application to manage and identify the communication messages. This is a good clean-code practice which avoids some inconsistencies, errors and the so-called “magic strings” [40].

The service is responsible for listening to the received messages and managing them. According to the message path string, the service sends broadcast intents to be received by the corresponding service or activity.
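As an illustration, the following sketch shows a listener of this kind using the Wearable MessageApi available at the time; the message path and broadcast action strings are invented placeholders, not the project's real identifiers.

import android.content.Intent;
import com.google.android.gms.wearable.MessageEvent;
import com.google.android.gms.wearable.WearableListenerService;

public class WatchListenerService extends WearableListenerService {

    // Hypothetical identifiers used only for this example.
    static final String PATH_MOVE = "/accesswatch/move";
    static final String ACTION_WATCH_COMMAND = "com.example.accesswatch.COMMAND";
    static final String EXTRA_COMMAND = "command";

    @Override
    public void onMessageReceived(MessageEvent messageEvent) {
        if (PATH_MOVE.equals(messageEvent.getPath())) {
            // Forward the movement/selection command as a broadcast so the
            // keyboard service or the pointer service can react to it.
            Intent intent = new Intent(ACTION_WATCH_COMMAND);
            intent.putExtra(EXTRA_COMMAND, new String(messageEvent.getData()));
            sendBroadcast(intent);
        } else {
            super.onMessageReceived(messageEvent);
        }
    }
}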

3.1.4 Preferences

Due to the variety of users and their needs and preferences, they should be able to configure several aspects on both devices, the smartwatch and the smartphone: for instance, the possibility of an automatic keyboard scan and the cursor time between keys, the activation and use of the mouse cursor, the sensitivity of movement detection, etc. All these configuration options will be explained later in the use guide.

To make this possible we use a layout with the configuration screen and an activity which manages it. The activity does not just display the layout screen; it is also in charge of listening for any preference change. When the user changes a movement sensitivity, for instance, this activity calls a method of the communication class which reads the preferences and sends them to the smartwatch. It is really important to keep coherence between the smartwatch and the smartphone configuration options: they must be exactly the same at any moment. If the user changes a preference, he must see the change at that moment without having to order any synchronization. For that reason, when the smartwatch and the smartphone get connected, the latter sends all the configuration information to the former. A sketch of this synchronization is shown below.
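A hedged sketch of this synchronization follows. The ConfigurationSender abstraction (standing in for the communication class) and the overall class layout are assumptions made for the example, not the thesis code.

import android.content.Context;
import android.content.SharedPreferences;
import android.preference.PreferenceManager;

public class PreferenceSync implements SharedPreferences.OnSharedPreferenceChangeListener {

    // Stand-in for the communication class described in Section 3.1.3.
    public interface ConfigurationSender {
        void sendConfiguration(SharedPreferences prefs);
    }

    private final SharedPreferences prefs;
    private final ConfigurationSender sender;

    public PreferenceSync(Context context, ConfigurationSender sender) {
        this.prefs = PreferenceManager.getDefaultSharedPreferences(context);
        this.sender = sender;
        this.prefs.registerOnSharedPreferenceChangeListener(this);
    }

    @Override
    public void onSharedPreferenceChanged(SharedPreferences sharedPreferences, String key) {
        // Any change (sensitivity, hand, scan period...) pushes the whole
        // configuration, so both devices always hold exactly the same values.
        sendAllPreferences();
    }

    // Also called right after the smartwatch connects, as described above.
    public void sendAllPreferences() {
        sender.sendConfiguration(prefs);
    }
}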

3.2 Wear

The communication service is exactly the same as in the smartphone case: the same listener service, the same implemented interfaces and structure, so we will not repeat the corresponding description.

3.2.1 Movements detection

Starting from the chosen smartwatch, the sensors that seem most appropriate for the detection of movements are those of linear acceleration and the gyroscope. The difference between acceleration and linear acceleration is that the latter removes the component caused by gravity.


Using only linear accelerometers, the braking of the movements must be kept in mind. That is, moving the arm to the left produces a certain acceleration on the corresponding axis, proportional to the movement performed. However, it is necessary to take into account that ending the movement and braking the arm will produce an acceleration in the direction opposite to the initial movement. Normally, the braking is sharp or abrupt, causing an acceleration peak in the direction opposite to the initial movement. This is easy to check with a few tests. Also, the movements made by any person can be erratic. Although a movement performed by a person's arm, such as moving it upwards, can be clearly identified at first glance, there are small movements different from those intended. That is, when moving the arm upwards as in the previous example, it is possible and not uncommon to make a small downward movement before starting the desired upward one. This previous movement could also be made in another direction, to the right, to the left or any combination of these directions. In the same way, small changes occur in the acceleration or in the direction, deviating from a hypothetical straight line.

To detect arm movements sideways or up and down, it seems intuitive to think of using the linear acceleration sensor and requiring a certain threshold to be exceeded before detecting the movement in the corresponding direction. This threshold is essential to avoid false positives due to small movements of the arm or tremors, as well as small movements in the direction opposite to the desired movement before performing it. It is important to keep in mind that the user is a person and not a machine.

To avoid the above-mentioned problem of deceleration peaks at the end of a movement, different strategies could be employed. It is common in operating systems, concurrent systems or in parallel and distributed programming to make use of semaphores, control variables, timers and similar strategies to solve such problems. A possible solution would be that, once a movement exceeding a certain threshold towards a certain direction is made, some of these strategies are used to avoid detecting movements in another direction. That is to say, if, for example, the arm starts to move upwards exceeding a certain acceleration, only a movement in this direction can be detected for a certain time, and only if another predetermined threshold is exceeded. During that appropriation time, using the analogy of a concurrent system, the remaining directions would be nullified. With a solution like this, it is clear that certain values are very critical for correct operation: both the thresholds and the appropriation time must be set correctly. Otherwise, this time could expire while a person is still performing the movement, and the acceleration peaks would not be handled. By means of tests it would be possible to establish adequate values for some users, but it certainly seems unlikely that the same times would suit all types of people. Very large times could be employed to avoid this, but then the user experience could be poor due to slow operation: certain users would have to wait much longer than desired before making another movement that is detected. This is because we cannot eliminate the appropriation once a certain threshold is exceeded, which would be a solution to this drawback; that is to say, once 9 m/s², for example, is exceeded, it is assumed that this direction is the one desired by the

user. However, again we must take into account the diversity of potential users, for whom this threshold could be very varied. It is true that all these values could be assigned individually to each user through configuration options, but requiring so much configuration from the user for basic use seems unsatisfactory and difficult.

Instead of linear acceleration, the gyroscope can be used. This provides better results than the previous solution and is simpler: not having the problem of acceleration peaks, the previous strategies are not necessary. However, it is not a good solution on its own, because for the desired movements it does not provide good accuracy, since these movements are fairly linear.

Another possible solution is to combine the use of several sensors; the previous two could be used, making use of the linear acceleration and the gyroscope. Through the linear acceleration and the exceeding of a threshold, the movement towards a certain direction is detected, and with the gyroscope this is discriminated to avoid the peaks caused when finishing the movements. After detecting and identifying a movement, a timeout is established during which the sensors are deactivated and no movements are detected: both movements associated with shifting in a direction and those associated with selecting something or a back or cancel action. This timeout exists so that, once a movement is made, for example moving the hand to the right, the user can return to the initial position and make another movement again. If something like this were not used, once the movement was made it could not be repeated without triggering a movement in the opposite direction; in the example, it would be a movement towards the left. A simplified sketch of this detection follows.
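The sketch below keeps only the threshold and lockout parts of the strategy and omits the gyroscope discrimination. The threshold and timeout values are illustrative, the real ones being user-configurable, and the axis-to-direction mapping depends on which hand wears the watch (see Section 3.2.2).

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.os.SystemClock;

public class MovementDetector {

    public interface Listener {
        void onMovement(String direction);  // "LEFT", "RIGHT", "UP" or "DOWN"
    }

    private static final float THRESHOLD = 4.0f;  // m/s^2, illustrative value
    private static final long LOCKOUT_MS = 600;   // illustrative value

    private final Listener listener;
    private long lockedUntil = 0;

    public MovementDetector(Listener listener) {
        this.listener = listener;
    }

    public void onLinearAcceleration(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) {
            return;
        }
        long now = SystemClock.elapsedRealtime();
        if (now < lockedUntil) {
            return;  // still inside the lockout window: ignore braking peaks
        }

        float x = event.values[0];
        float y = event.values[1];

        String direction = null;
        if (x > THRESHOLD) {
            direction = "LEFT";
        } else if (x < -THRESHOLD) {
            direction = "RIGHT";
        } else if (y > THRESHOLD) {
            direction = "UP";
        } else if (y < -THRESHOLD) {
            direction = "DOWN";
        }

        if (direction != null) {
            // Deactivate detection for a while so the user can return to the
            // rest position without triggering the opposite direction.
            lockedUntil = now + LOCKOUT_MS;
            listener.onMovement(direction);
        }
    }
}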

The displacement in the different directions is discrete. Moving in a particular direction produces a shift in the associated direction, as if the corresponding direction key of a d-pad were pressed once. That is, continuous and indefinite movement while an action is being performed is not permitted.

After numerous tests it was concluded that the solution proposed, using the combination of gyroscope and accelerometers, is adequate and has correct accuracy for the proposal. However, there is a clear need to allow the user to adjust the thresholds according to their capabilities: individually adjustable thresholds for each direction (up, down, left and right).

3.2.2 Movements assignment

The default hand on which to wear the smartwatch is the right one. It is important to determine the direction associated with a movement. Let's consider the smartwatch as it can be seen in Figure 3.5, where the device top is the upper side of the figure and the device bottom is the lower side. In Figure 3.6 we can see the difference between using the smartwatch on one hand or the other (both hands pointing up). It is easy to observe that when we use the smartwatch on the left hand, the device top corresponds to a left movement. However, when we use the smartwatch on the right hand, the left movement corresponds to the

device bottom.

Figure 3.5: Reference smartwatch

Figure 3.6: Smartwatch on a left hand (left figure) and on a right hand (right figure)

A really interesting idea is to allow the user to enable and disable movements. For instance, a user can disable the right and left movements and allow only the up and down ones. Another idea is the option of assigning hand movements to screen movements; that is, for instance, when the user moves his hand to the left, he can assign this movement to the up direction. If we combine these two ideas, users have many options to configure movements as they want. The user can assign, on the smartwatch, the enabled directions to the possible movements (up, down, left and right). He can also see on the smartwatch the current movement-direction assignment and, of course, restore the default preferences. Each option, selecting directions and showing them, has its own activity and layout. In both cases it is necessary to distinguish the enabled directions and to read the appropriate preferences, or to send the new direction assignment to the smartphone. A sketch of such a mapping is shown below.
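The sketch below illustrates a configurable assignment of this kind: each detected movement is translated into the screen direction the user assigned to it, with a default mapping that simply mirrors the lateral movements when the watch is worn on the right hand. All names are illustrative assumptions.

import java.util.HashMap;
import java.util.Map;

public class DirectionMapper {

    private final Map<String, String> assignment = new HashMap<String, String>();

    public DirectionMapper(boolean rightHand) {
        restoreDefaults(rightHand);
    }

    // Default assignment; users can overwrite or disable entries later.
    public void restoreDefaults(boolean rightHand) {
        assignment.clear();
        assignment.put("UP", "UP");
        assignment.put("DOWN", "DOWN");
        // With the watch on the right hand, the device top points the other way,
        // so the lateral movements are swapped (see Figure 3.6).
        assignment.put("LEFT", rightHand ? "RIGHT" : "LEFT");
        assignment.put("RIGHT", rightHand ? "LEFT" : "RIGHT");
    }

    // Disable a movement, e.g. keep only up/down for a given user.
    public void disable(String movement) {
        assignment.remove(movement);
    }

    public void assign(String movement, String screenDirection) {
        assignment.put(movement, screenDirection);
    }

    // Returns the screen direction, or null if the movement is disabled.
    public String map(String movement) {
        return assignment.get(movement);
    }
}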


3.2.3 User interface

As we saw before, the user interface on small screens is crucial, even more so if the target audience is focused on people with motor difficulties and movement limitations, so the restrictions are greater. The proposed smartwatch user interface provides three possible ways to interact with it, which can be seen in Figure 3.7.

Figure 3.7: Interaction modes on smartwatch

The three options are directional buttons, swiping and sensors; how to use them is explained in the use manual. All of them include two additional actions: the select and back buttons. Although current smartphones have more buttons, there is no space to include them in an easy-to-use way. The smartwatch has a side button, but its function is to open the home screen and the list of applications, so it is not an option.

Each option has its own layout and activity. The buttons activity simply sends to the smartphone the event corresponding to the pressed button. The swiping option works intuitively: swiping to the left, for instance, sends a left directional event. However, the Android Wear interface uses a rightward swipe as the going-back action, so it is disabled while this option is selected; a toast message shown when the screen starts tells the user that a double tap is needed to go back instead of the usual gesture. The sensors option is specially designed for the mentioned target audience, so it only includes a button for the select action. This button fills the whole screen to facilitate its use without fine precision. To trigger the back action the user can choose between two options: shaking the smartwatch or turning the hand 90 degrees.
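A minimal sketch of how the buttons activity could forward its events to the phone is shown below, using the Wearable MessageApi available in Google Play services. The layout resource, the "/event" path and the payload strings are illustrative assumptions.

```java
import android.app.Activity;
import android.os.Bundle;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.NodeApi;
import com.google.android.gms.wearable.Wearable;

public class ButtonsActivity extends Activity {

    private GoogleApiClient apiClient;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_buttons);   // hypothetical layout with six buttons
        apiClient = new GoogleApiClient.Builder(this).addApi(Wearable.API).build();
        apiClient.connect();
        // Each button's click listener calls sendEvent("UP"), sendEvent("SELECT"), etc.
    }

    /** Sends one event ("UP", "DOWN", "LEFT", "RIGHT", "SELECT" or "BACK") to the phone. */
    private void sendEvent(final String event) {
        Wearable.NodeApi.getConnectedNodes(apiClient).setResultCallback(
                new ResultCallback<NodeApi.GetConnectedNodesResult>() {
                    @Override
                    public void onResult(NodeApi.GetConnectedNodesResult result) {
                        for (Node node : result.getNodes()) {
                            Wearable.MessageApi.sendMessage(apiClient, node.getId(),
                                    "/event", event.getBytes());
                        }
                    }
                });
    }

    @Override
    protected void onDestroy() {
        apiClient.disconnect();
        super.onDestroy();
    }
}
```

A companion service on the smartphone side would listen for these messages and translate them into keyboard or cursor actions.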

3.3 Tests and debugging

It is common in software development to create and use tests that verify the correct functioning of the implemented features, recreating every interaction and situation to check that the behaviour is appropriate. Emulators such as the Android Emulator (bundled with Android Studio, the official integrated development environment for the Android platform) or Genymotion allow emulating movements, location, connections, etc.

The problem lies in the erratic movements typical of a person, already described above. It was therefore preferred to recreate the conditions and movements on physical hardware, in order to obtain more realistic situations on which to evaluate the results. Although this provides an environment closer to the actual use that users will make of the app, it makes it harder to debug and to acquire data from the smartwatch.

By means of tests, the various thresholds needed for the correct operation of the application were established. Differences were also discovered between certain movements that might look the same: for example, the amplitude and speed of the hand moving downward are not the same as when the movement is performed upwards, and the same happens with the movements to the left and to the right. However, as has been said, the thresholds and the sensitivity of the movements can be modified individually by the user.


4 Use manual

In this chapter we explain the characteristics of the app and how to use it, covering both the mobile part and the wear part.

Figure 4.1: App icon

4.1 Mobile

Figure 4.1 shows the app icon; pressing it launches the application. The main screen will appear, see Figure 4.2, and Bluetooth will be automatically enabled if it is available.

Figure 4.2: Application main screen


This main screen provides usage information; nevertheless, more information is available on the about screen, see Figure 4.3.

Figure 4.3: About application message

In Figure 4.3 a small red message can be seen at the bottom of the screen saying that the app is stopped. This is because it is necessary to enable the app and to select our own custom keyboard. Two checkboxes allow performing these actions, and a third checkbox enables the mouse cursor option. When the keyboard-enabling checkbox is pressed, the system screen for enabling virtual keyboards appears, where we enable the AccessWatch keyboard, as can be seen in Figure 4.4.

Figure 4.4: Enabling keyboard


Figure 4.5 shows what happens when the keyboard-selection checkbox is pressed. It is only available once the keyboard has been enabled. We need to select the AccessWatch keyboard in order to use it.

Figure 4.5: Selecting keyboard

Once it is selected, the message at the bottom of the screen changes to “App running” and turns green. A notification also appears on the smartphone indicating that the app is running, as can be seen in Figure 4.6. Pressing the notification, which cannot be removed, opens the app main screen; this is useful to access it quickly and change the preferences. The screen-off timeout is also set to its maximum value to prevent the screen from turning off while the smartphone is being used through the smartwatch.

Figure 4.6: Notification application running


The main app screen also contains a button that launches a usage test screen, see Figure 4.7. It provides several interaction elements commonly found in apps, such as checkboxes, radio buttons, a spinner, edit texts (one allowing any type of text and another allowing only numbers), buttons, a time picker and a date picker. The purpose of this screen is to allow the user to try out the app and its preference configuration in order to adjust it.

Figure 4.7: Usage test screen

4.1.1 Keyboard

The application keyboard is a fully functional and complete keyboard, based on the layout and characters of Google's Gboard keyboard; Figure 4.8 shows the full keyboard. It is activated at the same times and in the same circumstances as that keyboard, and if it is visible on the screen and the back button is pressed it disappears. On the application keyboard one character is shown in a different colour (yellow): this is the character selected at that moment. Moving in a direction using the smartwatch moves the yellow selector in that direction.

Figure 4.8: Application keyboard

The displacement on the keyboard is cyclic, alternating row or column according to the direction of the displacement. For example, if the selector is in the first row (the top one) and the last column (the one on the right), moving to the right places it in the second row and the first column, see Figure 4.9; moving to the left then returns it to the previous character, see Figure 4.10. When the selector is in the first or last row and moves beyond it, the column changes. For example, in the first row and second column, moving up switches to the last row and first column, see Figure 4.11; in the last row and second column, moving down switches to the first row and third column, see Figure 4.12.

Figure 4.9: Keyboard rightward movement selection

Figure 4.10: Keyboard leftward movement selection

Figure 4.11: Keyboard upward movement selection

Figure 4.12: Keyboard downward movement selection
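The cyclic behaviour described above can be captured in a few lines. The sketch below assumes, for simplicity, a rectangular grid of keys (the real keyboard rows have different lengths) and is only an illustration of the wrapping rules, not the actual implementation.

```java
public class KeySelector {
    private final int rows;
    private final int cols;
    private int row, col;     // currently highlighted key (starts at the top-left key)

    public KeySelector(int rows, int cols) { this.rows = rows; this.cols = cols; }

    public void move(String direction) {
        switch (direction) {
            case "RIGHT":
                col++;
                if (col == cols) { col = 0; row = (row + 1) % rows; }       // wrap into next row
                break;
            case "LEFT":
                col--;
                if (col < 0) { col = cols - 1; row = (row - 1 + rows) % rows; }
                break;
            case "DOWN":
                row++;
                if (row == rows) { row = 0; col = (col + 1) % cols; }       // wrap into next column
                break;
            case "UP":
                row--;
                if (row < 0) { row = rows - 1; col = (col - 1 + cols) % cols; }
                break;
        }
    }

    public int getRow() { return row; }
    public int getCol() { return col; }
}
```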

Taking advantage of this cyclical construction, these displacements can also be carried out automatically at regular intervals. This possibility can be activated and deactivated by the user in the application settings, where the time the selector takes to move, that is, the time between switching from one key to the next, can also be chosen from a set of possibilities. The user then only needs to press the selection button when the selector is on the desired character.
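This automatic scanning can be implemented with a simple repeating task on the main thread. The sketch below reuses the KeySelector class from the previous sketch and is, again, only an illustration; the interval values mirror the options offered in the settings.

```java
import android.os.Handler;
import android.os.Looper;

public class KeyScanner {
    private final Handler handler = new Handler(Looper.getMainLooper());
    private final KeySelector selector;
    private final long intervalMs;          // 500, 750, 1000, 1500 or 2000 ms
    private boolean running;

    private final Runnable step = new Runnable() {
        @Override
        public void run() {
            if (!running) return;
            selector.move("RIGHT");         // advance to the next key, wrapping cyclically
            // The keyboard view would be redrawn here, highlighting
            // selector.getRow() / selector.getCol().
            handler.postDelayed(this, intervalMs);
        }
    };

    public KeyScanner(KeySelector selector, long intervalMs) {
        this.selector = selector;
        this.intervalMs = intervalMs;
    }

    public void start() { running = true; handler.postDelayed(step, intervalMs); }
    public void stop()  { running = false; handler.removeCallbacks(step); }
}
```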

4.1.2 Mouse cursor

The mouse cursor overlaps the screens of the various applications and of the system. It can be enabled from the main screen of the application on the smartphone by pressing the corresponding checkbox, after which it is necessary to activate the corresponding service, see Figure 4.13. This step is only required on the most recent versions of Android; on older ones it is not necessary to ask the user to activate or allow anything. Once the mouse cursor is activated, it is what moves across the screen when the user indicates it, either by performing one of the corresponding movements or through the buttons.

Figure 4.13: Enabling mouse cursor

When the keyboard appears on the screen the mouse cursor disappears, and it reappears when the keyboard is dismissed. Since the user may want to use the control without the cursor, it is also possible to show and hide it by placing the arm vertically with respect to the ground. Like the movements, the cursor displacement is discrete: the distance travelled with each movement is configurable by the user among a series of preset options. A clear drawback is that when the pointer reaches the edge of the screen it does not scroll even if there is more content in that direction. The mouse cursor over a regular screen can be seen in Figure 4.14.


Figure 4.14: Mouse cursor
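One common way to draw such a pointer on top of other applications is an overlay window, sketched below. On recent Android versions this requires the "draw over other apps" permission, which matches the extra activation step mentioned above; the class name, step handling and pointer view are illustrative assumptions, and the actual app may rely on a different service.

```java
import android.content.Context;
import android.content.Intent;
import android.graphics.PixelFormat;
import android.net.Uri;
import android.os.Build;
import android.provider.Settings;
import android.view.Gravity;
import android.view.View;
import android.view.WindowManager;

public class CursorOverlay {
    private final WindowManager windowManager;
    private final View pointerView;
    private final WindowManager.LayoutParams params;
    private final int stepPx;               // distance moved by each gesture (user preference)

    public CursorOverlay(Context context, View pointerView, int stepPx) {
        this.windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        this.pointerView = pointerView;
        this.stepPx = stepPx;
        int type = Build.VERSION.SDK_INT >= 26
                ? WindowManager.LayoutParams.TYPE_APPLICATION_OVERLAY
                : WindowManager.LayoutParams.TYPE_PHONE;
        params = new WindowManager.LayoutParams(
                WindowManager.LayoutParams.WRAP_CONTENT,
                WindowManager.LayoutParams.WRAP_CONTENT,
                type,
                WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE
                        | WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE,
                PixelFormat.TRANSLUCENT);
        params.gravity = Gravity.TOP | Gravity.START;
    }

    /** Shows the pointer; on Android 6.0+ the overlay permission must be granted first. */
    public boolean show(Context context) {
        if (Build.VERSION.SDK_INT >= 23 && !Settings.canDrawOverlays(context)) {
            Intent intent = new Intent(Settings.ACTION_MANAGE_OVERLAY_PERMISSION,
                    Uri.parse("package:" + context.getPackageName()));
            intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
            context.startActivity(intent);    // ask the user to enable the permission
            return false;
        }
        windowManager.addView(pointerView, params);
        return true;
    }

    public void hide() { windowManager.removeView(pointerView); }

    /** Discrete displacement: each detected movement shifts the pointer one step. */
    public void move(int dx, int dy) {
        params.x += dx * stepPx;
        params.y += dy * stepPx;
        windowManager.updateViewLayout(pointerView, params);
    }
}
```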

4.1.3 Settings preferences

The user can modify and select several settings. Figure 4.15 shows the available preference options; each option and its function is explained below.

Figure 4.15: Application settings preferences


• Hand. Indicates the hand on which the smartwatch is worn; it is used to detect movements correctly, see Figure 4.16. By default the right hand is used. Values:
  – Left.
  – Right.

Figure 4.16: Application hand settings

• Sensitivity. Sensitivity assigned to the directions. Each direction corresponds to a movement and the sensitivity is assigned individually to each one (upward, downward, leftward and rightward), see Figure 4.17. Values:
  – Very low.
  – Low.
  – Medium.
  – High.
  – Very high.

• Back button. The user can trigger the back button in two different ways: shaking the hand or turning it ninety degrees outward, see Figure 4.18. Values:
  – Shake.
  – Rotate.


Figure 4.17: Application leftward sensors sensibility preference

Figure 4.18: Application back button preference

• Sensors activation. Each direction can be individually turned on or off. This can be useful for users who are only interested in some directions, and it helps avoid false positives caused by unwanted movements. It can be seen in the centre image of Figure 4.15.

• Reverse up/down. As is usual with joysticks, it allows exchanging the movements corresponding to the up and down directions. It can be seen in the centre image of Figure 4.15.


• Select sensors direction. This option allows users to configure as they wish the relationship between the movements made and the directions that correspond to them on the smartphone. Together with activating and deactivating the different directions, it allows users to configure the movements entirely to their liking; for example, activating only the left and right directions and making them correspond to the up and down movements of the hand. When the user presses this option, the different directions with their movement options are displayed on the watch and the user chooses the correspondence as desired, see Figure 4.19. The enabled directions are shown one by one. The directions of the available movements are shown in orange and, once selected, turn blue. In the example image, the user is asked for the up direction and chooses the triangle, which turns blue, after which the next enabled direction is requested. From then on the up triangle cannot be selected any more, leaving three options for the remaining directions.

Figure 4.19: Select directions on smartwatch

• Show sensors direction. Given the above preference options, the user may not be sure of, or may not remember, the correspondence between the movements and the directions assigned on the smartphone. Clicking this option displays it on the smartwatch; see Figure 4.20 for the result. To go back from this screen the user just needs to press anywhere on the smartwatch screen. As mentioned before, the hand used is taken into account, so the message is displayed in the corresponding orientation so that the user can read it properly.

Figure 4.20: Show directions assigned on smartwatch


• Keyboard scanning.
  – Enable keyboard scanning.
  – Time. Scrolling time between the different keyboard keys if scanning is activated. Values:
    ∗ 0.5 seconds.
    ∗ 0.75 seconds.
    ∗ 1 second.
    ∗ 1.5 seconds.
    ∗ 2 seconds.

Figure 4.21: Application keyboard scanning time settings

• Default preferences. Restores the default preferences: right hand, medium sensitivity in every direction, back button by rotation, all directions enabled, reverse up/down off and keyboard scanning disabled.

4.2 SmartWatch

Now we will explain some usage characteristics corresponding to the smartwatch. It must be synchronised with the smartphone through Google's Android Wear app; after that, the AccessWatch app can be used. Figure 4.22 shows its main screen with the three available options: buttons, swipe and sensors, each accessible through its own button on the main screen.

The swipe option allows the user to scroll in the desired direction by sliding over the smartwatch screen (up, down, left or right). Both the buttons and the swipe options have only two selection buttons: one to select or access, and one to cancel (the back button). On the swipe screen each button takes half the screen, whereas the sensors option has only a select button, which occupies the entire screen to facilitate access for any user with possible mobility limitations.

Figure 4.22: Main, buttons, swipe and sensors screens on smartwatch app

In general smartwatch use, a rightward swipe is used to go back or to leave an application. In the swipe option this gesture should correspond to rightward scrolling instead of exiting the app, so this screen disables this usual functionality of the smartwatch. When the screen of this option starts, the user is instructed to double tap to go back, as seen in Figure 4.23.

Figure 4.23: Get out of swipping screen on smartwatch app
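A possible implementation of this behaviour is sketched below: swipe-to-dismiss is assumed to be disabled for this activity (for example through its theme), and a gesture detector finishes the activity on a double tap. Names and resources are illustrative, not the actual thesis code.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.widget.Toast;

public class SwipeActivity extends Activity {

    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_swipe);   // hypothetical layout
        Toast.makeText(this, "Double tap to go back", Toast.LENGTH_LONG).show();
        gestureDetector = new GestureDetector(this,
                new GestureDetector.SimpleOnGestureListener() {
                    @Override
                    public boolean onDoubleTap(MotionEvent e) {
                        finish();            // leave the swipe screen
                        return true;
                    }
                });
    }

    @Override
    public boolean dispatchTouchEvent(MotionEvent event) {
        gestureDetector.onTouchEvent(event);    // feed every touch to the detector
        return super.dispatchTouchEvent(event);
    }
}
```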

The third option, sensors, has only one button, as mentioned, so it is enough to touch the smartwatch screen at any point. To trigger the back button there are two options: shaking the wrist or turning it ninety degrees outward; the user can choose which to use from the configuration options. When using this sensors option, the screen momentarily changes colour according to the action performed to indicate that it has been detected: when the screen is pressed (the select button) it turns green, whereas when the back action is activated the screen turns red, as can be seen in Figure 4.24.


Figure 4.24: Select and back buttons activation using sensors

When a movement in a certain direction is detected, the screen turns blue on the side corresponding to the direction of the movement, as can be seen in Figure 4.25. In addition, whenever any of these actions is performed, the smartwatch vibrates lightly.

Figure 4.25: Sensor activation screen
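The two back gestures mentioned above can be detected with simple heuristics, as the following illustrative sketch shows: a shake as a high linear-acceleration magnitude, and the ninety-degree turn as gravity moving onto the device's x axis. The thresholds and axis choice are assumptions and would need tuning on the real device.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class BackGestureDetector implements SensorEventListener {

    public interface Callback { void onBack(); }

    private final boolean useShake;          // user preference: shake or rotate
    private final Callback callback;
    private long blockedUntil;

    private static final float SHAKE_THRESHOLD = 15f;   // m/s^2, illustrative
    private static final float ROTATE_THRESHOLD = 8f;   // gravity mostly on the x axis
    private static final long TIMEOUT_MS = 1000;

    public BackGestureDetector(SensorManager sm, boolean useShake, Callback callback) {
        this.useShake = useShake;
        this.callback = callback;
        int type = useShake ? Sensor.TYPE_LINEAR_ACCELERATION : Sensor.TYPE_GRAVITY;
        sm.registerListener(this, sm.getDefaultSensor(type), SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        long now = System.currentTimeMillis();
        if (now < blockedUntil) return;
        float x = event.values[0], y = event.values[1], z = event.values[2];
        boolean triggered;
        if (useShake) {
            triggered = Math.sqrt(x * x + y * y + z * z) > SHAKE_THRESHOLD;   // sharp shake
        } else {
            triggered = Math.abs(x) > ROTATE_THRESHOLD;   // wrist turned roughly 90 degrees
        }
        if (triggered) {
            blockedUntil = now + TIMEOUT_MS;   // avoid repeated back events
            callback.onBack();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```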


5 Conclusions

5.1 Possible improvements and future study lines

As with any mobile application, it is necessary to observe its practical use in a real environment with different circumstances and users. The feedback they provide makes it possible to correct errors, add improvements and functionalities, and modify or remove features that may interfere with correct use.

The custom keyboard opens up a series of possible improvements, with Google's Gboard keyboard as an example: it provides the option of including a spelling checker. Another interesting improvement would be to predict or suggest words, or even sentences, from the previously written words, which introduces the idea of learning from the user's behaviour. Using some kind of machine learning it might be possible to predict the next words the user intends to introduce.

Continuing with the custom keyboard idea, predefined sentences can sometimes be appropriate in certain circumstances, and selecting one can be faster than writing it: users do not need to type a large number of characters to produce a desired message, only to select it. These sentences could be predefined according to usual expressions, or they could be added and modified by users according to their needs, although more configuration freedom implies more complexity and a higher probability of error. The predefined-sentences idea and the incorporation of machine learning can even be complementary and coexist, applying machine learning to the suggested predefined sentences. Both would also require a study of the user interface to keep it simple and accessible.

As we saw before (see Table 2.1.1), Android is the most popular smartphone operating system, installed on almost 80% of devices; the second one is iOS (the operating system of Apple's devices), present on almost 18% of smartphones, while the remaining operating systems have very low market shares of 1% or less. If the proposed solution were also implemented to work on Apple devices, it would cover around 98% of current smartphones and would be available to almost all users. It is interesting to study not only the implementation on iOS, but also the interoperability between devices running this operating system (iPhone, Apple Watch and iPad) and Android ones, which would allow users to employ the proposed app with an Apple Watch and an Android smartphone, or vice versa.

Of course, all future developments should also improve the existing functionality and user interface where possible or deemed necessary. To carry this out it will be important to pay attention to user feedback, looking for suggestions and error reports.

5.2 Conclusions

Recalling our objective, it was to use a wearable device, a smartwatch, as an input device that allows interacting with the mobile device as a joystick or other external device would. Although the app is intended to be usable by a large number of people, the target audience on which the study mainly focuses is people with motor conditions that make normal use of a smartphone or tablet difficult.

We employed an Asus ZenWatch 2 smartwatch to develop a proposal that fulfils the set objective. The chosen development technology is Android, due to the high percentage of smartphones and smartwatches on the market that use it, including the one employed for the prototype, and because it is open source and free to develop for. The main idea is to use the smartwatch as an input device through gestures, moving the hand as if it were a joystick. The proposed solution employs two sensors of the smartwatch, the gyroscope and the linear accelerometer, to detect these movements. Several other options using different strategies and sensors were also studied; however, the chosen solution was the most effective at detecting the movements and provides a simple approach. The user can configure many aspects of the app, adapting it to his or her needs and wishes. This high level of configurability allows the app to be used in various ways and adapted to the user, although it introduces more complexity and potential points of failure. Users can configure several features such as the hand on which the watch is worn, the enabling and sensitivity of each direction, and the way the back button is activated, and can even customise the relationship between the direction of the hand movement and the direction on screen. These configuration options are useful when the sensors interaction mode is used, but it is also possible to use the direction buttons or to swipe over the smartwatch screen.

The app includes its own custom keyboard, which allows the user to select keys and type text by moving through it. It also offers automatic movement through the keys, so the user only needs to press select when the desired key is highlighted. Also noteworthy is the mouse cursor, which overlays the rest of the screen elements; it is handled just like the previous cases and can be hidden (or shown) with a simple hand gesture.


Not all apps and aspects of the smartphone are reachable, owing to operating system limitations, but the proposed solution fulfils the proposed objective to a large degree.

