MASARYK UNIVERSITY FACULTY OF INFORMATICS

Testing techniques for mobile device applications

DIPLOMA THESIS

Bc. Radim Göth

Brno, 2015

Declaration

Hereby I declare that this thesis is my original authorial work, which I have worked out on my own. All sources, references and literature used or excerpted during the elaboration of this work are properly cited and listed with complete reference to the due source.

Brno, ______

Bc. Radim Göth

Advisor: Mgr. Petr Neugebauer, MBA


Acknowledgement

I would like to express my gratitude to my supervisor Mgr. Petr Neugebauer, MBA for the opportunity to develop this master's thesis within Y Soft Corporation and for useful comments and help during the elaboration of the thesis. Furthermore, I would like to thank Tomáš Kuba for consultations, support and advice. I am also very thankful to my family and close friends for their patience and support.


Abstract

This thesis investigates approaches to the validation and verification of mobile applications. It describes mobile operating systems, application types and the mobile test environment, followed by various techniques, tools and guidelines for mobile application testing. The practical part of the thesis describes the development and testing of the SafeQ Terminal Demo mobile application for Y Soft Corporation. This application serves to demonstrate the behavior of YSoft SafeQ on mobile devices.


Keywords

Mobile application, testing, Android, iOS, Windows Store, software quality, YSoft SafeQ.


Contents

1 Introduction
2 Mobile application development options
  2.1 Native applications
  2.2 Web applications
  2.3 Hybrid applications
3 Mobile operating systems
  3.1 Android
  3.2 iOS
  3.3 Windows & Windows Phone
4 Mobile testing approaches and techniques
  4.1 Test equipment
    4.1.1 Real devices
    4.1.2 Simulators and emulators
    4.1.3 Mobile device cloud
    4.1.4 Summary
  4.2 Test levels
    4.2.1 Component testing
    4.2.2 Integration testing
    4.2.3 System testing
    4.2.4 Acceptance testing
  4.3 Test automation
  4.4 Continuous integration
5 Mobile application qualities
  5.1 Reliability
    5.1.1 Interruptions
    5.1.2 Networks
  5.2 Transferability
    5.2.1 Installation
    5.2.2 OS updates
  5.3 Security
    5.3.1 Data storage
    5.3.2 Transport security
    5.3.3 Binary protection
  5.4 Usability
  5.5 Performance
    5.5.1 Launch time
    5.5.2 Responsiveness
    5.5.3 Battery life
    5.5.4 Network usage efficiency
  5.6 Application store compliance
6 Testing tools
  6.1 Experitest
  6.2 Ranorex
  6.3 Calabash & Xamarin Test Cloud
  6.4 Perfecto mobile
  6.5 IBM Mobile Quality Assurance
  6.6 Testing tools overview
  6.7 Automated validation tools
7 Testing guidelines
8 SafeQ Terminal Demo
  8.1 Motivation
  8.2 Analysis
    8.2.1 Stakeholders
    8.2.2 Requirements
    8.2.3 Solution design
    8.2.4 Functional requirements specification
    8.2.5 Non-functional requirements specification
  8.3 Architecture
  8.4 Functional testing
  8.5 Non-functional testing
    8.5.1 Performance
    8.5.2 Transferability
    8.5.3 Security
    8.5.4 Reliability
    8.5.5 Usability
    8.5.6 Windows Store compliance
9 Conclusion
10 Bibliography
Appendix A – User stories
Appendix B – SafeQ Terminal Demo screenshots


1 Introduction

Mobile application testers and developers have to adapt to an ever-changing mobile environment and market trends. As reported by Crittercism [1], there are 2,582 device models, 106 mobile OS versions and 691 mobile carriers, which results in an enormous number of distinct environments for testing. New technologies appear more often than ever before, and they are immediately used for business. A new mobile operating system was even launched during the writing of this thesis (see http://www.ubuntu.com/phone). Cellular networks have improved from the 3rd to the 4th generation. Positioning services are now common, as well as voice control and many other features. Screen sizes vary between one and a half inches on smartwatches and more than fifteen inches on tablet PCs. Computational power is still lower than that of a traditional PC, and developers always have to take power consumption into account. Nevertheless, smooth application behavior without delays or freezes on every available device is a standard requirement.

The large variety of devices and high user demands make mobile testing more interesting and difficult. In addition, smartphones and tablets are not suitable for long writing or other demanding tasks; for this reason, mobile applications are much more challenging for designers and UX specialists. New versions of mobile operating systems come out more frequently than we are used to with personal computers, so even project managers have a harder job, because they must reconcile release dates with OS updates.

The word mobile changes its meaning over time. A decade ago, a mobile device meant a laptop or a feature phone. Now, laptops are not widely considered true mobile devices; other devices, such as wearables or smartphones, are much more mobile. On the other hand, applications that were originally designed for handhelds are now ordinarily installed on devices that are not mobile at all, such as televisions or information kiosks. We started to call them smart. A coming trend is one application suitable for all devices equipped with a display – smartwatches, smartphones, tablets, PCs or smart TVs. The differences between some of these devices are unrecognizable: some smartphones are the same size as tablets, and some tablets, after connecting a keyboard, are fully-fledged laptops. The reason for using mobile applications on all of these devices is obvious – one code base and one layout for multiple devices. When mobile OS designers solved the problems with different screen sizes and various hardware on smartphones, it was easy to use the same patterns on every kind of smart device.

In the theoretical part, we deal with the test environment, testing techniques and tools only for smartphone and tablet applications.

The practical part describes the development and testing of the SafeQ Terminal Demo mobile application for Y Soft Corporation. This application targets the Windows Store and Windows Phone platforms and is built on top of a portable class library (PCL). Therefore, it can be used with the multiplatform framework Xamarin as a base for Android or iOS SafeQ Terminal Demo applications.

Because of that, the whole thesis aims mainly at common testing techniques and tools which can be used independently of the target platform.


2 Mobile application development options

Mobile applications can be developed in several ways. The chosen option is important from the testing perspective, because many tools and test cases depend on the approach used for development. All testing techniques and tools mentioned in this thesis aim mainly at native or hybrid applications, both of which can be downloaded from an application store.

2.1 Native applications

These applications are developed with platform-specific SDKs and languages, which are commonly Java or C++ for Android, Objective-C or Swift for iOS, and C# or C++ for Windows Phone.

A native application can use all phone peripherals and resources, like GPS, NFC, Bluetooth, accelerometer, compass, etc. It can access the contact list, gallery or user media files. These applications are usually installed and updated from the platform-specific application store. Native applications are limited only by the OS API and application store policies. They usually provide the best performance [2].

2.2 Web applications

This category refers to standard web applications developed using traditional web technologies (HTML, CSS, JavaScript) and server-side code in Node.js, PHP, ASP.NET, etc. These web applications are suitable for browsing on smartphones or tablets, and the user needs a browser and an Internet connection to use them.

There are two main approaches to making a web application suitable for handheld devices. The first approach uses responsive design, which dynamically reorders the page layout. This is supported by new features introduced in CSS 3 [3].

The second approach uses information from the user's HTTP request to determine whether the client uses a mobile web browser. If so, the customer is redirected to a web site tailored for handheld devices [4].
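A minimal sketch of this detection, assuming an ASP.NET MVC server; the keyword list and the redirect target are illustrative assumptions, not a complete solution:

using System;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Illustrative substrings that suggest a mobile browser.
    private static readonly string[] MobileKeywords =
        { "Android", "iPhone", "iPad", "Windows Phone" };

    public ActionResult Index()
    {
        string userAgent = Request.UserAgent ?? string.Empty;
        foreach (string keyword in MobileKeywords)
        {
            if (userAgent.IndexOf(keyword, StringComparison.OrdinalIgnoreCase) >= 0)
            {
                // Mobile browser detected - redirect to the mobile site.
                return Redirect("http://m.example.com");
            }
        }
        return View(); // Desktop browser - serve the standard page.
    }
}

In production, a maintained device detection library would be preferable to a hand-written keyword list.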

Even web applications can use some device resources provided by the web browser – in most cases, it is possible to use GPS, camera or accelerometer, but not NFC or Bluetooth [5]. Communication with other applications is also restricted by browser functionality. Because of that, a web application cannot use the phone contact list or messages.

Web applications are not installed from application stores, and they do not provide as good performance as native ones.

2.3 Hybrid applications

This is a mix of the previous technologies. Hybrid applications are often built for multiple platforms, so their main attribute and advantage is portability. There are several leading approaches to building hybrid applications; the most often used cross-platform frameworks utilize web technologies [6]. The difference between such a hybrid application and a web application is that the hybrid one has a native application package and is often installed from the platform-specific application store.

Widely used cross-platform frameworks provide a container for running a web application written usually in JavaScript. This approach is used by Apache Cordova (https://cordova.apache.org/), but more frameworks work similarly. Using several JavaScript libraries and the Cordova API, gesture control, battery status checks and some other platform-dependent features can be implemented [7]. Another approach to achieving transferability of a mobile application is using Mono, a multiplatform open source implementation of the Microsoft .NET framework based on the ECMA standards for C# and the Common Language Runtime [8]. The code is written in C# and is compiled to platform-specific binaries using Mono or distributed along with the Mono runtime library. One of the frameworks built on Mono is Xamarin, which will be discussed further in section 6.3.
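For illustration, a minimal sketch of the Mono-based approach: a hypothetical Xamarin.Android activity written in C# and compiled to a platform-specific package. Only the thin UI layer is platform specific; shared logic would typically live in a common C# library:

using Android.App;
using Android.OS;
using Android.Widget;

// A C# activity compiled by Xamarin/Mono into an Android package.
[Activity(Label = "HelloXamarin", MainLauncher = true)]
public class MainActivity : Activity
{
    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);
        // The UI can be built in code or inflated from Android XML
        // layouts; business logic shared with other platforms would
        // live in a portable class library.
        var label = new TextView(this) { Text = "Hello from C#" };
        SetContentView(label);
    }
}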

Hybrid applications are not fully platform independent, but most of them cover at least Android and iOS [9]. The range of available APIs depends on the framework used. In general, hybrid frameworks provide better and wider usage of device resources than web applications, but less than native ones. If a new feature appears in an OS update, it takes some time until the hybrid framework developers implement it. The quality of the UI, security and performance is also often worse than in native applications [10].


3 Mobile operating systems

Good knowledge of mobile operating systems and the application environment is essential for every mobile application tester. This chapter briefly presents the major mobile operating systems: Android, iOS, and Windows with Windows Phone.

3.1 Android

Android is the fastest growing OS in terms of popularity and the most widespread mobile operating system [11]. Android is owned and managed by the Open Handset Alliance – an industry consortium for creating open hardware, software and telecommunication standards for mobile devices. The consortium is led by Google [12].

Since Android is an open source Linux-based project, most manufacturers and cellular carriers take advantage of that and modify the operating system to suit their hardware, increasing the complexity of the system. This fact, along with slow adoption of new versions, makes Android the most fragmented mobile OS, which increases test costs and complexity [13]. There were 18,796 distinct Android devices in 2014, compared to 11,868 in 2013 [56]; different devices owned by users constitute device fragmentation, while different OS versions constitute platform fragmentation. A new Android OS version has to go through multiple levels of testing and approval with the various device manufacturers and wireless service providers before it is delivered to the users [16], so some users get OS updates even several months after release. However, many testing tools and frameworks target primarily Android due to its popularity.

3.1.1 Publishing

The main channel for application distribution is Google Play, but users can also download applications from other stores like Samsung Apps or the Amazon Appstore.

Access to the Google Play Developer Console (https://play.google.com/apps/publish/) is needed for publishing applications on Google Play. Registration is done via a valid Gmail ID and costs $25. It is recommended to read the Google Play Launch Checklist before submitting an application to Google Play. This checklist helps developers get an application ready for publishing. It contains information about the Google Play policies, recommendations for testing, design guidelines, and tips for other decisions that should be taken into consideration before submitting. The whole checklist can be found at the Google Play developer site [14].

Google uses a fully automated approval process, which analyzes every new application or application update with Bouncer, a program for detecting malicious applications and suspicious developer accounts [15]. Google also relies on users, who can report misbehaving applications, and every application can be remotely uninstalled by Google [16]. The whole approval process does not take much time; an application is usually ready to be downloaded from the store after only two hours, which facilitates application updates [17].

3.2 iOS

iOS is a mobile operating system developed and owned by Apple Inc. It is a closed source, UNIX-like operating system based on Darwin (BSD) and OS X [18]. iOS was originally developed for the iPhone, but now it also runs on the iPad, iPod Touch and Apple TV. Other manufacturers are not licensed to use iOS; therefore, only Apple-made devices can run it.

Most iOS users (97%) have iOS version 7 or the latest version 8 [19]. As a result, testing on all available devices is not as difficult as on Android. OS fragmentation manifests mainly in different OS feature sets (some features are not available on some devices, e.g. Siri on the iPad 2) [20]. On the other hand, Apple devices are among the most expensive, and their purchase increases testing costs.

The standard integrated development environment (IDE) for native applications is Xcode, which can be installed only on Apple's operating system OS X.

3.2.1 Publishing

The App Store is the only channel for application distribution. To publish an application on the App Store, an Apple ID and an iOS developer account are required. Apple offers two types of accounts – individual and enterprise.


In both cases, the sign-up fee is $99 per annum. For identity verification of a company, it is necessary to have a Dun & Bradstreet (D-U-N-S) number (http://www.dnb.com/get-a-duns-number.html).

Applications submitted to the App Store have to pass through one of the toughest approval processes. An application should meet the App Store Review Guidelines (https://developer.apple.com/app-store/review/guidelines/) and the iOS Human Interface Guidelines (https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/). Special emphasis is put on user privacy, look and feel, content and functionality [19]. An application on the App Store should be useful or unique and meet high requirements on quality. The whole application approval process takes between four and eleven days (see http://appreviewtimes.com/). During this process, the application goes through automatic and manual tests. If an application is rejected, the developers receive the reasoning why, which can be very useful feedback.

3.3 Windows & Windows Phone

With Windows 8, Microsoft moved the Windows OS towards mobile devices. Windows 8.1, the current version, is able to run on PCs and tablets with x86 or ARM architecture (Microsoft has a special version for ARM devices called Windows RT). Moreover, Microsoft has the Windows Phone operating system especially for smartphones. The two platforms are about to converge in Windows 10, which is currently in beta.

Microsoft has two types of application projects – Windows Store 8.1 applications for PCs or tablets and Windows Phone 8.1 applications for smartphones. These two project types share a lot of APIs, tools and design principles. Therefore, they were partially merged into Windows Universal Applications, which enable code sharing between Windows Store and Windows Phone application projects.

Microsoft is third in smartphone OS market share, far behind Android and iOS. Furthermore, most Windows Phone users have cheap low-end handhelds [21], which corresponds with the fact that 71% of Windows Phone applications are installed on low-memory devices (512 MB of memory or less) [22].


3.3.1 Publishing

The Windows Store and the Windows Phone Store are the exclusive application distribution channels; they will also be merged into a universal Windows Store in Windows 10 [23].

Microsoft requires a single developer account for both stores. A developer account can be individual for $19 or company for $99; the company account enables greater access to device capabilities, like enterprise authentication or access to software and hardware certificates.

For the store approval process, Microsoft chose a slightly different approach than the others. The application content compliance is reviewed manually during the approval process, but the other tests (security, technical compliance) are automated [24]. Uploading an application to the Windows (Phone) Store is very similar to uploading an application to Google Play, which is described in 3.1.1. The approval process is fast; applications are often approved within an hour [25].


4 Mobile testing approaches and techniques

4.1 Test equipment

In this chapter, we discuss another part of the mobile test environment – real devices, simulators, emulators and test clouds. All of them have benefits and drawbacks. This chapter is based on [26], [27], [28] and personal experience with the mentioned equipment.

4.1.1 Real devices

Testing on real devices is necessary for every mobile application. It gives the most realistic results, and the tester can perform all required test scenarios. On the other hand, it is extremely difficult and expensive to test an application on all available devices.

Advantages
• Real user experience – to achieve trustworthy user testing, handling a real device is a necessity.
• Interactions with sensors – if the tested application uses sensors (e.g. accelerometer, compass, camera or NFC), the tester has to operate the device in his hands. Some emulators mimic the behavior of these sensors, but their results are not very realistic.
• Reliability – passing all test cases on a real device gives us the best guarantee that a user with the same device and the same configuration will not have any issue.

Disadvantages
• Variety of devices / costs – testing on a few real devices does not assure that the application will work on other devices, and buying new test devices is expensive.
• Mobile carrier limitations – it is difficult to test how a mobile application will perform with a different mobile carrier, because of different network infrastructure or blocking of some web pages.

4.1.2 Simulators and emulators

Simulators and emulators are software that enables running another computer system on the host platform.

They are available for each platform and are often integrated into an IDE like Android Studio, Xcode or Visual Studio.


There is a difference between a simulator and an emulator. A simulator imitates software behavior and uses all available resources of the host computer, like CPU, memory, network, etc. Testing on a simulator can provide biased results, because the host computer can be much faster than the mobile device.

Emulators provide more realistic results because they replicate the target device more precisely. Processor speed, memory size or memory card presence, even the network connection type or signal strength, can be set to create a more realistic environment.

There are a lot of simulators and emulators on the market with different features and limitations. However, they all share some advantages and disadvantages.

Advantages
• Costs – standard emulators delivered with the SDKs are free.
• Always up to date – a new version of the emulator is always distributed with a new version of the SDK.
• Fast and simple to use – the application can be easily deployed and installed from the IDE.
• Setting options – location, screen size and resolution, connection type, camera, accelerometer, etc., can be simulated.

Disadvantages
• Misleading test results – emulators are not the production environment, and there is always a possibility that the application will behave slightly differently on a real device because of hardware, network or software differences.
• Impossibility to perform all test cases – when working with a simulator, testing is limited by the host computer resources and emulator features. Testing applications that use NFC or the compass is typically not possible, nor is interrupting the application with incoming calls or SMS.
• Not very responsive – some emulators are slow, and animations or games are delayed. Long manual test cases can be protracted.

4.1.3 Mobile device cloud

Successful testing of application functionality on one device does not provide assurance across all other devices with the same operating system [27].

A mobile device test cloud is a service which enables running automated or manual tests on hundreds of physical devices in the cloud. Providers of these services have a large variety of smartphones and tablets connected by USB cables in racks.

There are two common types of mobile device clouds available on the market – public and private [26].

Devices in a public cloud are always accessible over the Internet. They can be geographically spread to achieve different configurations with different mobile carriers. It is possible to rent time on the devices for automated or manual testing, but these devices are always shared with other users. This type of mobile device cloud is offered, for example, by Perfecto Mobile (6.4) or Keynote Mobile Testing (http://www.keynote.com/solutions/testing/mobile-testing).

In the case of a private cloud, providers usually detach the requested devices for exclusive use, which increases security. A private cloud can be dedicated, if the company does not have an IT department, or located in-house in the customer's VPN behind the corporate firewall. An example of a private dedicated cloud is Soasta TouchTest (http://www.soasta.com/products/touchtest-private-device-cloud/); SeeTest Cloud (6.1) is an example of a private in-house cloud.

The main purpose of mobile device test clouds is to find bugs which appear only on some devices or some OS versions. They can reveal bugs that stay hidden from developers who use emulators and only a few physical devices. For example, UI thread overloading (a common mistake) may appear as a short delay on a physical testing device and be imperceptible on a simulator, but on older or low-end devices with low computational power, it can cause a highly unresponsive user interface in the better case and a freeze or crash in the worse case.

The next paragraphs summarize the advantages and disadvantages based on [26] and my personal experience with testing applications in the cloud.

Advantages
• Variety of devices – testers can quickly reveal bugs or shortcomings that originate from different hardware or software in the mobile devices. There is no need to upgrade and downgrade developers' physical devices to execute the tests on several OS versions. Developers do not have to buy new devices, because cloud providers usually add them to their portfolio very quickly.
• Automation capabilities – mobile device cloud providers take advantage of automated frameworks. Automated tests run simultaneously on many devices, which boosts testing efficiency. Testers can view the test recording (screens and actions) for each device in an interactive web portal.

Disadvantages
• Costs – this can be a blocker for small or mid-size companies that do not have their mobile solution as a priority, because prices for mobile cloud services are high (the Xamarin Test Cloud, for example, starts at $1,000 per month for individuals).
• Impossibility to do all test cases – although tests are done on real devices, it is impossible to test the compass, accelerometer, GPS or camera. The mobile device is mounted in a box, so everything that depends on device movement is impossible.
• Security – in the case of a public cloud, there are security issues bound to sending the application package to the cloud. The application will be running in a potentially unsecured environment where competitors also test their apps.
• Not very responsive – manual testing can be protracted due to network latency.

4.1.4 Summary

The best testing strategy is to combine the mentioned test equipment according to the application requirements and the size of the budget. Small or start-up companies, which develop several simple applications, often do not need a device cloud and can perform tests on physical devices and emulators. On the other hand, international companies with a broad portfolio of applications with critical functions (e.g. mobile banking applications) can leverage mobile application testing with their own private device cloud.


4.2 Test levels

The test levels introduced in this chapter are based on ISTQB (the International Software Testing Qualifications Board, http://www.istqb.org/) with respect to mobile development. Each test level corresponds to a specific development stage of the V-model (Figure 1).

Figure 1: V-model [29].

4.2.1 Component testing

Definition: Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. [30]

Unit testing of a native application for Android or Windows Phone is the same as unit testing of any other application written in Java or .NET, respectively. Developers can use popular unit testing and mocking frameworks like JUnit or Mockito for Android, or NUnit for Windows Phone. Tests can run locally (on the Java virtual machine or the .NET CLR), and the results are shown in the IDE.
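As a minimal sketch, assuming NUnit and a hypothetical CurrencyConverter class (not taken from any real project), such a locally running unit test might look as follows:

using NUnit.Framework;

// Illustrative class under test - a hypothetical piece of application logic.
public class CurrencyConverter
{
    public decimal Convert(decimal amount, decimal rate)
    {
        if (rate <= 0)
            throw new System.ArgumentOutOfRangeException("rate");
        return amount * rate;
    }
}

[TestFixture]
public class CurrencyConverterTests
{
    [Test]
    public void Convert_PositiveRate_ReturnsConvertedAmount()
    {
        var converter = new CurrencyConverter();
        Assert.AreEqual(25.0m, converter.Convert(1.0m, 25.0m));
    }

    [Test]
    public void Convert_NonPositiveRate_Throws()
    {
        var converter = new CurrencyConverter();
        Assert.Throws<System.ArgumentOutOfRangeException>(
            () => converter.Convert(1.0m, 0m));
    }
}

Such tests run on the developer's machine in milliseconds, which is why they form the base of the automation pyramid discussed in section 4.3.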

In the case of Android, a proper framework (e.g. Robotium, https://code.google.com/p/robotium/) should be used, because the official Google unit testing framework (AndroidTestCase, http://developer.android.com/reference/android/test/AndroidTestCase) requires running the unit tests on a device or emulator. This approach is much slower than running the tests on the Java virtual machine [31].


Unit testing on iOS is different. Apple devices use A-series processors based on the ARM architecture, but the Mac computers on which iOS applications are developed run on processors with the x86 architecture. Because Objective-C is a compiled language, the iOS simulator has to be used to run an ARM-targeted application on an x86 Mac. The second option is to run the unit tests on a real device.

The previous enumeration of programming languages suggests that tests on this level are written by programmers.

4.2.2 Integration testing

Definition: Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems. [30]

ISTQB further distinguishes integration between components of the system and integration between different systems. Therefore, the integration tests, as opposed to the unit tests, must run on an emulator or, preferably, on a real mobile device connected to the developer's computer or the integration server.

As mentioned in the introduction, an average application depends on six cloud services (five is the median). The most common services are Facebook login, cloud data storage, web services and monitoring services. These services are not always reliable; their integration and the application's reaction to outages should be tested [1].

Integration tests are done by programmers or qualified quality assurance engineers.

4.2.3 System testing

Definition: System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. [30]

Due to the simulator and emulator limitations discussed in section 4.1.2, these tests should be performed on real devices in a real environment (network conditions, targeted OS versions). Tests on this level are prepared and executed by quality assurance engineers.


4.2.4 Acceptance testing

Definition: “Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.” [32]

Acceptance testing is always conducted in the production environment, i.e. on the intended devices, OS versions and networks, by the end users or stakeholders.

Beta testing is a typical form of acceptance testing. All three platforms support beta testing in a similar manner – an application package is uploaded to the store but remains private. Access is granted only to users who are added to the beta testing group, simply by inserting their store ID or email in the developer portal (Android Developer Console, iTunes TestFlight, Microsoft Dev Center).

If the application is intended for public use, crowd testing can be taken into consideration. Portals like uTest (http://www.utest.com/) or TestBirds (http://www.testbirds.com/) gather mobile testers and distribute applications to them. It is an easy way to get quick feedback before the public release.


4.3 Test automation

Test automation is defined as: The use of special software (separate from the software being tested) to control the execution of tests and the comparison of actual outcomes with predicted outcomes. [33]

Automated tests can perform repetitive tasks which are difficult to test manually, improving test efficiency and reducing costs.

Mike Cohn introduced the test automation pyramid in his book Succeeding with Agile [34]. It divides automated tests into three levels by their purpose and expresses their expected quantity: a software project should contain many more unit tests than UI tests.

Figure 2: Automation pyramid [34] (unit tests at the base, service tests in the middle, UI tests at the top).

Unit tests (discussed in section 4.2.1) form the base of test automation and are also the foundation of test-driven development.

The service layer represents testing of a system at the integration, service or API level (without the user interface). Mike Cohn especially highlighted service tests that call the APIs which are regularly used by the UI (view) layer. An example of automated tests on this layer are tests of view-models (8.3.1), which can perform single tasks regardless of the application user interface layer that is bound to the view-model properties.
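A minimal sketch of such a service-level test; the LoginViewModel and its members are hypothetical, but they illustrate how property logic can be exercised without any UI:

using NUnit.Framework;

// Hypothetical view-model: the UI would bind to UserName and CanLogin,
// but the test exercises the logic directly, without any UI layer.
public class LoginViewModel
{
    public string UserName { get; set; }

    // Enabled state of the login button bound in the view.
    public bool CanLogin
    {
        get { return !string.IsNullOrWhiteSpace(UserName); }
    }
}

[TestFixture]
public class LoginViewModelTests
{
    [Test]
    public void CanLogin_IsFalse_WhenUserNameIsEmpty()
    {
        var vm = new LoginViewModel { UserName = "" };
        Assert.IsFalse(vm.CanLogin);
    }

    [Test]
    public void CanLogin_IsTrue_WhenUserNameIsFilled()
    {
        var vm = new LoginViewModel { UserName = "tester" };
        Assert.IsTrue(vm.CanLogin);
    }
}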

UI tests, which simulate user interaction with the software through the graphical user interface, are placed at the top of the pyramid. These tests are poorly maintainable, and their creation is time consuming. UI test scripts can be written manually or recorded by special software, which speeds up their creation. Examples of UI automation software for mobile applications are SeeTest Automation (6.1) or Ranorex (6.2).


4.4 Continuous integration

Martin Fowler describes continuous integration (CI) as follows [35]: “Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.”

CI is a practice that comes with agile software development. It gives faster feedback to the developers and leverages test automation. It is widely used among organizations with a mature development process, which typically use a build server to implement CI.

Continuous integration poses a challenge for mobile application development, because many automated tests have to run on real devices. Therefore, application deployment and installation on the device should also be automated.

The main tasks of the build server in mobile application development (based on [36], [37]) are:

1. Download the source code from the repository.
2. Compile and inspect the source code, create new binaries.
3. Connect to the device or emulator.
4. Execute the unit tests.
5. Set up external dependencies (databases, services, etc.).
6. Execute higher-level tests (integration, service or UI tests).
7. Create the application package.
8. Create the report.

After the application passes all the tests run by the build server, the application package, along with the UI tests, can be sent to the device cloud. Running the tests on the device cloud is expensive (see Table 2 in section 6.6) and often limited by maximum device testing hours and device reservation timeslots. Thus, automated tests on the device cloud are not executed as often. An example of this process is shown in section 6.3 with the Xamarin Test Cloud.


5 Mobile application qualities

Every application in any application store or marketplace has tens of direct competitors. Finding an application that provides really unique functionality is almost impossible. Therefore, most applications fight for their users with quality. If an application does not match user expectations on quality, it can simply be replaced by another application in a matter of seconds.

Testing increases software quality by finding defects, which are consequently fixed. Software quality is defined by ISO/IEC 25010 [38] as:

Software quality is the degree to which the software product satisfies stated and implied needs when used under specified conditions.

ISO/IEC 25010 also describes a quality model which defines eight internal and external software quality characteristics.

Figure 3: Software product quality model [38].

This chapter mentions five quality characteristics: reliability, transferability, security, usability and performance.

These characteristics were chosen because they require a specific testing approach for mobile applications [26]. Moreover, they are considered the most important for mobile application success, as implied by several publications [39], [40], [41].


5.1 Reliability

5.1.1 Interruptions

The mobile application lifecycle differs from the classic desktop one, where operating with several open applications at the same time is normal (as well as minimizing, maximizing, switching between applications or running in the background). The same operations are executed on mobile applications, but they are handled differently due to performance restrictions (such as low memory or battery power). Managing the mobile application lifecycle in code is difficult, and switching between application states (e.g. active, inactive, suspended, background) is a potential source of faults. Moreover, a mobile application can be forced to change its state immediately (e.g. from active to background due to an incoming call).
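For illustration, a minimal sketch of handling one such forced state switch in a Windows 8.1 (C#/XAML) application; the SaveState helper is a hypothetical placeholder for persisting application state:

using System;
using Windows.ApplicationModel;
using Windows.Storage;
using Windows.UI.Xaml;

sealed partial class App : Application
{
    public App()
    {
        InitializeComponent();
        // The OS can suspend the application at almost any moment
        // (incoming call, user switches away), so state must be
        // persisted in the Suspending handler.
        Suspending += OnSuspending;
    }

    private void OnSuspending(object sender, SuspendingEventArgs e)
    {
        // Request a deferral so the work below can finish before
        // the process is actually suspended.
        var deferral = e.SuspendingOperation.GetDeferral();
        SaveState(); // hypothetical helper persisting view-model state
        deferral.Complete();
    }

    private void SaveState()
    {
        // Illustrative only: persist a value into local settings.
        ApplicationData.Current.LocalSettings.Values["lastSuspended"] =
            DateTime.Now.ToString();
    }
}

Bugs in exactly this kind of handler are what interruption testing is designed to expose.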

To achieve the best reliability of the product, the application under test must be tested against various types of interruptions:

• Incoming call, SMS or MMS. • Alerts or notifications from other applications or services. • Loss of power, alerts on attaching or detaching charger. • Cable insertion or removal for data transfer, Bluetooth triggered events. • Network outage and recovery alerts. • Media player interruptions. • Lock, home or back button pressed. • Auto lock. • Memory card or SIM card insertion or removal.

Testing the application against every interruption at any time or in any application state is impossible and unnecessary. The preferable procedure is to identify areas or situations where the application may crash after an unpredictable interruption. These situations can happen when the application under test:

• Handles high-performance task. • Operates with sensors or camera. • Switches between pages. • Communicates over the network.


Test automation using a mobile device cloud can accelerate this testing technique and increase the probability of exposing bugs (some public device clouds can automate sending SMS, calls and push notifications to devices).

5.1.2 Networks

A lot of mobile applications depend on Internet connectivity. Moreover, some of them are only clients consuming web services. Weak signal strength or switching between connection types frequently causes failures. Network-based testing investigates misbehavior of an application that uses a network connection and thus ensures the reliability of the product.

Types of network-based tests are:

• Connection over different adapters (Wi-Fi, 3G, etc.).
• No network or airplane mode.
• Network interoperability:
  o Changing from Wi-Fi to a 3G or 4G network and vice versa.
  o Switching between connection endpoints.
• Network fluctuations (signal strength or jitter).

5.1.2.1 Network virtualization

Testing an application under different network conditions is complicated; testers either have to simulate network conditions in a “laboratory” environment or have to leave their offices and test the mobile application in real-life situations.

Network-based testing on an emulator is easy and fast. Most emulators can switch the connection type or simulate signal strength. But like any other test on an emulator, it is inaccurate.

Network virtualization via a proxy is used, for example, by Experitest (6.1). The virtualization (proxy) server must be installed on a local computer, and each mobile device's Wi-Fi adapter must be configured to use this server as a proxy, which then governs the network conditions for the devices.

This type of network virtualization is convenient for testing reliability (whether the application handles network disruption), but it is useless for testing the end user experience if the application under test behaves differently on Wi-Fi and on a mobile network. Well-designed applications do not transfer large amounts of data while they are using mobile networks (if it is not necessary). Therefore, if they are connected to the network virtualization proxy, which simulates a 3G mobile network, they are actually connected through Wi-Fi and behave accordingly.
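Such behavior is typically implemented by querying the connection profile, as in the following minimal sketch for a Windows 8.1 application (what counts as a “large” transfer is an application-specific assumption):

using Windows.Networking.Connectivity;

public static class NetworkHelper
{
    // Returns true when large transfers should be avoided, i.e. the
    // device is connected through a metered (typically mobile) network.
    public static bool IsOnMeteredConnection()
    {
        ConnectionProfile profile =
            NetworkInformation.GetInternetConnectionProfile();
        if (profile == null)
            return true; // no connectivity at all - transfer nothing

        ConnectionCost cost = profile.GetConnectionCost();
        return cost.NetworkCostType != NetworkCostType.Unrestricted;
    }
}

// Usage sketch: skip optional downloads on mobile networks.
// if (!NetworkHelper.IsOnMeteredConnection()) { DownloadLargePreviews(); }

A test of this behavior has to ensure that the application really sees a mobile connection, which the Wi-Fi proxy described above cannot provide.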

5.2 Transferability

Transferability issues are the reason why device test clouds exist. Almost every application tries to support a large variety of devices to reach as many users as possible. However, platform and device fragmentation makes this task harder. If a user loses application data after an OS update, it will certainly appear on the store as a negative review.

5.2.1 Installation

Installation testing checks whether the application installs correctly on a device under specific conditions.

Scenarios to test
• Installation on internal device memory.
• Installation on an SD card with different settings (e.g. mass storage mode).

Related testing
• Moving the application from internal memory to the SD card and vice versa.
• Uninstalling the application from internal memory and from the SD card.

These testing scenarios are suitable for automation on the device cloud. A simple smoke test can assure that the installation of the application on many different devices with different OS versions will be successful. Note that there are scenarios in which the application should not be installed, for example when the device does not have a needed capability (e.g. missing NFC for a contactless payment application).

5.2.2 OS updates

Mobile application developers get new versions of an OS in advance to have time to prepare their applications. OS updates are a common cause of application crashes [42]. Testing an application after an OS update can be convenient on a mobile test cloud, but many clouds do not support a test flow in which the application is installed on a device with a specific OS version and the OS is subsequently updated to a higher version.


5.3 Security

5.3.1 Data storage

According to Hewlett-Packard research, 75% of applications do not use proper encryption techniques when storing data, or leave them unencrypted [43].

OWASP (Open Web Application Security Project) describes three common reasons for data leakage [44]:

• Developers often assume that no one will have access to the application's local data.
• Developers are not fully conversant with the operating system and unintentionally save data to shared locations.
• Usage of weak cryptographic algorithms.
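To illustrate the proper alternative to the last point, a minimal sketch of storing data encrypted with the platform data protection API in a Windows 8.1 application; the protection descriptor and file name are illustrative assumptions:

using System.Threading.Tasks;
using Windows.Security.Cryptography;
using Windows.Security.Cryptography.DataProtection;
using Windows.Storage;
using Windows.Storage.Streams;

public static class SecureStorage
{
    // Encrypts the given text with the platform data protection API
    // and stores it in the application's local folder, instead of
    // writing it anywhere in plain text.
    public static async Task SaveProtectedAsync(string fileName, string secret)
    {
        // "LOCAL=user" binds the encrypted data to the current user.
        var provider = new DataProtectionProvider("LOCAL=user");
        IBuffer plain = CryptographicBuffer.ConvertStringToBinary(
            secret, BinaryStringEncoding.Utf8);
        IBuffer encrypted = await provider.ProtectAsync(plain);

        StorageFile file = await ApplicationData.Current.LocalFolder
            .CreateFileAsync(fileName, CreationCollisionOption.ReplaceExisting);
        await FileIO.WriteBufferAsync(file, encrypted);
    }
}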

A basic test for data leakage is an application footprint analysis, which consists of these steps [45]:

1. Acquire a rooted device (a rooted or jail-broken device has a cracked OS where the user has root permissions), to be able to access all folders and files.
2. Create a file system backup for future reference.
3. Set up a directory monitoring tool to monitor the mobile application folders.
4. Install the application.
5. Check the created files; compare modified files with the pre-installation backup in CompareDiff or Total Commander.
6. Use the application or run the automated UI tests.
7. Check the created files; compare modified files with the post-installation backup.

Folder monitoring tools can show all newly created or changed files. Subsequent analysis of these files verifies that all data was stored in proper locations and no sensitive information was stored unencrypted.

5.3.2 Transport security

The cyber security company FireEye conducted research on the 1,000 most downloaded applications on Google Play. They found out that 73% of the applications which use SSL/TLS to communicate with a remote server do not check certificates, and 77% of the applications which use WebKit (a user control for displaying web pages, used by many cross-platform frameworks) ignore SSL errors. Disrespect of basic security principles makes an application vulnerable to a man-in-the-middle attack [46]. The previously mentioned study from Hewlett-Packard states that 18% of tested applications send user names and passwords over HTTP.
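As an illustration of the correct behavior that transport security tests should verify, a minimal certificate pinning sketch for classic .NET (WinRT applications use different APIs); the pinned thumbprint is a made-up placeholder:

using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

public static class CertificatePinning
{
    // Placeholder thumbprint of the expected server certificate.
    private const string PinnedThumbprint = "0123456789ABCDEF...";

    public static void Enable()
    {
        // Reject any server certificate that does not match the pin,
        // even if it chains to a trusted root - this defeats
        // man-in-the-middle attacks with rogue certificates.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) =>
            {
                if (sslPolicyErrors != SslPolicyErrors.None)
                    return false;
                var cert2 = new X509Certificate2(certificate);
                return cert2.Thumbprint == PinnedThumbprint;
            };
    }
}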

It is important to note that a lot of security issues of this type can originate from third-party libraries (e.g. analytical or advertising ones) [46].

Transport security testing should assure that the data are transported securely, with proper use of cryptographic protocols.

5.3.3 Binary protection

Binary protection is one of the most important and most underestimated parts of mobile application security [43].

Android and Windows packages (APK, APPX) are actually ZIP archives and can be very easily downloaded from the stores and unpacked. Apple protects applications on the App Store with encryption (Apple FairPlay digital rights management), but this security feature can be easily bypassed if the attacker has a jail-broken device.

Moreover, most Android and Windows applications are developed in Java or C#, respectively. These languages are translated to intermediate languages, which are a higher abstraction than machine code and carry metadata, making them easier to decompile. iOS source code is compiled to native machine code with many optimizations; however, there are many tools developed specially for decompiling iOS application binaries.

If attackers manage to decompile binaries into readable source code, they can find vulnerabilities there or information for further attacks. Applications which do not have properly secured binaries can be modified and published in unofficial application stores. Protection of binaries against reverse engineering can never be perfect, but techniques like code obfuscation, jail-break detection or checksum control can make reverse engineering complicated [47].

A tester can check whether the application is protected from decompiling with widely used tools like dex2jar (Android), ClutchMod (iOS) or .NET Reflector (Windows).


5.4 Usability

About 26% of users abandon an application after the first use, and one of the main reasons why users tend to immediately delete applications is poor user experience [26].

Usability is a quality in use defined by ISO 25010 [38] as: The effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments.

ISO 25010 also describes the usability sub-characteristics likability and pleasure, which represent cognitive and emotional satisfaction. On the mobile application market, these attributes are more important than we are used to with desktop applications, because users very often leave reviews about usability on the application store. Figure 4 shows the percentage of user reviews related to usability in the Apple App Store productivity category [40].

Figure 4: Review frequency [40].

The user experience (UX) of an application differs by device and OS. UX will be different on a mobile device with or without a full hardware keyboard, and Android users are accustomed to different UX patterns than iOS or Windows Phone users [40].

5.5 Performance

Performance requirements are also very important for mobile application market success. The most important key performance indicators are application launch time, UI responsiveness, memory footprint and battery life [48].


5.5.1 Launch time

Launch time refers to the time delay when the application starts. According to a consumer expectations survey [41], the median expected mobile application launch time is two seconds, and 80% of respondents expect the launch time to be three seconds or less. The App Quality Alliance (AQuA, http://www.appqualityalliance.org/) requires showing a message or progress bar if the launch time is longer than five seconds. The launch time should be measured by an automated test script [48].
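A minimal in-code measurement sketch for a Windows 8.1 application; in practice, an automated test script would launch the application and collect this output:

using System.Diagnostics;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class MainPage : Page
{
    // Started when this type is first touched; for a more accurate
    // figure, start the stopwatch in the App constructor instead.
    public static readonly Stopwatch LaunchTimer = Stopwatch.StartNew();

    public MainPage()
    {
        InitializeComponent();
        // Loaded fires once the page is rendered - a practical
        // approximation of the user-perceived launch time.
        Loaded += OnLoaded;
    }

    private void OnLoaded(object sender, RoutedEventArgs e)
    {
        LaunchTimer.Stop();
        Debug.WriteLine("Launch time: {0} ms",
            LaunchTimer.ElapsedMilliseconds);
    }
}

The same Stopwatch pattern can be reused for the page-switching delays discussed in the next subsection.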

5.5.2 Responsiveness

Responsiveness means that the application GUI is able to react quickly, without delays or freezes. Mobile users are always in a hurry, and nobody wants to wait for a slow application. Application responsiveness is measured in milliseconds, as the time delay it takes to render different application pages or controls. Delays between switching pages can be measured in code, by GUI automation tools or by tools dedicated specially to this purpose.

5.5.3 Battery life

An application should not overload the CPU or network, because high CPU or network load decreases the battery life. The best way to assess battery life is to measure CPU, GPU and network usage with special tools which can count how much the application drains the battery. (From my own experience, the best way to measure an application's impact on the battery is to combine these measurements with beta testing, where a tester uses the application for a week as a common user would, and afterwards compares the application's battery consumption with other commonly used applications in the OS settings.)

5.5.4 Network usage efficiency

Network usage efficiency is one of the main performance criteria for mobile applications according to AQuA. AQuA defines three basic network usage tests:

• Duplicated content – the application should cache frequently used downloadable content.
• Periodic transfers – advertisement or analytics data may not overuse the Internet connection.
• Closing connections – network transfers should be closed when the application is deactivated.


AQuA also recommends performing these tests with the free AT&T Application Resource Optimizer (ARO) tool. ARO can collect network traffic while the tester navigates through the application and automatically evaluate data wastage.

5.6 Application store compliance

Certification testing is the final procedure before the application is uploaded to the application store. The tester has to assure that the application complies with all platform-specific guidelines and will be approved for the application store. This is very important mainly in the case of iOS, where the approval process takes the longest time and any guideline breach can significantly protract the application release.

The pre-publish procedure and testing (based on [49], [50], [51]):

1) Prepare assets for release – assure that application icons, promotional images and screenshots exist and have the desired size and format. Prepare the EULA, cryptographic keys and the text describing the application on the store.
2) Prepare the application for release – clean the project of unneeded libraries, testing methods and debug settings.
3) Configure the application manifest – define the supported orientations and application capabilities (used resources, sensors, access to the contact list, etc.).
4) Create a signed application package – applications have to be digitally signed with the developer's certificate. This certificate does not need to be signed by a certificate authority; a self-signed certificate is sufficient.
5) Run automated validation tools – passing the automated validation tests is necessary to pass the store approval process (in the case of iOS and Windows).
6) Prepare accessories or servers – if the application cooperates with specific hardware accessories (e.g. a wristband), consumes web services or uses any other external resources, ensure that these resources are also ready for production.
7) Secure the application package – use tools to protect the application binaries from reverse engineering.
8) Test the application for release – perform both functional and non-functional tests to determine the overall end user experience.


6 Testing tools

The following tools were chosen to cover most types of mobile testing tools available on the market; they can also be used to test the SafeQ Terminal Demo application.

6.1 Experitest

Experitest provides a mobile application testing solution called SeeTest, which consists of these components:

• SeeTest Automation – a tool for creating functional tests through the GUI.
• SeeTest Cloud – a private mobile cloud solution.
• SeeTest Manual – a tool for connecting to a mobile device in the cloud for manual testing.
• Network virtualization – a proxy server for simulating network conditions.

SeeTest Automation and SeeTest Manual leverage a tool called Silk Mobile, developed by Borland. The advantage of Experitest over Borland is that Experitest also provides a device test cloud solution.

All components of SeeTest are very easy to install and use. The user interface is very intuitive, and the documentation is brief and clear. I was able to create and run a SeeTest Automation script after watching a five-minute instructional video.

However, for testing on iOS, it is mandatory to acquire an unlocking file from a third-party company. It took a whole week until I received the unlocking file for my device. This unlocking file, which is in fact a side-loading key, must be acquired for each iOS device separately.

The problems with speed did not end there. User action recording is annoyingly slow because it must be performed on the computer. After connecting a device by USB cable to the computer (or connecting to a device through the device cloud), the device screen is mirrored to the SeeTest window on the computer. In this window, the tester performs recording actions and inputs via mouse and keyboard. After each action, the tester must wait for the UI response, which is delayed because of the screen mirroring. Recording the test scenario directly on the device would be much better, but this option is not supported.


Test execution is also really slow (sometimes slower than the recording itself), but it can be executed in parallel on several devices.

A recorded test script can be exported to many programming languages. The test scripts are easy to read, understand and change.

SeeTest Cloud is private – it is a solution for setting up a cloud from devices that the company already owns. It is not a service where you can buy time for testing on multiple devices.

Experitest was chosen for its ability to record and play UI tests on Android, iOS and Windows Phone on a private device cloud. The tester must get used to slow action recording and slow test execution, but everything else works well.

6.2 Ranorex

Ranorex Studio is a .NET based tool for UI testing of mobile, desktop and web applications. It supports Android, iOS and Windows 8.1. Ranorex Studio looks similar to Visual Studio; this can be a disadvantage for non-programmers, because the whole studio offers a lot of options and functionality that is familiar only to programmers.

Because of the complexity of this tool, it took some time to learn everything necessary. On the other hand, Ranorex Studio provides very good automation capabilities and features for organizing tests.

Test scripts are generated as C# code, and even though the scripts are readable and well understandable, they do not have to be modified in code. Everything can be set up and adjusted via the GUI.

Script recording is fast and simple. Validation sections can be added during the recording with ease, only by selecting GUI components and adjusting the validation criteria. Test recording of mobile applications on Android or iOS can be performed right on the device, which can be connected wirelessly on the same subnet.

However, recorded scripts were not always able to run the first time. It was difficult to record a test case and immediately play it, because recorded scripts simply need some modifications. After using Ranorex for a while, I was able to identify some patterns in which recorded tests fail and to prepare the scripts better. Even so, the test scripts are still error-prone.


Ranorex also provides a capability for keyword-driven testing. Keyword-driven testing separates automation from the test case design. Application features can be tested separately by independent automated scripts and labeled (e.g. StartApplication, Login, EndApplication, etc.). From these test scripts, a QA engineer can easily compose test scenarios or acceptance tests right in Ranorex Studio.

6.3 Calabash & Xamarin Test Cloud

Xamarin is a framework for mobile multiplatform development discussed in section 2.3. Besides developer tools, Xamarin offers a sophisticated functional testing solution using Calabash.

Calabash is an acceptance-driven UI test framework for Android and iOS extending the functionality of the Ruby test framework Cucumber. Tests in Calabash (resp. Cucumber) can be written in natural language (using given-when-then conventions and Gherkin grammar rules), which is easy to read even for non-technical QA engineers or business analysts.
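For illustration, a hypothetical scenario might read: Given the application is on the login page, When I enter valid credentials and touch “Login”, Then I should see the home screen. Each such step is then backed by a short Ruby step definition that drives the application through Calabash.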

Xamarin Test Cloud extends UI acceptance testing for Xamarin applications. When the unit and Calabash tests pass on the emulator or on a real device connected to the build server, the build server creates an application package, which can be sent along with the Calabash tests to the selected devices in the Test Cloud, as shown in Figure 5. When the Calabash tests finish on all selected devices in the cloud, an interactive test report with the list of devices and passed/failed tests is shown on the web portal.

Figure 5: Xamarin Test Cloud [37].


6.4 Perfecto mobile

Perfecto mobile is a public device cloud service with test automation and network virtualization capabilities.

A tester can access hundreds of devices through the web portal or through the Visual Studio and HP UFT extensions. After logging in to the web portal, the tester can choose from a large variety of smartphones and tablets with Android, iOS and Windows Phone. Subsequently, the device screen appears in the web browser, and the device is ready to use. Manual testing with Perfecto mobile through the web interface is surprisingly fast; the difference in delays between Perfecto mobile and SeeTest, where the device is connected directly to the computer, is minimal.

Manual testing offers various functions, like calling the testing phone or receiving SMS. There is also network virtualization with basic settings, and devices with a SIM card can use the mobile network data connection to access the Internet.

Unfortunately, the Perfecto mobile trial version contains only manual testing. The sales representative whom I contacted did not permit me access to a trial version with automation capabilities.

6.5 IBM Mobile Quality Assurance

IBM offers a solution for cloud mobile application development called Bluemix. Mobile Quality Assurance (MQA) is a part of Bluemix, provided as a Software-as-a-Service capability for testing and validating mobile applications.

The main features of MQA are in-application bug reporting, crash reporting and sentiment analysis. MQA has pre-production and post-production libraries used for instrumenting the application. Adding these libraries to an application is very easy, almost without any code modification (around ten lines of code). The pre-production libraries allow in-application reporting – testers can easily report issues right from the device, which simplifies the whole issue reporting process (e.g. uploading pictures to the issue tracking system or recording device information).

The post-production library serves mainly for reporting detailed information about crashes to the Bluemix portal. Its advantage over some other mobile analytics services is that it requires almost no code modification.


Sentiment analysis provides detailed information from the application store. It estimates application quality attributes like performance, UX elegance, pricing or satisfaction based on user reviews, which is not very accurate (in the case of performance, for example). On the other hand, user reviews and ratings are the metrics that matter most, and they reflect the perceived quality of an application.

MQA sentiment analysis can provide the same information about other applications as well. Therefore, product management can compare their application with competing ones.

IBM MQA does not provide any test automation capabilities and cannot be integrated with application lifecycle management software, which I consider its biggest disadvantage.

6.6 Testing tools overview

It cannot be determined which testing tool is the best, because it always depends on the tested application. For a small, low-budget application for Android or iOS, I would use IBM MQA, leverage beta testing and completely avoid automated UI tests, which are difficult to maintain due to fast changes in the application UI.

For large application projects with a high budget and high demands on reliability and transferability, I would choose between Experitest and Perfecto mobile, depending on what type of device cloud I would need. Furthermore, I would not consider IBM MQA for these large projects because of its inability to integrate with application lifecycle management tools, the amount of manual work and the lack of continuous integration support.

On the other hand, if I decided to develop a hybrid application with the Xamarin framework, I would consider only the Xamarin testing solution: it is sufficient for most application projects, and it is difficult to test a Xamarin application with the other mentioned tools because of technical restrictions.


The following tables summarize information about testing tools.

                       Experitest      Ranorex         Xamarin         Perfecto        IBM MQA
                                                       Test Cloud      mobile
Supported mobile OS    Android, iOS,   Android, iOS,   Android, iOS,   Android, iOS,   Android, iOS
                       WP              Windows         WP              WP
Supported application  Native, web,    Native, web,    Hybrid          Native, web,    Native, web,
types                  hybrid*         hybrid*                         hybrid*         hybrid*
Test automation        UI tests        UI tests        API tests       UI tests        No
Device test cloud      Private         Private         Public          Public,         No
                                                                       private
CI support             Yes             Yes             Yes             Yes             No

Table 1: Test tools overview.

* It depends on what cross-platform framework is used.

Tool                  Price
Experitest            $3,500/year for automation + $1,000/year for cloud
Ranorex               $2,330 with one year of maintenance
Xamarin Test Cloud    $5 per device hour
Perfecto mobile       $15 per device hour
IBM MQA               $168 per application + $0.17 per device for monitoring

Table 2: Test tool pricing.

6.7 Automated validation tools

Windows and iOS provide automated validation tools (Windows Certification Kit27, iTunes Connect Validation Tests28) to speed up the approval process. A Windows application will not pass the Windows Store approval process if it fails these automated validation tests, and an iOS application cannot even enter the App Store approval process without passing them.

27 https://msdn.microsoft.com/library/windows/apps/hh694081.aspx
28 https://www.apple.com/itunes/working-itunes/sell-content/connect/

The Windows Certification Kit and iTunes Connect Validation Tests both run on the developer’s computer. Their purpose is to increase the number of successful approval requests by testing the application before submission.

The Windows Certification Kit tests security and performance, application capabilities and resources, and contains many other static and dynamic tests.

Android has a similar tool called Lint. This tool is not intended to validate the application before submission to the application store, but it performs static code analysis similar to some tests from the Windows Certification Kit and iTunes Connect.


7 Testing guidelines

There are a lot of aspects that should be considered at the beginning of the application development lifecycle. No matter what software is being developed, questions like “What is the purpose of the software product?” or “In what environment will our software be running?” remain the same. However, some questions are specific to, or closely related with, mobile applications.

Answers to these questions should be clarified before test activities start (based on [26], [52]):

• How was the application developed? (native, web, hybrid)
• What are the targeted operating systems?
• What is the range of supported OS versions?
• Is the user coverage public or enterprise?
• For enterprise applications, is there any list of devices approved by the company?
• For public applications, what is the targeted audience and application store category?

A comprehensive checklist with similar questions can be found in [53]. Answering them is the first step in mobile testing, because testers must know what they are going to test.

To summarize the previous chapters, I suggest the following mobile application testing guidelines (inspired by [54]):

1. Define test conditions (scope and priorities) and prepare test suites
• Start by preparing test cases based on functional and non-functional requirements and the test design specification.
• Decide which tests will be running on the emulators and which on the real devices.

2. Determine what test cases will be automated
• Identify repetitive testing tasks suitable for automation.
• Identify test cases which are required to be executed on multiple devices.
• Identify test cases which cover basic functionality and will be easy to automate (e.g. smoke tests covering installation and login).


3. Prepare test environment and tools
• Acquire test devices and emulators.
   o Select popular devices.
• Set up a private test cloud or acquire public test cloud access (if a device test cloud is needed).
• Set up devices and emulators.
   o For emulators – set up parameters (e.g. OS version, screen size, pixel density, memory, architecture, size of SD card memory) to reflect the targeted devices.
   o For real devices – update or downgrade the OS to the desired version. Clear the device of unneeded applications or install widely used applications (e.g. Messenger or Twitter), based on what you want to test29.
• Acquire software supporting mobile testing (e.g. tools for UI automation, security or performance testing).
• Set up a build server for continuous integration.

4. Functional testing
• Run automated tests on the build server.
• Perform manual tests on the selected devices.
   o Support manual testing with in-application bug reporting tools.
• Consider how often the automated tests will be executed on the device cloud.

5. Usability testing
• Do not forget that usability is a key factor in application success.
• Start testing usability with sketches and prototypes in early project phases.
• Test during the whole development process, test often.

6. Performance testing
• Test the application for its responsiveness, resource usage and launch time.

7. Reliability testing
• Test if the application handles interruptions (5.1.1) correctly.

29 E.g. for reliability testing it is better to test the application on a “messy” device running several background applications.

• Test if the application handles network errors, fluctuations, airplane mode and switching between Wi-Fi and cellular networks.

8. Security testing
• Test the application for data leakage and transport security.
• Test how hard it is to decompile the application binaries.

9. Transferability testing
• Test if the application is able to run on supported devices and operating system versions – consider testing on a public device cloud.
• Use developer previews of new operating systems and test the application on the new OS version before it goes public.

10. Other non-functional testing
• Test localization, accessibility and other aspects related to the application.

11. Application store policies compliance testing
• Make sure that the application complies with all platform-specific guidelines and store policies.
• Prepare your application for release.
• Use automated validation tools.

12. Application monitoring
• Monitor application crashes, downloads and user behavior.
• Track user reviews and ratings on the application store.
• Measure application quality characteristics.

These guidelines are not comprehensive but capture the most important steps. Every application is different and requires different testing tools and approaches.


8 SafeQ Terminal Demo

8.1 Motivation

YSoft SafeQ is a server-based print management system for multifunctional printers (MFPs). The functionality of SafeQ for a common user is best shown on the MFP’s terminal (display) itself. The problem is that MFPs are big and heavy, and it is difficult to bring them to the customer. SafeQ Terminal Demo is a mobile application that solves this problem by simply imitating the behavior of the real printer.

8.2 Analysis

8.2.1 Stakeholders

The main stakeholders of SafeQ Terminal Demo are: the thesis supervisor and technical consultant (both from Y Soft), the developer (author of this thesis), Y Soft sales professionals and partners, management, the CEO, employees, YSoft SafeQ users, potential Y Soft customers and MFP vendors.

8.2.2 Requirements

This part describes general requirements for the application, which should bring unique value to the stakeholders.

These initial requirements were:
R1: Application shall mimic the YSoft SafeQ system on the Konica Minolta MFP.
R1.1: Application shall contain the Konica Minolta native and browser-based terminal.
R1.2: Application shall support print and scan workflows.
R1.3: Application shall support billing codes and the payment system.
R2: Application shall be able to run on mobile devices.

The mentioned requirements came from the thesis supervisor and the technical consultant. Another requirement originated from the author of this thesis, who required Windows as a target platform. This requirement was not inconsistent with any other requirement or stakeholder expectation.


8.2.3 Solution design

The first important consideration is that MFPs have a seven-inch (or larger) display with hardware buttons around it, which are also important for operating the MFP and should be present in the mobile application. This eliminates all smartphones with a screen smaller than six inches diagonally, because the controls would be too small to touch and poorly visible. Thus, the application should target mainly tablets.

According to all requirements, SafeQ Terminal Demo was implemented as a native Windows Store application, which is convenient because it can run on a wide range of devices from PCs to low-cost tablets.

Konica Minolta (KM) MFP was chosen as the first vendor for implementing the mobile demo application. Y Soft provides two versions of the terminal for KM – the native YSoft SafeQ embedded terminal and the browser-based YSoft SafeQ embedded terminal.

An additional requirement emerged at the end of the development lifecycle in relation to the decision to implement the application for KM:
R3: Application shall be available for all MFP vendors supported by Y Soft. Otherwise, it shall not be publicly available.

This requirement originated from the Y Soft management, who were concerned that other MFP vendors would take this as a competitive advantage for Konica Minolta and a violation of partnership agreements.

However, this requirement, which would have been a blocker, was in the end changed to:
R3 (modified): Application functionality shall be provided only to authorized persons; for others, nothing shall indicate that the application is only for Konica Minolta.

8.2.4 Functional requirements specification

Functional requirements for the application were written as user stories with acceptance criteria in the given-when-then form. An example user story of the SafeQ Terminal Demo application is:

3 Story: Print all waiting jobs
As a user
I want to simulate printing all waiting jobs
In order to show how easy and quick it is to print all waiting jobs with YSoft SafeQ

3.1 Scenario: Print all waiting jobs directly after login to the terminal
Given user is on the login screen and has enabled the Print All option
When user logs in to the terminal
And selects a billing code
Then print progress window should appear with the list of printing jobs
And all compatible jobs should be moved from the waiting folder to the printed folder

All user stories can be found in Appendix A.

8.2.5 Non-functional requirements specification

N1: Transferability
N1.1: Application must be able to run on Windows 8.1 and Windows RT 8.1.
N1.2: User must be able to install the application from the application store.
N1.3: Application shall be installable from the Windows Store.
N1.4: Application shall be updateable from the Windows Store.
N1.5: Application shall be uninstallable.

N2: Compatibility
N2.1: Application shall support these screen resolutions: 1280x800, 1366x768, 1920x1080.
N2.2: Application shall be able to be minimized and maximized.
N2.3: Application shall show notifications from the OS and other applications correctly.
N2.4: Application shall comply with all Windows Store policies.

N3: Performance
N3.1: Application shall launch in less than three seconds on a device with a 1.6 GHz mobile processor.
N3.2: Delays between different application screens shall be less than 300 ms on a device with a 1.6 GHz mobile processor.
N3.3: Application shall use less than 15% of the CPU on a device with a 1.6 GHz mobile processor, to maximize battery life.
N3.4: Application shall consume less than 100 MB of memory.

N4: Usability
N4.1: Application shall be controllable by mouse and touch screen interchangeably.
N4.2: Application shall offer remembering of the login password.
N4.3: Application shall support only the landscape orientation.


N4.4: Network errors, which cause loss of monitoring data, should not be presented to the user.

N5: Security
N5.1: Application shall transfer monitoring data over SSL.
N5.2: Application shall require user authentication when the application is launched.
N5.3: The authentication shall require only the password.

N6: Reliability
N6.1: Network errors, fluctuations and airplane mode shall not have any effect on the application.
N6.2: Windows 8.1 update shall not have an effect on the application.

N7: Maintainability
N7.1: Application shall contain a help page.
N7.2: The terminal browser shall be reusable in applications for other vendors.
N7.3: Application shall contain a manual for side-loading installation (installation without the Windows Store).

N8: Other non-functional requirements
N8.1: The terminal browser shall be prepared for localization (extensibility).
N8.2: Application shall contain a login screen (functional suitability).

8.3 Architecture

The application architecture is divided into several assemblies. The core assembly is a Windows portable library containing entity classes and the majority of the business logic. This portable library is written in C# and can be used on Android or iOS as well, with the multiplatform framework Xamarin described in section 2.3.

Another part is a shared project which contains views and visual assets for the browser-based terminal. A shared project allows code and asset sharing across compatible platforms. It behaves like a common library but does not have its own assembly; therefore, it must be used within a Windows or Windows Phone application project. Moreover, the browser-based terminal is customizable – it allows turning the payment system, billing codes or the scan application on or off.

Therefore, the implementation of the Terminal Demo application for another MFP vendor (e.g. Xerox or Ricoh) can reuse this project, since the other vendors have only browser-based terminals.


The next two assemblies belong to the Windows and Windows Phone applications. Dependencies between all assemblies are shown in Figure 6.

Figure 6: Component Diagram.

8.3.1 MVVM

Model-View-ViewModel (MVVM) is an architectural pattern separating the user interface, application state and data. This pattern derives from Model-View-Controller (MVC). MVVM was developed to simplify event-driven programming. One of the biggest disadvantages of the classic event-driven style was code behind the view, closely coupled with the GUI.

As the name suggests, MVVM consists of three main parts:

• View is the user interface.
• Model represents data and business objects.
• View-Model connects the view with the model and holds the application state.

The model does not have to represent data directly. In many cases, the model represents a service layer which contains business logic and mediates data access. This approach is convenient if the application holds data in a database or consumes them from web services. That is not the case of Terminal Demo; therefore, the model holds data directly.

In the SafeQ Terminal Demo application, models and view-models are both written in C# and views in the markup language XAML.

Properties of GUI controls in the view are declaratively bound to view-model properties. Every view-model implements the interface INotifyPropertyChanged, which defines the event PropertyChanged. A change in a view-model property is propagated to the view immediately after this event is raised with the property name. Changes in the GUI are also propagated into the view-model (e.g. user text input into a string property of the view-model). The ObservableCollection type serves as a source for binding collections.
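A minimal view-model sketch illustrating this mechanism (the class and property names are illustrative, not taken from the actual application code):

using System.Collections.ObjectModel;
using System.ComponentModel;

public class JobListViewModel : INotifyPropertyChanged
{
    private string selectedJobName;

    // Bound collections raise their own change notifications
    // when items are added or removed.
    public ObservableCollection<string> JobNames { get; private set; }

    public event PropertyChangedEventHandler PropertyChanged;

    public JobListViewModel()
    {
        JobNames = new ObservableCollection<string>();
    }

    // Bound property: raising PropertyChanged refreshes the XAML view.
    public string SelectedJobName
    {
        get { return selectedJobName; }
        set
        {
            if (selectedJobName != value)
            {
                selectedJobName = value;
                OnPropertyChanged("SelectedJobName");
            }
        }
    }

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}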

A piece of a class diagram from the SafeQ Terminal Demo application illustrating MVVM:

Figure 7: MVVM.

The biggest advantage of MVVM is testability. View-model code can be covered by unit tests; the model (e.g. IPrintJob from Figure 7) can be simply mocked to isolate the view-model.
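A sketch of such a unit test with a hand-written fake follows; the shape of IPrintJob and the view-model behavior are assumptions made for illustration, not the actual application code:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Assumed minimal shape of the model interface from Figure 7.
public interface IPrintJob
{
    string Name { get; }
    void Print();
}

// Hand-written fake isolating the view-model from the real model.
public class FakePrintJob : IPrintJob
{
    public string Name { get; set; }
    public bool Printed { get; private set; }
    public void Print() { Printed = true; }
}

// Hypothetical view-model taking the model as a dependency.
public class PrintViewModel
{
    private readonly IPrintJob job;
    public PrintViewModel(IPrintJob job) { this.job = job; }
    public void Print() { job.Print(); }
}

[TestClass]
public class PrintViewModelTests
{
    [TestMethod]
    public void Print_MarksJobAsPrinted()
    {
        var job = new FakePrintJob { Name = "report.pdf" };
        var viewModel = new PrintViewModel(job);

        viewModel.Print();

        Assert.IsTrue(job.Printed);
    }
}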

Most of the functionality in SafeQ Terminal Demo is handled in view-models, but not all. As is apparent from the description above, MVVM is used mainly for data manipulation. If the developer sticks with the Windows design guidelines and uses standard patterns, it is easy to have clean views without code-behind. SafeQ Terminal Demo is a very specific application because it has to imitate the real software in the MFP, which is not compliant with the Windows Store design guidelines, and there are no navigation patterns common to Windows Store or Windows Phone applications.

Therefore, I decided to handle navigation in the view’s code-behind. With only a few lines of code in each view, it is easier and more understandable, but it is hard to test. A pure MVVM solution would be problematic, with a complicated structure and more lines of code; on the other hand, it would be easier to test. The last point is that testing page-to-page navigation in the view-model would not be enough due to the possible complexity of the design. The view-model state does not have to correspond to what the user sees on the screen if the binding is incorrect. Thus, navigation should be tested manually or with automated UI scripts anyway.
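A sketch of such code-behind navigation (the handler name is illustrative; Frame.Navigate is the standard Windows Store page navigation call):

// Code-behind of a view: navigation handled directly in the event
// handler instead of the view-model, as discussed above.
private void PrintApplicationButton_Click(object sender, RoutedEventArgs e)
{
    // Navigate the current frame to the page with waiting jobs.
    this.Frame.Navigate(typeof(JobListPage));
}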

8.4 Functional testing

Most unit tests were designed to test view-models and some model classes like PrintJob.cs. The mocking framework MoqaLate30 was used to isolate the code under test.

Automated UI tests were used because testing view-models cannot cover complex workflows. Ranorex (6.2) seemed to be very helpful and sped up the creation of UI tests for Windows Store applications. However, the Ranorex trial license expires after 30 days, so the tests were developed with the Microsoft Coded UI Tests framework31.

During the creation of UI tests with several tools (chapter 6) and with Microsoft Coded UI, I observed several things:

• More bugs were found during test creation than during test execution.
• UI tests were hard to maintain.
• If a UI test fails, it is due to a bug in the test (not in the application) in more than half of all cases.

These findings are already known, but I also observed that passing UI tests gave me more subjective confidence in the product than unit or integration tests.

8.5 Non-functional testing

8.5.1 Performance

Most of the performance tests were conducted with the performance tools in Microsoft Visual Studio.

8.5.1.1 CPU usage

Test steps:

• Set the build configuration to Release mode (an attached debugger can distort results).

30 https://moqalate.codeplex.com/
31 https://msdn.microsoft.com/en-us/library/dd286726(v=vs.100).aspx

• Run the simulator.
• Deploy the application to the simulator with the CPU usage tool from the Debug menu.
• Perform acceptance testing scenarios or identify sections where CPU usage could be excessive.
   o A possible improvement is to wait a second between each action to isolate the event handling routines (it will be easier to determine which action causes the CPU workload).
• Exit the application, stop the measurement.

Results are shown in an interactive chart. Under the chart is a list of methods running in the selected time window and their CPU utilization. Self CPU refers to the time spent in the particular method itself, and total CPU refers to the overall time needed to perform the method.

Analysis

Figure 8 shows the CPU usage in scenario 2.1 Print from waiting folder. The first peaks in the CPU utilization chart were caused by the startup routine. The next and biggest peak, which is selected in Figure 8, was caused by initializing the view-models after clicking the “go to the terminal” button. The following peaks are small and keep the CPU usage under 10%.

Improvements

View-model initialization is already performed asynchronously by the method CreateVMs. A possible improvement could be to run this method immediately after the application is launched, spreading the peak over the time when the application is waiting for the user action.

Typical improvements implemented during the development were related to asynchronous method calls with this simple pattern:

public async void ResetAllAppSetting()
{
    // Fire-and-forget event handler: awaiting keeps the UI thread free.
    await ResetAsync();
}

private Task ResetAsync()
{
    // Run the expensive view-model initialization on a thread pool thread.
    return Task.Run(() => { CreateVMs(); });
}


Figure 8: CPU usage.

This improvement keeps the UI thread responsive. The first application prototype without asynchronous calls froze, and sometimes the process was killed by the OS.

Notes

Tests were performed with two different PC and simulator configurations. First, the tests ran on a clean simulator and computer where most unnecessary processes were disabled. Second, the same PC and simulator were used with several applications running and a video playing. The test results were very similar: some actions were a bit faster on the clean computer, but some were faster on the busy one. Thus, other applications do not affect CPU usage tests conducted with the Visual Studio performance tools.

8.5.1.2 UI responsiveness

UI responsiveness tests were performed in the same way as the CPU tests, but with the UI responsiveness tool.

Figure 9 shows a part of a UI responsiveness test where the user prints from the browser terminal. The meaning of the color bars in the top chart is the following:

• Blue bars show the percentage of time spent on parsing XAML files.
• Orange bars show the percentage of time spent in layout.
• Green bars show the percentage of time spent in application code.
• Gray bars show the percentage of time spent in other XAML services.

Figure 9: UI responsiveness before improvement.

The selected part of the chart represents the moment when the user taps the print application button in the browser-based terminal and the browser with waiting jobs starts loading. This was identified as a place for improvement.

Improvement

The method which causes the slow UI response also creates the page with waiting jobs. Creating the view has to be performed on the UI thread.

The chosen approach runs the problematic code in an asynchronous task, and the page creation is asynchronously dispatched to the UI thread:

await this.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Low, () =>
{
    // Create the view on the UI thread, but with low priority so that
    // pending input and rendering are handled first.
    waitingJobsPage = new JobListPage(…);
});

This improvement results in a nearly two times faster UI response. Figure 10 illustrates the same scenario after the changes in code.


Figure 10: UI responsiveness with improvements.

8.5.1.3 Energy consumption

The Energy consumption tool for Visual Studio was not working properly: after several tests, the tool did not recognize that the application uses the Wi-Fi adapter for sending monitoring data. Nevertheless, energy consumption is computed mainly from CPU and network utilization, which are tested separately.

8.5.1.4 Memory

The Visual Studio Memory usage tool was used for tracing the application heap. The test was performed as follows:

1) Deploy the application to the local machine in release mode with the Memory usage tool.
2) Perform actions to complete the chosen scenario (manually or with UI automation).
   a) Take a memory snapshot after each user action.
   b) When the number of allocated objects increases, try to force the garbage collector (see the sketch below).
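Forcing a full collection before taking a snapshot can be done with the standard .NET pattern shown in this small helper:

using System;

static class MemoryTestHelper
{
    // Forces a full garbage collection so that the following memory
    // snapshot contains only objects that are really still reachable.
    public static void ForceFullCollection()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect(); // collect objects released by the finalizers
    }
}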


Results

Memory used by the application was around 60 MB (Figure 11) and remained almost unchanged during several tested scenarios. The garbage collector seems to work properly, and increases or decreases of allocated memory were always reasonable given the nature of the application.

Figure 11: Memory snapshots.

8.5.1.5 Launch time

Launch time was measured with a simple coded UI test, which can be run from the console with the vstest.console.exe utility. The tests were performed fifteen times on each of two different devices. Results are shown in Table 3.

Device                          Processor                        Memory   Average launch time
Umax VisionBook 8Wi, tablet     Intel BayTrail-T Quad-Core       1 GB     1.91 s
                                1.8 GHz
Lenovo Flex 2, laptop           Intel Core i5 4200U 1.6 GHz      8 GB     1.35 s

Table 3: Launch times.
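A minimal sketch of such a launch-time measurement; LaunchApplicationUnderTest and WaitForMainPage are hypothetical helpers standing in for the coded UI automation plumbing:

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LaunchTimeTests
{
    [TestMethod]
    public void MeasureLaunchTime()
    {
        var stopwatch = Stopwatch.StartNew();

        LaunchApplicationUnderTest(); // hypothetical: starts the application package
        WaitForMainPage();            // hypothetical: blocks until the main page is rendered

        stopwatch.Stop();
        Trace.WriteLine("Launch time: " + stopwatch.Elapsed.TotalSeconds + " s");
    }

    // Hypothetical helpers; a real test would use the coded UI
    // automation APIs to start and observe the application.
    private static void LaunchApplicationUnderTest() { }
    private static void WaitForMainPage() { }
}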


8.5.1.6 Package size

The unpacked APPX file should be checked for the presence of unnecessary files. In the case shown in Figure 12, the library Microsoft.Diagnostic was left in unintentionally, because the source code was not properly cleaned of diagnostic methods used for testing.

Figure 12: Package size.

The application package size was 9,952 KB without any optimization, and most space was occupied by visual assets. A possible improvement was to compress the images, because most of them were in the PNG format. After converting most images to JPEG with an acceptable compression ratio (the deterioration of image quality was not recognizable), the package size shrank to 4,292 KB.

8.5.1.7 Network performance

Because the Terminal Demo application contains a monitoring service that uses the Internet connection, its network usage must be tested.

The tests were performed with ARO and its data collector for Windows Store applications, consistent with the AQuA performance tests ID 1.2 and 1.3 [55]. The test steps were the following:

1) Set up the test environment (x86 Windows 8 tablet, Java 6, WinPcap, Netmon).
2) Open ARO and the data collector for Windows 8.
3) Kill all tasks.
4) Start the application.
   a) Navigate to the home screen (for test ID 1.2, periodic transfers).
   b) Navigate through the application for 20 minutes (for test ID 1.2, periodic transfers).
5) Stop the ARO data collector.
6) Open the captured traffic (.pcap file) in the ARO Data Analyzer and wait for the report.


8.5.2 Transferability

T1: Test design specification for transferability
T1.1: Test that the application can be installed on Windows 8.1 and Windows RT 8.1.
T1.2: Test that the application can be downloaded and installed from the Windows Store.
T1.3: Test that the application can be downloaded and installed from the Windows Store.
T1.4: Test that the application can be updated from the Windows Store.
T1.5: Test that the application can be uninstalled.

The application was installed on several different devices (tablets, PCs, virtual machines and simulators). Installation, uninstallation and updates from the Windows Store were smooth.

8.5.3 Security

T5: Test design for security

T5.1: Test that the application transfers data only over SSL.
T5.2: Test that the application requires the password after every application launch.

The application passed the basic network security test, which consisted of capturing and analyzing the network packets sent by the application from the simulator. The Google Analytics service, which is used for basic application monitoring, is properly configured to use HTTPS, and it is the only component using the network adapter.

Securing the application package was unnecessary because the application code does not contain any sensitive information and reverse engineering would not have any business impact.

8.5.4 Reliability

T6: Test design for reliability
T6.1: Test that network errors, fluctuations or airplane mode do not affect the application.

Network interruptions do not affect the application. When the Google Analytics service cannot transfer monitoring data, or some error occurs during the transfer, it fails silently without any effect on the application; the monitoring data are not transferred, which does not matter.


The application was also tested for interruptions from the various sources mentioned in 5.1.1. The vulnerable spots identified were the moments when the application switches to a new page and must load data (e.g. starting the print application – print jobs must load). However, this exploratory approach did not reveal any bug.

8.5.5 Usability

T4: Test design for usability
T4.4: Test that the application can be controlled by mouse and by touch screen.
T4.5: Test that the application offers password remembering.
T4.6: Test that the application supports only the landscape orientation.
T4.7: Test that network errors are not shown to the user.

The application’s look and controls were given by the real YSoft SafeQ application. When the project started, it was necessary to decide which buttons and terminal features would be implemented and which would not. Some features which are not directly bound to YSoft SafeQ (e.g. the help page) were prototyped and discussed with the consultant.

8.5.6 Windows Store compliance

Before the upload to the Windows Store, the application was prepared according to the pre-publish procedure described in 5.6. The SafeQ Terminal Demo application passed all the Windows Certification Kit tests and was approved for the Windows Store. The whole approval process took approximately one hour.


9 Conclusion

Mobile application testing brings a lot of challenges. Frequent OS updates, platform and device fragmentation and many other aspects contribute to the low quality of many mobile applications. This diploma thesis investigated these traits and described the testing environment, techniques and tools for improving mobile application quality.

During the thesis elaboration, I learned a lot about mobile application platforms, development and testing. I developed several small applications using the native languages or the Xamarin framework in order to investigate the testing specifics of each platform. All tools I tried and describe in chapter 6 were tested with native, web and hybrid applications separately on each mobile OS. I had devices with Android, iOS and Windows Phone available all the time I was working on the thesis. I used analytical tools and manually browsed the application stores to reveal user expectations. Furthermore, I tested many common mobile applications to find common mobile application weaknesses. There was a lot to discover, and the main findings were presented in this thesis.

The SafeQ Terminal Demo application was successfully approved for the Windows Store, and the application project is designed and prepared for the implementation of other MFP vendors. Moreover, the application code in the portable library can be reused in an Android or iOS SafeQ Terminal Demo application with the Xamarin framework.

When the SafeQ Terminal Demo was already released, the Y Soft management ordered its removal from the store, because it would be a competitive advantage for Konica Minolta. It was a real-life lesson that mistakes in the stakeholder analysis and requirements design phases are very expensive to repair when the product is already released. Fortunately, we found a solution, and after several changes the application was released again and is now available in the Windows Store.


10 Bibliography

[1] "Mobile Experience Benchmark," [Online]. Available: pages.crittercism .com/rs/crittercism/images/crittercism-mobile-benchmarks.pdf. [Accessed 28 1 2015].

[2] Ratnakar, N., "Software Testing Help: Beginner’s Guide to Mobile Application Testing," 15 2 2015. [Online]. Available: http://www.softwaretestinghelp.com/beginners-guide-to-mobile-application-testing/. [Accessed 1 5 2015].

[3] Gillenwater, Z. M., "Examples of flexible layouts with CSS3 media queries," 15 12 2010. [Online]. Available: http://zomigi.com/blog/examples-of-flexible-layouts-with-css3-media-queries/. [Accessed 29 4 2015].

[4] "Foraker labs: Choosing between responsive web design and a separate mobile site to improve mobile visitor’s experience.," 17 4 2013. [Online]. Available: http://www.foraker.com/choosing-between-responsive-web-design- and-a-separate-mobile-site-to-improve-mobile-visitors-experience/. [Accessed 20 3 2015].

[5] W3C, "Standards for Web Applications on Mobile: current state and roadmap," 1 2015. [Online]. Available: http://www.w3.org/Mobile/mobile-web-app-state/. [Accessed 1 5 2015].

[6] Cowart, J., "Developer Economics: Pros and Cons of the Top 5 Cross-Platform Tools," 12 11 2013. [Online]. Available: http://www.developereconomics.com/pros-cons-top-5-cross-platform-tools/. [Accessed 6 4 2015].

[7] "Apache Cordova," [Online]. Available: https://cordova.apache.org/. [Accessed 5 4 2015].


[8] "Mono Project," [Online]. Available: http://www.mono-project.com/. [Accessed 20 1 2015].

[9] Reynolds, C., "Appindex: Ten of the Best Cross-Platform Mobile Development Tools for Enterprises," 16 10 2014. [Online]. Available: http://appindex.com/blog/ten-best-cross-platform-development-mobile-enterprises/. [Accessed 1 5 2015].

[10] Dalmasso, I.; Datta N.; Christian, B.; Nikaein N., "Survey, Comparison and Evaluation of Cross Platform Mobile Application Development Tools," [Online]. Available: http://www.academia.edu/6486265/Survey_Comparison_and_Evaluation_of_Cross_Platform_Mobile_Application_Development_Tools. [Accessed 10 4 2015].

[11] "Smartphone OS Market Share, Q3 2014," IDC, [Online]. Available: http://www.idc.com/prodserv/smartphone-os-market-share.jsp. [Accessed 29 12 2014].

[12] "Open Handset Alliance," [Online]. Available: http://www. openhandsetalliance.com/. [Accessed 20 3 2015].

[13] Rajapakse, D. C., "Fragmentation of mobile applications," 28 4 2008. [Online]. Available: http://www.comp.nus.edu.sg/~damithch/df/device-fragmentation.htm. [Accessed 1 5 2015].

[14] "Android: Launch Checklist," Google, [Online]. Available: http://developer.android.com/distribute/tools/launch-checklist.html. [Accessed 12 2 2015].

[15] Oberheide, John, "Dissecting the Android Bouncer," 21 6 2012. [Online]. Available: https://jon.oberheide.org/blog/2012/06/21/dissecting-the-android-bouncer/. [Accessed 5 2 2015].

[16] "HTG Explains: Does Your Android Phone Need an Antivirus?," [Online]. Available: http://www.howtogeek.com/129896/htg-explains-does- your-android-phone-need-an-antivirus/. [Accessed 8 2 2015].

[17] Sherman J., "Examiner: Apple iPhone app vs. Google Android App submission," 27 1 2014. [Online]. Available: http://www.examiner.com/article/apple-iphone-app-vs-google-android-app-submission. [Accessed 30 4 2015].

[18] "FreeBSD.org: FreeBSD Handbook – Who uses BSD?," [Online]. Available: https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook /nutshell.html. [Accessed 4 5 2015].

[19] "Apple: App Store Distribution," Apple, 16 2 2015. [Online]. Available: https://developer.apple.com/support/appstore/. [Accessed 18 2 2015].

[20] Bradley, T., "Forbes: Apple iOS Is More Fragmented Than It Seems," 21 6 2013. [Online]. Available: http://www.forbes.com/sites/tonybradley/2013/06/21/apple--is-more-fragmented-than-it-seems/. [Accessed 20 2 2015].

[21] "Daily Tech: Microsoft Smartphone Sales Up 28 Percent as Lumia Budget Models Gain Ground," 27 1 2015. [Online]. Available: http://www.dailytech.com/Microsoft+Smartphone+Sales+Up+28+Percent+as+ Lumia+Budget+Models+Gain+Ground/article37106.htm. [Accessed 25 3 2015].

[22] "Microsoft: 71 percent of Windows Phone apps are installed on low memory devices," Windows Central, 23 12 2014. [Online]. Available: http://www.windowscentral.com/microsoft-71-percent-windows-phone-app- downloads-go-low-memory-devices. [Accessed 3 3 2015].

[23] Mobile Developer's Guide To The Galaxy, Enough Software, 2015.

[24] "MSDN: Certify your app," Microsoft, [Online]. Available: https://msdn.microsoft.com/en-us/library/windows/apps/hh694079.aspx. [Accessed 29 4 2015].

[25] Blair, H. F., "GeekWire: Apps in an hour? Windows Phone certification just got a lot faster," 17 2 2014. [Online]. Available: http://www.geekwire.com/2014/windows-phone-app-certification-just-got-lot-faster/. [Accessed 30 4 2015].

[26] Jeesmon, J., Beginner's Guide for Mobile Applications Testing, 2015, Kindle edition.


[27] Keynote, "Testing Strategies and Tactics for Mobile Applications," [Online]. Available: http://www.keynote.com/resources/white-papers/testing-strategies-tactics-for-mobile-applications. [Accessed 20 4 2015].

[28] Kumar, M.; Chauhan, M., "Infosys: Best Practices in Mobile Application Testing," [Online]. Available: http://www.infosys.com/flypp/resources/Documents/mobile-application-testing.pdf. [Accessed 14 3 2015].

[29] Craig, R. D.; Jaskiel, S. P., Systematic Software Testing, Norwood, MA: Artech House Publishing, 2002.

[30] "ISTQB: Foundation Level Syllabus," 2011. [Online]. Available: http://www.istqb.org/downloads/finish/16/15.html. [Accessed 29 4 2015].

[31] Carver, D., "Unit Test Android Without Going Bald," 2015. [Online]. Available: http://www.slideshare.net/davidcarver7798/unit-test-android-without-going-bald. [Accessed 29 4 2015].

[32] "ISTQB: Standard Glossary of Terms Used in Software Testing Version 3.0," 26 3 2015. [Online]. Available: http://www.istqb.org/downloads /finish/20/194.html. [Accessed 3 5 2015].

[33] Kolawa, A; Huizinga, D, Automated Defect Prevention: Best Practices in Software Management, Wiley, 2007.

[34] Cohn, M., Succeeding with Agile, Addison-Wesley, 2009.

[35] Fowler, M., "Continuous Integration," 6 5 2006. [Online]. Available: http://www.martinfowler.com/articles/continuousIntegration.html. [Accessed 2 4 2015].

[36] Budhabhatti, M., "MSDN: Test-Driven Development and Continuous Integration for Mobile Applications," 2008. [Online]. Available: https://msdn.microsoft.com/en-us/library/bb985498.aspx. [Accessed 1 5 2015].

[37] "Xamarin: Introduction to Continuous Integration," [Online]. Available: http://developer.xamarin.com/guides/cross-platform/ci/intro_to_ci/. [Accessed 20 4 2015].


[38] "ISO/IEC 25010: Software engineering – Software product Quality Requirements and Evaluation (SQuaRE) Quality model," 2008.

[39] "Compuware: What Users Want from Mobile," 2011. [Online]. Available: http://e-commercefacts.com/research/2011/07/what-usrs-want- from-mobil/19986_WhatMobileUsersWant_Wp.pdf. [Accessed 5 5 2015].

[40] Arbon, J, App Quality: Secrets for Agile App Teams, Kindle edition, 2015.

[41] "Mobile Apps: What Consumers Really Need and Want," Dynatrace, 2012.

[42] "Smashingmagazine: What Every App Developer Should Know About Android," 2 10 2014. [Online]. Available: http://www.smashingmagazine.com/2014/10/02/what-every-app- developer-should-know-about-android/. [Accessed 30 4 2015].

[43] "Hewlett Packard: HP Research Reveals Nine out of 10 Mobile Applications Vulnerable to Attack," 13 11 2013. [Online]. Available: http://www8.hp.com/us/en/hp-news/press-release.html?id=1528865#. VTNXFCGqqkr. [Accessed 16 4 2015].

[44] "OWASP Mobile Security Project: Top 10 Mobile Risks," 2014. [Online]. Available: https://www.owasp.org/index.php/OWASP_Mobile_Security_ Project#tab=Top_10_Mobile_Risks. [Accessed 3 4 2015].

[45] Kalra, G., "Mobile Application Security testing," [Online]. Available: http://www.mcafee.com/us/resources/white-papers/foundstone/wp-mobile-app-security-testing.pdf. [Accessed 5 4 2015].

[46] "FireEye: SSL Vulnerabilities: Who listens when Android applications talk?," 20 8 2014. [Online]. Available: https://www.fireeye.com/blog/threat- research/2014/08/ssl-vulnerabilities-who-listens-when-android-applications- talk.html. [Accessed 10 4 2015].

[47] "OWASP: Lack of Binary Protections," 25 3 2015. [Online]. Available: https://www.owasp.org/index.php/Mobile_Top_10_2014-M10. [Accessed 6 5 2015].


[48] Pavlov, N., "Non-Functional Testing on Mobile devices," Nordic Testing Days, 2012. [Online]. Available: http://www.nordictestingdays.eu/sites/default/files/NTD2012%20Presentations/Non-functional%20testing%20on%20mobile%20devices%20-%20Nikolai%20Pavlov.pdf. [Accessed 4 4 2015].

[49] "Android Developers: Preparing for Release," [Online]. Available: http://developer.android.com/tools/publishing/preparing.html. [Accessed 3 5 2015].

[50] "iOS Developer Library: App Distribution guide," [Online]. Available: https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/App DistributionGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40 012582-CH1-SW1. [Accessed 1 5 2015].

[51] "MSDN: App submission checklist," [Online]. Available: https://msdn.microsoft.com/en-us/library/windows/apps/hh694062.aspx. [Accessed 1 5 2015].

[52] "ISTQB Exam Certification: Mobile application development and testing checklist," [Online]. Available: http://istqbexamcertification.com/mobile- application-development-and-testing-checklist/. [Accessed 5 5 2015].

[53] "SmartBear: Tips to Approach Mobile testing," [Online]. Available: http://www.ministryoftesting.com/wp-content/uploads/2014/01/SmartBear- Mobile-Testing-Checklist-V2.pdf. [Accessed 5 5 2015].

[54] "RapipValue: Mobile Application Testing: Step-by-Step Approach," 12 12 2012. [Online]. Available: http://www.rapidvaluesolutions.com/mobile- application-testing-step-by-step-approach/. [Accessed 6 5 2015].

[55] "AQuA: Performance Testing Criteria for Android and iOS Applications," 2015. [Online]. Available: http://www.appqualityalliance .org/files/AQuA_performance_testing_criteria_FINAL_july_7_2014.pdf. [Accessed 2 4 2015].

[56] "OpenSignal: Android Fragmentation Visualized," OpenSignal, 1 8 2014. [Online]. Available: http://opensignal.com/reports/2014/android- fragmentation/. [Accessed 15 1 2015].


[57] "Apple developer: App review," Apple, [Online]. Available: https://developer.apple.com/app-store/review/. [Accessed 15 2 2015].

[58] Wroblewski, L., Mobile First, Ingram, 2011

[59] "Android: Testing Fundamentals," [Online]. Available: http://developer.android.com/tools/testing/testing_android.html. [Accessed 1 3 2015].

[60] "MSDN: Using the Windows App Certification Kit," [Online]. Available: https://msdn.microsoft.com/library/windows/apps/hh694081.aspx. [Accessed 29 4 2015].

[61] "Mobile Developer's Guide to The Galaxy," Enough Software, 14 2 2014. [Online]. Available: http://www.enough.de/fileadmin/uploads/dev_guide _pdfs/MobileDevGuide_14th.pdf.

[62] "MSDN: Common Language Runtime (CLR)," [Online]. Available: https://msdn.microsoft.com/en-us/library/8bs2ecf4(v=vs.110).aspx. [Accessed 29 4 2015].


Appendix A – User stories

1 Story: Choose SafeQ features

As a user
I want to choose whether the terminal will have billing codes and the payment system enabled or disabled
In order to show different functionality of SafeQ to various customers

1.1 Scenario: Enable both payment system and billing codes
Given user is on the login screen
When user logs in with PIN "1111"
Then billing codes screen should appear
And billing codes should be presented in scan application
And current balance should be presented in scan application

1.2 Scenario: Enable payment system and disable billing codes
Given user is on the login screen
When user logs in with PIN "2222"
Then printer main menu should appear
And current balance should be presented in print application

1.3 Scenario: Enable billing codes and disable payment system
Given user is on the login screen
When user logs in with PIN "3333"
Then billing codes screen should appear
And billing codes should be presented in scan application
And credit information is not presented in scan application

1.4 Scenario: Disable billing codes and disable payment system
Given user is on the login screen
When user logs in with PIN "4444"
Then printer main menu should appear
And billing codes are not presented in scan application
And credit information is not presented in scan application


2 Story: Printing

As a user
I want to perform print, print all and delete actions from different print folders in the same way as on an actual printer
In order to show print functionality of SafeQ to various customers

2.1 Scenario: Print jobs from waiting folder
Given user is on the waiting jobs folder screen and has some jobs selected
When user taps the print button
Then print progress window should appear with names of jobs that are being printed
And selected jobs are moved to printed folder

2.2 Scenario: Print jobs from printed folder
Given user is on the printed jobs folder screen and has some jobs selected
When user taps the print button
Then print progress window should appear with the names of jobs that are being printed
And all jobs should stay in the printed folder
And no new job should be added to printed folder

2.3 Scenario: Print jobs from favorite folder
Given user is on the favorite jobs folder screen and has some jobs selected
When user taps the print button
Then print progress window should appear with names of jobs that are being printed
And selected jobs should be copied to printed folder

2.4 Scenario: Print all jobs from waiting, printed or favorite folder
The scenarios are the same as scenarios 2.1, 2.2 and 2.3, but the user does not need to have any jobs selected.

2.5 Scenario: Delete print jobs from print folder
Given user is on the print folder screen and has some jobs selected
When user taps the delete button


Then confirmation window should appear with the number of jobs to delete and their names
And after tapping Ok on the confirmation window, selected jobs should disappear from current folder

3 Story: Print all waiting jobs

As a user
I want to print all waiting jobs
In order to show how easy and quick it is to print all waiting jobs with SafeQ

3.1 Scenario: Print all waiting jobs after login to the terminal
Given user is on the login screen and has selected the Print All Yes button
When user logs in to the terminal
And selects billing code (if billing codes are enabled)
Then print progress window should appear with the list of printing jobs
And all compatible jobs should be moved from waiting folder to printed folder

4 Story: Scanning

As a user
I want to simulate scan
In order to show scanning features of SafeQ

4.1 Scenario: Simulate document scanning by the selected workflow
Given user is on the scan workflow screen and has some workflow selected
When user taps the start button
Then scanning popup should appear

4.2 Scenario: Setup scan workflow parameters
4.2.1 Sub-scenario: Show workflow parameters screen
Given user is on the scan workflow screen
When user taps on the parameters button
Then scan options should appear


4.2.2 Sub-scenario: Setup recipient
Given user is on the workflow parameters screen
When user taps on the recipient field
Then input recipient window should appear with a predefined email

5 Story: Copy

As a user
I want to simulate copying of documents
In order to show how to copy with SafeQ

5.1 Scenario: Copy jobs
Given user is on the copy screen
When user taps the start button
Then copy popup should appear
And copy job should be added to jobs history


Appendix B – SafeQ Terminal Demo screenshots

Figure 13: SafeQ Terminal Demo – native terminal.

Figure 14: SafeQ Terminal Demo – browser based terminal.

