DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS, STOCKHOLM, SWEDEN 2017

A comparison between cross- and native development for mobile applications

THIMMY LARSSON

JONAS WEDIN

KTH ROYAL INSTITUTE OF TECHNOLOGY, SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION

A comparison between cross-compiler and native development for mobile applications

THIMMY LARSSON & JONAS WEDIN

Bachelor's thesis in Computer Science
Date: June 4, 2017
Supervisor: Michael Schliephake
Examiner: Lennart Bladgren
Swedish title: En jämförelse mellan multiplattforms-kompilator och standard utveckling för mobilapplikationer
School of Computer Science and Communication

Abstract

Developing mobile applications for several platforms is a challenge for developers today. Supporting multiple applications with separate code bases is expensive and time consuming. To solve this problem, the Cross-Compiler technique is available to developers. This thesis investigates the performance and developer experience of native applications on Android and iOS and of applications created with the Cross-Compiler Xamarin. An application is defined and developed in order to test multiple hardware features on the different platforms. The test results show that Xamarin and native development each have their own advantages; however, the overall conclusion is that the advantages of Xamarin outweigh those of native development.

Sammanfattning

Utveckling av mobilapplikationer för flera plattformar är en utmaning för dagens utvecklare. Underhåll av flera applikationer med olika kodbaser är dyrt och tidskrävande. För att lösa det problemet används Cross-Compilers. Den här uppsatsen undersöker prestanda och utvecklarupplevelse mellan att utveckla nativa applikationer och applikationer utvecklade med Cross-Compilern Xamarin. En applikation är definierad och utvecklad som testar olika hårdvarukomponenter på de olika plattformarna. Resultatet visar att Xamarin och nativa applikationer har olika fördelar, men den övergripande slutsatsen är att Xamarin är överlägset nativ utveckling.


Abbreviation | Explanation
OS | Operating System
CPT | Cross-Platform Tool
CC | Cross-Compiler
API | Application Programming Interface
HTML | Hypertext Markup Language
CSS | Cascading Style Sheets
IDE | Integrated Development Environment
OHA | Open Handset Alliance
DVM | Dalvik Virtual Machine
ART | Android Runtime
IL | Intermediate Language
JIT | Just-In-Time (compilation)
AOT | Ahead-Of-Time (compilation)
SDK | Software Development Kit
APK | Android Application Package
UI | User Interface
ARM | Advanced RISC Machine (processor architecture)
NuGet | Package manager for the .NET platform
GPU | Graphics Processing Unit
CPU | Central Processing Unit
JSON | JavaScript Object Notation
URL | Uniform Resource Locator (web address)

Contents

Contents
List of Figures

1 Introduction
  1.1 Problem statement
  1.2 Scope
  1.3 Purpose

2 Background
  2.1 Cross-Platform Tools
    2.1.1 Web
    2.1.2 Hybrid
    2.1.3 Interpreter
    2.1.4 Cross-Compiler
  2.2 Operating Systems
    2.2.1 Android
    2.2.2 iOS
  2.3 Xamarin
  2.4 Related Work
  2.5 Trepn Profiler
  2.6 Instruments

3 Method
  3.1 What hardware to test
  3.2 Implementation process
  3.3 Testing process
  3.4 Development experience
    3.4.1 IDE experience
    3.4.2 Documentation & Language
    3.4.3 Learning curve

4 Result
  4.1 Developer experience
    4.1.1 iOS - Xcode
    4.1.2 Android - Android Studio
    4.1.3 Xamarin - Xamarin Studio
    4.1.4 Summary
    4.1.5 Learning curve
  4.2 Test results
    4.2.1 Android
    4.2.2 iOS

5 Discussion
  5.1 Future work

6 Conclusion

Bibliography

A Test Implementations
  A.1 Bubble sort
  A.2 Finding primes
  A.3 Copy lists
  A.4 Http request
  A.5 Server implementation
    A.5.1 JSON response
    A.5.2 Images

B Http results data for iPhone 6 without Instruments

List of Figures

2.1 OS market shares
2.2 Android stack
2.3 Android version distribution
2.4 Android compile process
2.5 iOS version distribution
3.1 Work flow in applications
4.1 Comparing Android and Xamarin for Bubble sort
4.2 Result table for Bubble sort on Android
4.3 Comparing Android and Xamarin for Copy lists
4.4 Result table for Copy lists on Android
4.5 Comparing Android and Xamarin for Copy lists for all 10 runs
4.6 Comparing Android and Xamarin for finding primes
4.7 Result table for finding primes on Android
4.8 Comparing Android and Xamarin for Http requests
4.9 Result table for Http requests on Android
4.10 Comparing Android and Xamarin and the total running time
4.11 Result table for total running times on Android
4.12 Comparing Android and Xamarin with regard to application size
4.13 Result table for application size on Android
4.14 Comparing Android and Xamarin with regard to lines of code
4.15 Result table for code size on Android
4.16 Used memory comparison for Android and Xamarin
4.17 Result table for memory usage on Android
4.18 Comparing native iOS and Xamarin iOS for Bubble sort
4.19 Result table for Bubble sort on iOS
4.20 Comparing iOS and Xamarin for Copy lists with Instruments running in the background
4.21 Result table for Copy lists on iOS
4.22 Comparing iOS and Xamarin for Copy lists for all 10 runs
4.23 Comparing iOS and Xamarin for finding primes with Instruments running in the background
4.24 Result table for finding primes on iOS
4.25 Comparing iOS and Xamarin for finding primes for all 10 runs
4.26 Comparing iOS and Xamarin for Http requests with Instruments running in the background
4.27 Result table for Http requests on iOS
4.28 Comparing iOS and Xamarin and the total running time with Instruments running in the background
4.29 Result table for total running time on iOS
4.30 Comparing Android and Xamarin with regard to application size
4.31 Result table for application size on iOS
4.32 Comparing iOS and Xamarin with regard to lines of code
4.33 Result table for code size on iOS
4.34 Used memory (dirty) comparison for iOS and Xamarin
4.35 Used memory (residential) comparison for iOS and Xamarin
4.36 Result table for memory usage on iOS
A.1 Native List View for Android and iOS

Chapter 1

Introduction

The mobile phone originally had the primary function of making phone calls; today, on the other hand, a multitude of functions can be utilized with a smartphone. Today 90% of the time spent on the phone is spent in applications [29]. Facebook, Twitter, Instagram and Snapchat are a few of the most used applications, where people can share their daily life with billions of users around the world. Applications have made the smartphone a critical companion that many people always keep with them. According to TechCrunch, the smartphone had become more frequently used than the desktop by the end of 2014 [11]. The growth in application usage has increased the interest in mobile development, since this is a market where capital can be earned.

However, companies and developers that build smartphone applications face a common problem when deciding which platforms to support: code and technology cannot be shared between platforms. There are a number of platforms available on the market today: iOS, Android, Windows Phone, Blackberry and some other minor OSs (operating systems) [16]. Developing native applications for each individual platform is time consuming and expensive, since companies that want to reach all possible users have to develop a specific application for each platform. Considering aspects such as development time, cost and maintenance, developing the same application for several platforms becomes expensive and time consuming. The need to maintain several platforms also requires that the developers possess a wide range of knowledge.

To solve this problem, numerous technologies have arisen in the last couple of years. These technologies allow development for multiple platforms simultaneously using the same code and are called cross-platform tools (CPTs). CPTs aim to make the development process for multiple platforms easier because there is only one codebase to maintain, which theoretically should lead to fewer bugs and cleaner code.

1.1 Problem statement

This thesis examines the development of an application using two different techniques: native development and cross-platform tools. Development is measured by two criteria: how performance is affected by the two techniques, and how development time and experience differ between them.


Accordingly, the problem statement for this thesis is: "Is there an acceptable trade-off between performance and developer efficiency when building applications with cross-platform tools compared to native development?"

1.2 Scope

This thesis compares Cross-Compiler (CC) development, which is one of the four approaches to CPTs (Web, Hybrid, Interpreter and CC), with native development. CCs were chosen because they are the least affected by external factors (internet connection, abstraction layers, browser engine, etc.) and because, in theory, an application built with a CC could achieve the same performance as a native application. The problem is investigated by collecting data sets that describe how the applications perform; by minimizing these external factors the results become more straightforward to analyse. Xamarin is the CC tool that is examined. Data for the following factors is collected:

• Execution time
• Memory usage
• Size of application
• Lines of code
• Platform specific code
• Http request

Besides the performance comparison, the thesis investigates the CC development process and how it differs from native development. This part of the thesis is subjective, since it is based on personal experience; however, the results are correlated with previous research. The following factors are evaluated:

• Documentation & Language
• IDE experience
• Learning curve

This thesis focuses on the platforms Android and iOS.

1.3 Purpose

The potential savings in resources from using CPTs to publish applications on several markets at the same time may change the outcome for companies and developers, since they no longer have to use native development for their applications.

Since there are many competitors on the market, developers are challenged by the users' short usage time of their application. This can be critical for several companies, since application development is their main source of income. The attention span of users can be as short as 8 seconds, so application performance and first impression become important [6]. Factors such as memory usage, slow interaction between the user and the application, how well the application uses the phone's hardware, and battery usage affect how long the user will spend in the application. These factors are manageable for the developers and are therefore essential when choosing a CPT approach, since the approaches affect them differently. Since CPTs can decrease performance to different degrees [12][3], it is important for companies and developers to know these decrements compared to native applications, so that they do not risk their application being uninstalled because of these factors.

Chapter 2

Background

This section introduces the different types of CPTs available and the platforms they target. The frameworks that this thesis focuses on are presented in more detail. Finally, a review of previous research in the field is presented.

2.1 Cross-Platform Tools

There exist a few different approaches to developing applications for multiple mobile platforms, and each approach has its own advantages and disadvantages. The main CPT approaches are Web, Hybrid, Interpreter and Cross-Compiler. The advantages of using CPTs are [10]:

• Reduction of required skills - one specific language is used for development
• Reduction of code - there is only one codebase for multiple OSs
• Reduction of development time and long term maintenance costs
• Decrease of required API (Application Programming Interface) knowledge - only knowledge of the development tool's API is needed, not the native platforms'
• Increase of market share - as a natural consequence, the application can be pushed to a wide range of markets

2.1.1 Web

An application created with the Web approach is essentially a website that is accessed through the phone's web browser. The application is created with HTML, CSS and JavaScript and is downloaded and run in the smartphone's browser. The advantages of this approach are that the user is not required to install anything on their smartphone and that all the data the application uses is located on a remote server. This also makes application maintenance fast, since only the server needs to be updated.

However, the disadvantage of this approach is that the user is required to have an internet connection at all times.


If the user does not have a connection with unlimited data usage, the application can consume a considerable amount of data, depending on the application. The internet connection speed can also limit the application experience. The application can make some use of the phone's hardware through various interfaces available in the web browser, but is often limited by what the browser vendor has implemented [18][12][3][7].

2.1.2 Hybrid

The Hybrid approach uses the same concept as the Web approach, but the application is not executed in a web browser; instead it runs in a native container on the device (provided by the framework). To use the device's capabilities, the application uses an abstraction layer that exposes those capabilities as JavaScript APIs. With the Hybrid approach it is possible to exploit both the advantages of the browser engine and the device's capabilities. The abstraction layer makes the application appear as a native application to the user, even though the content is still loaded from a web server.

An advantage of this approach is that the application is distributed through the different platforms' application stores, unlike the Web approach, which is just a website accessed from the browser. The Hybrid approach also makes it possible to reuse the user interface across platforms while utilizing native features. Having the abstraction layer between the application and the native libraries also makes it possible to access device features such as the camera and GPS. The disadvantage of this approach is that application performance is limited by the browser engine, because all execution takes place in that specific engine [12][3][7].

2.1.3 Interpreter

An interpreted application is downloaded to the user's device, and the interpreter decides at runtime which code should run depending on the current device. The Interpreter approach uses an abstraction layer to make it possible to access native features. Unlike the Hybrid approach, where the abstraction layer sits between the application and the native libraries, the Interpreter approach places the abstraction layer between the native APIs and the interpreter itself. The Interpreter approach requires the user to install the application on the device, and the application appears native.

The advantage of this approach is that the application looks and feels more like a native application than a Hybrid application does. The logic and code can be reused across different platforms, and the interpreter makes it possible to use the native platform's capabilities through the framework's API. The disadvantages of this approach are that application performance is limited by the runtime interpretation of the code and that the development process is dependent on the selected framework [12][3].

2.1.4 Cross-Compiler

A Cross-Compiler takes the source code and outputs native code or binaries for each specific platform. The generated native code is thereafter compiled again for each of the different platforms.

The application is therefore dependent on the efficiency of the CC. The advantage of this approach is that there is no difference, to the user or the device, between code generated by the CC and code written specifically for the device (native code). CC applications also provide all the features that native applications provide, and performance is not affected to the same extent as with the other approaches. The disadvantage of this approach is that the user interface, access to hardware (such as the camera and GPS) and other specific features cannot be reused between platforms, which makes those parts platform specific during development [12][3][7].

2.2 Operating Systems

All CPTs are compatible with various platforms (OSs), and the number of operating systems has varied since the smartphone was released. The popularity of the platforms has also shifted depending on which smartphone brand holds the largest share of the market. Currently the market is beginning to stabilize. The most frequently used operating systems for smartphones today are Android and iOS, as seen in figure 2.1.

Figure 2.1: OS market shares[16]

2.2.1 Android

Android is an open-source operating system based on the Linux kernel. Development is done by the OHA (Open Handset Alliance) and is led by Google [25]. Android is available for handheld devices such as smartphones, tablets and watches.

Applications are executed in a DVM (Dalvik Virtual Machine) or ART (Android Runtime) at runtime, see figure 2.2 [22].

Figure 2.2: Details of the Android stack[22]

Native development

Android development is available on the most common computer platforms (Linux, Windows and OS X) and uses Android Studio as the development environment. Android is open source and applications are developed in Java. One of the challenges when developing for Android is the great number of different devices: according to Open Signal, their application was downloaded to 24,093 different Android devices between January and August 2015 [26]. As seen in figure 2.3, the Android versions in use are also widely spread. As a result, Android developers cannot use features of the latest version if they want to reach the majority of users; they must decide which client base they want to reach and then look at the features available for that version.

Version | Distribution (%)
2.3.3 - 2.3.7 | 1.0
4.0.3 - 4.0.4 | 1.0
4.1.x | 3.7
4.2.x | 5.4
4.3 | 1.5
4.4 | 20.8
5.0 | 9.4
5.1 | 23.1
6.0 | 31.3
7.0 | 2.4
7.1 | 0.4

Figure 2.3: Distribution of OS versions among Android devices[23]

Since Android supports multiple CPU architectures, it abstracts these with a virtual machine that turns Intermediate Language (IL) code into the native byte code used by the processor [21]. This was originally handled by the Dalvik Virtual Machine (DVM). DVM was designed to work on older hardware with (by today's standards) extreme restrictions [15]. It has been improved over the different versions of Android, gaining Just-In-Time (JIT) compilation in Android 2.2, which can optimize applications during runtime. In Android 5.0, DVM was replaced with a new virtual machine, the Android Runtime (ART), which supports Ahead-of-Time (AOT) compilation; this increases installation time for the application but decreases run time.

Native compilation for the Android platform is done in several phases. First the Java code is compiled with the Java compiler, which outputs (.class) files intended for the Java Virtual Machine (JVM). These files are parsed by another compiler that creates a Dalvik executable (.dex) file. This file is fully backwards compatible and can be executed on both DVM and ART. Depending on the virtual machine running on the Android device, the code is optimized for that specific virtual machine, as shown in figure 2.4. Backwards compatibility ensures that the developer does not need to know which virtual machine the device is using [13].

Figure 2.4: Detailed view of the Android compile process[4]

Android Studio is based on IntelliJ IDEA Community Edition, an IDE for Java, with special modifications for Android development. The IDE handles the build and compile stages using Gradle as its build and package manager, which also allows third-party plugins [35]. Bundled with the IDE are several tools that handle SDK management, simulators and more. An API is supplied that allows usage of hardware components such as Bluetooth and the camera. Android simulators can be executed on any of the computer platforms Android Studio supports and are connected to Android Studio to allow debugging and other operations.

2.2.2 iOS

iOS is Apple's operating system, developed for their own products, not only for smartphones but also for their tablets. iOS is closed source and only available on Apple products. iOS runs on a hybrid kernel.

Native development

Native development for iOS exclusively uses Apple products throughout the development process. The Xcode IDE (Integrated Development Environment) is only available for OS X on Apple computers. All the products used for development are closed source, with the exception of the Swift language. Traditionally, development for iOS used Objective-C, but in recent years Apple has moved towards their new language Swift. Apple's phones use their own processors from the "Apple AXX" series, of which the latest is the "Apple A10 Fusion". It is an ARM processor manufactured by TSMC [30]. Swift was developed as an alternative to Objective-C and has been open source since 3 December 2015. It is still managed by Apple as the "Project Lead", but involves much of the community in its development. Swift or Objective-C code is compiled with Xcode, which turns the code into the native byte code used by the processor. As seen in figure 2.5, almost all users update to and use the latest version of the OS, which means that developers can use features of the newest version and still reach most users.

Version | Distribution (%)
10.X | 62.3
9.X | 30.9
8.X | 2.4
7.X | 2.9
6.X | 0.3

Figure 2.5: Distribution of OS versions among iOS devices[36]

2.3 Xamarin

Xamarin, which is owned by Microsoft, uses its own IDEs for development (Visual Studio on Windows and Xamarin Studio on OS X). The language used is C#, and applications are designed in the IDE. The code is compiled to native assembly code for iOS and to a native APK (Android Application Package) for Android. Xamarin is used by 1.4 million developers around the world and has other applications in its suite, for testing etc. [8]. Xamarin has recently become open source [19].

Xamarin has three main APIs for development on mobile platforms (Xamarin.Forms, Xamarin.iOS and Xamarin.Android) that provide full support for all APIs available in Android and iOS [33]. Xamarin has "day one support", so when Android or iOS releases a new version it is supported in Xamarin the same day. Xamarin.Forms is a cross-platform UI toolkit that allows UI design to be shared over multiple platforms while using native components on every platform, which in turn allows all code to be shared between platforms. The design on each platform also adheres to that platform's visual guidelines [34]. This is made possible by Mono, which is a collection of tools for sharing code between platforms. Mono contains a compiler and frameworks for interacting with each platform's native APIs.

Development of Mono is done separately from Xamarin and is open source; it is sponsored by Novell, Xamarin, Microsoft and the .NET Foundation [17]. The one-to-one mapping of native platform APIs allows developers to use Xamarin when developing for a single specific platform; using Xamarin.Forms, however, developers can use one code base for several platforms. The applications are built as native applications, and Xamarin.Forms achieves this by setting up multiple projects when a new application is created. The main project is the Xamarin.Forms project, where the application is built; the other projects are essentially boilerplate code (that does not need to be modified) for running the application on each particular platform.

During compilation, optimizations for each platform are applied. First unused libraries are stripped out to reduce application size, then platform specific compilation is done. For iOS, the code is compiled down to ARM assembly code, since iOS does not allow runtime generated code. On the Android platform the code is compiled to an Intermediate Language (IL) and runs alongside the Android runtime in a virtual machine from the Mono tools (MonoVM) [32]. Both of these compilation techniques are similar to what is used for native development on the respective platform.

The simulators used for each native platform are also used by Xamarin; however, if you are not on an Apple platform (which is required for testing/compiling iOS applications), you can either connect to a physical Apple machine or use a cloud service that provides one. Xamarin also provides its own profiler (similar to Instruments for iOS or Device Monitor for Android), but it is only available for enterprise customers.

Third-party plugins are available via NuGet, which is a package manager for the Microsoft development platform. NuGet is maintained by a community of users and works closely with the .NET Foundation [20]. Xamarin provides multiple products besides the IDE: integration with cloud based services and automatic testing on actual mobile devices are offered, but not free of charge. A small illustrative sketch of the shared UI approach is shown below.
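As an illustration of the shared code idea, the sketch below shows what a minimal Xamarin.Forms page defined in the shared project could look like; the same class is then rendered with native controls on both Android and iOS. It is a simplified, assumed example and is not taken from the application developed in this thesis.

```csharp
using Xamarin.Forms;

namespace SharedUiSketch
{
    // A page defined once in the shared Xamarin.Forms project.
    public class StartPage : ContentPage
    {
        public StartPage()
        {
            var startButton = new Button { Text = "Start tests" };
            startButton.Clicked += (sender, e) =>
            {
                // Navigation and test logic would go here.
            };

            Content = new StackLayout
            {
                Padding = 20,
                Children =
                {
                    new Label { Text = "Benchmark application" },
                    startButton
                }
            };
        }
    }

    // The shared Application class; the platform-specific projects only
    // contain the boilerplate needed to launch it.
    public class App : Application
    {
        public App()
        {
            MainPage = new StartPage();
        }
    }
}
```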

2.4 Related Work

A lot of previous work has been done in the field. One of the most relevant studies was done by Marc Armgren, who compares native performance with Xamarin performance. Three different benchmarks were run (UI, computational and network), executed on both the iOS and Android platforms. In the UI benchmark, a few different components (list, image and dialog) available in Xamarin.Forms are tested against native components. Time is measured as how long it takes for the component to be rendered visually on the screen. The tests show that there is no significant difference between native rendering and Xamarin.Forms, with the exception of the image component on Android, which is rendered faster in the native implementation [1].

The results of the computational benchmark show a significant difference between native execution and Xamarin. For iOS the native implementation is approximately 4000% faster, and for Android approximately 75% faster. The author argues that the Xamarin platform does not compile and optimize code as well as native compilation does [1].

Network performance is tested with a low-level implementation using stream sockets.

Benchmarking is achieved by measuring the time it takes to send and receive packages between a local server and a smartphone. Each platform has its own implementation of the network component using interfaces, probably because the cross-platform network tools (such as HttpClient from the Mono library) are too high level to deal with sockets. The tests show that on the iOS platform Xamarin is significantly faster than the native implementation, while on Android the native and Xamarin implementations perform equally [1]. Marc Armgren concludes by recommending the Xamarin platform for applications that do not require heavy computations [1].

Claes Barthelson evaluates Xamarin by creating an application and documenting the process. The overall impression of Xamarin in the paper is positive; however, the author points to a few flaws in Xamarin related to a specific component (the map component) that does not have the necessary functionality when using the Xamarin.Forms implementation. He also comments on some of the good things about Xamarin, including the official documentation and guides, the time and resources saved by developing cross-platform, and the fact that Xamarin is constantly improving and evolving [2].

An experimental study by Emil Lygnebrandt and Johnathan Holm measures the UI performance of Xamarin versus native development. Although their results show that using Xamarin (and especially Xamarin.Forms) decreases performance, they conclude that the effect is small enough not to influence the decision whether to use Xamarin or not [9].

Previous research on how to evaluate Java IDEs has been done by Dujmović and Nagashima, who explain how to evaluate IDEs using the LSP method. They present a list of criteria that the LSP method looks for when evaluating an IDE, which they believe reflects the needs of a typical software developer of an enterprise application [5].

2.5 Trepn Profiler

Trepn Profiler, developed by Qualcomm Technologies, is a profiling application that runs on Android devices. It allows the user to record data about the phone's status during interaction with an application. Several different data points can be recorded, such as GPU and CPU frequency, network usage and battery power. By default Trepn collects data every 100 ms, and according to Qualcomm Technologies an estimated 20-30% of a quad-core CPU is used during recording [27][28].

2.6 Instruments

Performance profiling for Apple devices is provided by Instruments, which comes bundled with the Xcode IDE. It allows monitoring and recording of different data points, either from a simulator running on the same computer or from a physical Apple device. Instruments can monitor several different data points simultaneously, and the recordings can be saved and analyzed later. Information about the CPU, GPU, memory allocations and network activity are a few of the data points that can be measured [14].

Chapter 3

Method

This section explains the method used in this thesis. First, it explains how the criteria were set up for each hardware component that was tested and which algorithms were implemented to benchmark these components. Thereafter, it explains how the design and structure of the implementations were decided, so that the applications built in the different IDEs follow the same strict structure and work in exactly the same way. Lastly, it describes how the tests were carried out for each platform.

3.1 What hardware to test

An application's performance is decided by several factors; hardware performance, however, is limited by the phone's CPU, GPU, utilization of the cache, storage and other hardware components. The GPU was not considered, because graphical performance depends on what the specific application displays to the user. The CPU and cache, on the other hand, are components that are consistently used regardless of the graphics, since they are utilized in every instruction executed by the core. Nowadays it is also common for applications to use the internet and fetch information that resides on servers. Therefore, the network hardware and drivers on the phone can be a critical factor for the performance perceived by the user.

Since the hardware specifications are static for each individual phone, these factors will be evaluated by implementations of algorithms designed to stress the components. To stress the utilization of the cache and the CPU, algorithms with heavy calculations that can force the cache to swap content will be used. The utilization of the network hardware will be benchmarked by an algorithm that performs http requests and thereafter downloads information. The algorithms that have been implemented are either well-known algorithms or algorithms constructed to suit the situation.

The CPU is used in every operation executed by the device. By implementing an algorithm that issues many heavy instructions to the core, the utilization level of the CPU can be evaluated. For this purpose, Bubble sort will be implemented. Bubble sort is a sorting algorithm with time complexity O(n²) that executes a lot of instructions, forcing the CPU to work.


Since Bubble sort is a sorting algorithm, we can also utilize its potential interaction with the cache by sorting large objects. For technical implementation information about Bubble sort, read appendix A.1.

To examine the CPU workload while isolating the cache involvement, an algorithm with only heavy calculations will be used. The Sieve of Eratosthenes algorithm is used to find prime numbers; it issues small but numerous instructions to the CPU, forcing it to work. For technical implementation information about Finding primes, read appendix A.2.

To stress the cache, an algorithm for summarizing lists has been designed, given the name Copy list. The purpose of this algorithm is to sum the elements of two separate lists and put the results in a third list. If the lists are allocated and summed in a specific way, we can theoretically make sure that the cache will miss. For technical implementation information about Copy list, read appendix A.3.

The network hardware in the phone can be benchmarked in several ways. However, one of the most common ways applications use the internet connection is to download JSON objects or objects such as pictures. Therefore an algorithm that stresses the network hardware through http requests has been designed. The algorithm makes a request to a server to download a JSON object with further information about which picture to download, and the information is displayed in a list view on the phone's display. For technical implementation information about Http request, read appendix A.4. A simplified sketch of the three computational tests is given below.
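The following C# sketch illustrates the kind of logic the three computational tests build on. It is only an illustration based on the descriptions above, not the exact implementations used in the thesis (those are specified in appendix A); the array sizes and names are chosen here purely for the example.

```csharp
using System;
using System.Diagnostics;

static class ComputationSketch
{
    // Bubble sort: O(n^2) comparisons and swaps keep the CPU busy,
    // and sorting a large array also involves the cache.
    static void BubbleSort(int[] a)
    {
        for (int i = 0; i < a.Length - 1; i++)
            for (int j = 0; j < a.Length - 1 - i; j++)
                if (a[j] > a[j + 1])
                {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
    }

    // Sieve of Eratosthenes: many small instructions issued to the CPU.
    static int CountPrimes(int limit)
    {
        var composite = new bool[limit + 1];
        int count = 0;
        for (int p = 2; p <= limit; p++)
        {
            if (composite[p]) continue;
            count++;
            for (int m = p + p; m <= limit; m += p)
                composite[m] = true;
        }
        return count;
    }

    // Copy list: element-wise sum of two lists into a third list,
    // intended to stress the cache for large lists.
    static int[] CopyLists(int[] first, int[] second)
    {
        var result = new int[first.Length];
        for (int i = 0; i < first.Length; i++)
            result[i] = first[i] + second[i];
        return result;
    }

    static void Main()
    {
        var rng = new Random(1);
        var data = new int[10000];
        for (int i = 0; i < data.Length; i++) data[i] = rng.Next();

        var sw = Stopwatch.StartNew();
        BubbleSort(data);
        Console.WriteLine("Bubble sort: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        int primes = CountPrimes(1000000);
        Console.WriteLine("Primes found: " + primes + " (" + sw.ElapsedMilliseconds + " ms)");

        var listA = new int[1000000];
        var listB = new int[1000000];
        sw.Restart();
        var summed = CopyLists(listA, listB);
        Console.WriteLine("Copy list: " + sw.ElapsedMilliseconds + " ms (" + summed.Length + " elements)");
    }
}
```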

3.2 Implementation process

The implementations of the algorithms are carried out in three different IDEs in order to publish the application to Android and iOS. The Android implementations are done in Android's native environment, Android Studio, and in Microsoft's CC environment, Xamarin Studio. Likewise, the iOS implementations are developed in iOS's native environment, Xcode, and in Xamarin Studio. Before the development phase in each individual IDE, criteria for how the application should run and look were set, to make sure that the applications look and execute in exactly the same way. To make sure the algorithms work and execute in the same way, strict pseudo code was constructed; read appendix A for the specification.

Figure 3.1: The work flow that all applications follow

The layout of the applications (as seen in figure 3.1) was designed so that all the tests are executed sequentially after the user presses a start button on the main screen. When the button is pressed, the main screen transitions to the Bubble sort screen, where the settings and the lists to sort are set up. Bubble sort is then executed 10 times for each list, and the execution time of each individual sorting run is collected and stored. For Bubble sort pseudo code, read appendix A.1. Afterwards the Bubble sort screen transitions to the Sieve of Eratosthenes screen (Finding primes), where the test objects are set up before execution. The test is executed 10 times and the execution time of each run is collected and stored. For Sieve of Eratosthenes pseudo code, read appendix A.2. The Sieve of Eratosthenes screen (Finding primes) then transitions to the Copy list screen, which follows the same procedure. For Copy list pseudo code, read appendix A.3. To minimize deviating measurement data, all of these tests are executed 10 times.

The Copy list screen thereafter transitions to the Http request screen (the screen with list items seen in figure 3.1). The Http request test is only executed once, but the test involves seven stages of http requests. Each stage consists of first a request to download a JSON object from a server, and thereafter requests to download the pictures that the JSON object contains information about. For the server implementation specification, read appendix A.5. When all information and pictures are downloaded, the information contained in the JSON object, together with the pictures, is displayed in a list view (see the Http request screen in figure 3.1).

The difference between the stages is the amount of data requested and downloaded from the server: the first stage requests one picture, and the following stages request 10, 50, 100, 200, 400 and 500 pictures, in ascending order. The execution time of each stage is collected and stored; the timer starts before the JSON object request and stops when all list items are displayed in the list view. For Http request pseudo code, read appendix A.4. When the seven stages have been executed, the Http request screen transitions to the Sending results to server screen, see figure 3.1, where all collected data (execution times) is sent to a server for analysis. A sketch of how such a staged request could look is shown below.
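To make the staged http test concrete, the sketch below shows one way a single stage could be timed with .NET's HttpClient: a JSON object is requested first and the referenced images are then downloaded. This is only an assumed illustration; the server URL, the query parameter and the JSON parsing helper are placeholders and are not taken from the thesis (the actual pseudo code is given in appendix A.4 and the server specification in appendix A.5).

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

static class HttpStageSketch
{
    static readonly HttpClient Client = new HttpClient();

    // Runs one stage: fetch a JSON object describing 'count' images,
    // then download each image, and return the elapsed time in milliseconds.
    static async Task<long> RunStageAsync(string baseUrl, int count)
    {
        var stopwatch = Stopwatch.StartNew();

        // Placeholder endpoint that returns JSON listing image URLs.
        string json = await Client.GetStringAsync(baseUrl + "/images?count=" + count);

        // In the real application the JSON is parsed and the items are shown
        // in a list view; here the parsing is only hinted at.
        foreach (string imageUrl in ParseImageUrls(json))
        {
            await Client.GetByteArrayAsync(imageUrl);
        }

        stopwatch.Stop();
        return stopwatch.ElapsedMilliseconds;
    }

    // Placeholder for JSON parsing (a real project might use Json.NET here).
    static string[] ParseImageUrls(string json)
    {
        return new string[0];
    }

    static void Main()
    {
        // The seven stages used in the test, in ascending order.
        int[] stages = { 1, 10, 50, 100, 200, 400, 500 };
        foreach (int count in stages)
        {
            long ms = RunStageAsync("http://localhost:8080", count).GetAwaiter().GetResult();
            Console.WriteLine(count + " image(s): " + ms + " ms");
        }
    }
}
```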

3.3 Testing process

The testing process consists of two phases. First, the test process described in section 3.2 is executed 20 times with the native Android application and thereafter 20 times with the Xamarin Android application, for each Android phone. The same process is repeated for each iPhone: 20 executions of the native Xcode iOS application and of the Xamarin iOS application. The second phase is performed in exactly the same way, but with each platform's data collector running in the background, Trepn for Android and Instruments for iOS.

The following procedure is used to ensure that the application runs with as few other applications and/or processes interfering as possible. The network is also isolated for the same purpose. A release version of the application is built so that all compiler optimizations are used.

1. Close all other applications running on the phone using the standard method for the phone (double press on the home button for iOS and the display-running-applications button for Android).
2. Activate flight mode with WiFi still enabled.
3. Connect to a controlled network (that only has the server and the phone active, with no outgoing internet connection).
4. Build a release version of the application and deploy it to the phone.
5. If there is to be additional monitoring (Trepn or Instruments), start it now.
6. Run the tests for each platform 20 times to minimize deviating measurement data.

3.4 Development experience

The development experience is measured by a number of criteria for each individual IDE and programming language, in order to make a fair, albeit subjective, assessment. The criteria are chosen to reflect how challenging the IDE and the language are for someone who has not used them before. The criteria are based on personal opinion of what an IDE should provide, correlated with Dujmović and Nagashima's criteria.

3.4.1 IDE experience

Evaluation of the IDE for each platform is done through these criteria. All points will be summarized.

Criterion | Explanation | Scale
Syntax | Standard colour scheme. 1 - No obvious distinction between different parts of the code; 5 - Excellent keyword support with contrast between colours. | 1-5
Linter | 1 - No automatic linter; 5 - Linter that runs in the background and is not intrusive. | 1-5
Debugger - Variables | Is there a variable list to track variables when debugging? | Y/N
Debugger - Steps | Does the debugger allow step in/out/over? | Y/N
Creating projects | 1 - Few options with poor descriptions, hard for a beginner to know what to choose; 5 - Clear descriptions for all options, easy to know what to choose. | 1-5
Adding more views | 1 - The IDE does not handle everything automatically, intervention from the developer is required; 3 - The IDE connects all pieces (XML/XAML) automatically from a wizard or guide. | 1-3
Completeness of error messages | 1 - Non-specific messages with no row number or file; 5 - The IDE provides row number, column and file, and suggests an automatic fix. | 1-5
Importing libraries | 1 - No help with importing missing libraries, only an error message; 3 - The IDE gives an error message and offers a quick fix (to import the library). | 1-3
Refactoring | 1 - The IDE offers no refactoring; 3 - Non-breaking refactoring across the whole project. | 1-3
Platform compatibility | Platforms available are OS X, Windows and Linux. 1 - One platform; 2 - Two platforms; 3 - All three platforms. | 1-3
Build time | 1 - Builds/compiling can happen at any time and interfere with the workflow; 5 - Builds do not interfere with the normal workflow. | 1-5
Stability | 1 - Crashes or freezes more than two times during a normal workday (8 h); 5 - Did not crash during the entire project. | 1-5
Auto backup | In case of a crash, is any (not manually saved) data lost? | Y/N
Package Manager | 1 - No package manager; 3 - No GUI, requires manual text editing of a configuration file; 5 - The IDE provides a GUI and automatically notifies about available updates. | 1-5
Testing - Unit tests | Does the IDE support unit testing? | Y/N
Testing - UI tests | Does the IDE support UI testing? | Y/N
Visual design and code | Connecting a component from the visual design to running code. 1 - Requires manual editing in both the design (XML/XAML) file and the source code; 5 - Created automatically from a menu in the IDE. | 1-5

3.4.2 Documentation & Language

Evaluation of the documentation and language for each platform is done with these criteria.

Criterion | Explanation | Scale
Getting started | 1 - No official tutorial or guide (easily found); 5 - Tutorial/guide that shows how to create an application with multiple views and the connection from visual design to source code, with links to more advanced tutorials. | 1-5
API Reference | 1 - Not updated, hard to find information if the developer does not know what to search for; 5 - Finding useful information is easy even if the developer does not know the exact name, and the API reference is always up to date. | 1-5
Examples in docs | 1 - No code examples in the documentation; 3 - All functions/methods have code examples in the documentation. | 1-3
Language activity | 1 - The community has no way of influencing the development of the language; 5 - Active discussions between maintainers and the community when introducing new features. | 1-5
Backward compatibility | 1 - Major version numbers mean backwards-breaking changes; 5 - No versions break backwards compatibility. | 1-5
Language maturity | 1 - "Young", implementations change with new versions of the language; 5 - "Mature", the language itself never changes and implementations are "set in stone". | 1-5

3.4.3 Learning curve

A general overview of how steep the learning curve was during the development of the application. A discussion about guides and tutorials, along with pitfalls and other properties encountered during the development, is provided for each platform.

Chapter 4

Result

4.1 Developer experience

4.1.1 iOS - Xcode

IDE Experience

Xcode was run on a MacBook Air (13-inch, mid 2012) with macOS Sierra.

Criterion | Comment | Score
Syntax | Standard syntax highlighting makes it difficult to distinguish between comments and code | 2
Linter | Works in the background, but it is too fast: error messages about variables being unused appear before there is a chance to use them | 2
Debugger - Variables | - | Y
Debugger - Steps | - | Y
Creating projects | Options are divided into categories depending on device, with descriptive images for each project template | 4
Adding more views | Created via a menu, but requires manual intervention to connect the view to a source code file | 1
Completeness of error messages | Trivial errors such as a wrong variable name, as well as runtime errors, are displayed with good information; however, build errors are not informative and require investigation | 3
Importing libraries | The IDE does not recommend a library to import, however it auto-completes when importing libraries | 2
Refactoring | Not available for the Swift language | 1
Platform compatibility | Only available for OS X | 1
Build time | Fast compiling when debugging (around 8 s) | 5
Stability | Occasional crash and freeze | 4
Auto backup | - | Y
Package Manager | Editing text files is required, several alternatives available | 3
Testing - Unit tests | - | Y
Testing - UI tests | - | Y
Visual design and code | Drag and drop to create connections, not intuitive | 3
Total | | 31 p / 52 p, 5/5 Y

Documentation & Language

Criterion | Comment | Score
Getting started | Official tutorial from Apple | 5
API Coverage | The reference has good coverage, but is hard to search in | 3
Examples in docs | Has code examples, however not on all functions/methods | 2
Language activity | Active community group collaboration with Apple Inc. | 4
Backward compatibility | Breaking changes between Swift 2 and Swift 3 | 1
Language maturity | Swift is a young language that is still changing | 1
Total | | 16 p / 28 p

4.1.2 Android - Android Studio

IDE Experience

Android Studio was run on an ASUS UX32L with Ubuntu 16.04.

Criterion | Comment | Score
Syntax | The IDE provides 3 different colors for syntax highlighting, and highlights only the most important keywords | 2
Linter | Highlights every syntax error in a non-intrusive way | 5
Debugger - Variables | - | Y
Debugger - Steps | - | Y
Creating projects | Has many different activities to choose from, however no descriptions of them are available | 3
Adding more views | Accessible in many different ways, and the XML is linked to the activity automatically | 3
Completeness of error messages | Generally well defined error messages when exceptions occur, however it does not display any error message for certain run time errors. Build errors are not informative and require investigation | 3
Importing libraries | Highlights the missing library and offers an automatic include command | 3
Refactoring | Refactoring is accessed by right click | 3
Platform compatibility | - | 3
Build time | Extremely slow on Linux, release build times around 4 minutes and debug builds around 25 seconds | 1
Stability | Crashes around 3 times on average during a work day, freezes up to 100 times a day | 1
Auto backup | - | Y
Package Manager | No GUI | 3
Testing - Unit tests | - | Y
Testing - UI tests | Third-party library, official recommendation by Android | Y
Visual design and code | Manual editing of source code, "on click" selection for buttons in the visual design | 3
Total | | 33 p / 52 p, 5/5 Y

Documentation & Language

Criterion | Comment | Score
Getting started | Official tutorial by Android, describing and showing by video how to create a simple application | 4
API Reference | The API reference is easily accessed and up to date, Javadoc is also available | 5
Examples in docs | Has code examples, however not on all methods/functions | 2
Language activity | Active discussions between maintainers and the community when introducing new features | 5
Backward compatibility | Has not always been backwards compatible | 4
Language maturity | Mature language, only minor changes | 4
Total | | 24 p / 28 p

4.1.3 Xamarin - Xamarin Studio

IDE Experience

Xamarin Studio was run on a MacBook Air (13-inch, mid 2012) with macOS Sierra.

Criterion | Comment | Score
Syntax | Default theme has few colours but good keyword support | 3
Linter | Runs in the background, informative error messages | 4
Debugger - Variables | - | Y
Debugger - Steps | - | Y
Creating projects | Different categories with informative descriptions for each option | 5
Adding more views | Done from a menu, and connects source code to XAML | 3
Completeness of error messages | Trivial and runtime messages are very good, build messages require further investigation | 3
Importing libraries | The IDE recognizes missing libraries and suggests a quick fix for importing the library | 3
Refactoring | Accessible from a menu and previews changes | 3
Platform compatibility | Works on OS X and Windows | 2
Build time | Debug building is faster for Android than in Android Studio, iOS is similar | 4
Stability | - | 5
Auto backup | - | Y
Package Manager | NuGet has a GUI implementation and is configurable for each platform | 5
Testing - Unit tests | - | Y
Testing - UI tests | - | Y
Visual design and code | Naming in the XML code and calling from code with auto-completion | 2
Total | | 42 p / 52 p, 5/5 Y

Documentation & Language

Criterion | Comment | Score
Getting started | Good tutorial and webcasts for beginners | 5
API Coverage | Good compiled information about all platforms and APIs | 4
Examples in docs | Has code examples, however not on all methods/functions | 2
Language activity | C# is widely used and has an active community | 4
Backward compatibility | Subtle breaking changes between versions | 4
Language maturity | - | 5
Total | | 24 p / 28 p

4.1.4 Summary

IDE Experience | Score (Points) | Score (Y/N)
Xcode | 31p / 52p | 5p / 5p
Android Studio | 33p / 52p | 5p / 5p
Xamarin | 39p / 52p | 5p / 5p

Documentation & Language | Score (Points)
Xcode | 16p / 28p
Android Studio | 24p / 28p
Xamarin | 24p / 28p

4.1.5 Learning curve

iOS - Xcode

The learning curve for iOS is quite steep. The basics are covered by official tutorials, but more advanced or specialized implementations require further information gathering. Connecting visual components to source code is explained in the tutorials; however, many of the available options are ignored in the tutorial and the developer is told not to pay attention to them.

For more advanced subjects, such as working with threads or implementing specialized solutions because the standard iOS API does not fit the project, the official sources are not very informative and community resources (Stack Overflow, etc.) have to be consulted. Because of breaking differences between versions of the Swift language, solutions found in such resources are often outdated and need to be updated.

Android - Android Studio

The learning curve for Android Studio was moderate, since Android provides an informative getting-started tutorial and has wide community support that provides solutions. However, the IDE made the development process frustrating because of its slow interaction, freezes and crashes, which took a great deal of time.

When working with more complex solutions and interaction interfaces, the learning curve flattens out, since Android does not provide these kinds of solutions out of the box (for example a list view with items containing pictures and text together), so the developer has to customize them on their own, which requires a great amount of time to learn. This approach opens up more interface possibilities, but it requires the developer to have a profound understanding of the options.

Xamarin - Xamarin Studio

The C# language is similar to other object oriented languages (e.g. Java), so if a developer has experience with such languages the transition is smooth, as the syntax is very similar. The official tutorials provide a good foundation of knowledge for creating simple applications. Passing variables between views is done via class constructors, which is familiar from object oriented programming.

Asynchronous operations such as threads are abstracted by the C# language, and there are informative official tutorials on how to implement them. For more specialized implementations, when the Xamarin APIs are not a good fit for the project, the developer has to investigate sources and solutions other than the official ones. A small sketch of these two patterns is shown below.
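As an illustration of the two observations above, the sketch below shows data being passed to a view through its constructor and an asynchronous operation written with C#'s async/await. It is a generic, assumed example and not code from the application developed in this thesis; the class names and the upload endpoint are invented.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Xamarin.Forms;

// A page that receives its data through the constructor, the same pattern
// used when passing variables between views.
public class ResultsPage : ContentPage
{
    public ResultsPage(long totalMilliseconds)
    {
        Content = new Label { Text = "Total time: " + totalMilliseconds + " ms" };
    }
}

public static class ResultUploader
{
    // async/await hides the underlying thread handling, so the UI
    // stays responsive while results are sent to the server.
    public static async Task SendResultsAsync(string url, string json)
    {
        using (var client = new HttpClient())
        {
            await client.PostAsync(url, new StringContent(json));
        }
    }
}
```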

4.2 Test results

All tests are run 20 times; thereafter the results are summarized and the average run time is calculated together with the standard deviation (σ). Smartphones are used for the tests (as opposed to emulators) so that the measurements reflect real-world use. A sketch of this summarizing step is shown below.
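As a small illustration of that summarizing step, the sketch below computes the average and the sample standard deviation of a series of measured run times. It is an assumed example of the calculation, not the analysis code used in the thesis, and the input values are made up.

```csharp
using System;
using System.Linq;

static class SummarySketch
{
    // Mean and sample standard deviation (dividing by n - 1) of a run-time series.
    static void Summarize(double[] runTimesMs, out double mean, out double sigma)
    {
        double m = runTimesMs.Average();
        double variance = runTimesMs.Sum(t => (t - m) * (t - m)) / (runTimesMs.Length - 1);
        mean = m;
        sigma = Math.Sqrt(variance);
    }

    static void Main()
    {
        // Made-up execution times (ms) standing in for 20 measured runs.
        double[] runs =
        {
            12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.5, 12.0,
            11.9, 12.1, 12.2, 12.4, 11.8, 12.0, 12.3, 12.1, 11.9, 12.2
        };

        double mean, sigma;
        Summarize(runs, out mean, out sigma);
        Console.WriteLine("Average: " + mean.ToString("F2") + " ms, sigma: +/-" + sigma.ToString("F2") + " ms");
    }
}
```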

4.2.1 Android

All Android tests were run on a Sony Xperia X Performance F8131, which in the figures is marked with F8131. Each test is run with four different configurations: two with the native Android implementation (marked Android), one with Trepn running and one without, and the same two configurations with the Xamarin implementation (marked Xamarin). The reason for running the same implementation twice, with and without Trepn, is to evaluate whether there is any overhead when Trepn runs in the background.

Bubble Sort

Figure 4.1: Comparison between Android and Xamarin for running Bubble sort

Bubble Sort | Random array (ms) | σ | Worst array (ms) | σ
Android - F8131 (No Trepn) | 177.06 | ±14.91 | 133.41 | ±10.65
Xamarin - F8131 (No Trepn) | 339.85 | ±20.24 | 306.45 | ±18.02
Android - F8131 | 170.46 | ±1.64 | 128.70 | ±3.23
Xamarin - F8131 | 374.25 | ±28.45 | 337.90 | ±24.82

Figure 4.2: Result table for Bubble sort on Android

The Bubble sort algorithm was executed with two separate arrays of integers, a random array and a worst-case array; sorting the worst-case array should in theory have the longest execution time. Figure 4.1 shows that running Trepn increases performance for native Android, whereas Trepn with Xamarin Android decreases performance; however, considering the standard deviations in figure 4.2 there is no significant difference. Figure 4.2 shows a significant performance difference between Xamarin and Android of ≈ 163 milliseconds (Android blue, Xamarin red).

Copy Lists

Figure 4.3: Comparison between Android and Xamarin for running Copy lists

Copy lists | Execution time (ms) | σ
Android - F8131 (No Trepn) | 3.45 | ±1.40
Xamarin - F8131 (No Trepn) | 1.81 | ±0.22
Android - F8131 | 3.26 | ±1.58
Xamarin - F8131 | 2.09 | ±0.30

Figure 4.4: Result table for copy lists on Android

The execution time of the Copy lists test for the native Android application and the equivalent Xamarin application is shown in figure 4.3. It shows that Xamarin has a shorter execution time than native Android. The mean is lower during the Trepn tests, but when taking the standard deviation into account the tests are equivalent, as seen in figure 4.4. Even though the relative difference between the two implementations is large (Xamarin without Trepn is around 50% faster), the actual difference is only approximately 1.6 ms.

Figure 4.5: Comparison between Android and Xamarin for Copy lists, showing the time in ms for each run

In each execution of the Copy list test, the same algorithm is run 10 times. Figure 4.5 shows the average execution time in ms for each individual run when Copy list is executed 20 times. As shown in the figure, Android has a decreasing execution time for each run except for the tenth, where it increases to nearly the same level as the fourth run.

Finding Primes

Figure 4.6: Comparison between Android and Xamarin for finding primes

Finding primes | Execution time (ms) | σ
Android - F8131 (No Trepn) | 2.18 | ±0.22
Xamarin - F8131 (No Trepn) | 2.69 | ±0.67
Android - F8131 | 1.83 | ±0.33
Xamarin - F8131 | 2.86 | ±0.48

Figure 4.7: Result table for finding primes on Android

The execution time for finding primes in the native Android application and the equivalent Xamarin application is seen in figure 4.6. There is no significant difference between Xamarin and Android when running without Trepn, once the standard deviation is taken into account, and the difference between the tests with Trepn is about 0.22 ms, as seen in figure 4.7.

Http Requests

Figure 4.8: Comparison between Android and Xamarin for Http requests

Images | A-F8131 (No Trepn) | X-F8131 (No Trepn) | A-F8131 | X-F8131
1 | 191.19 ±128.44 | 1439.60 ±157.55 | 159.99 ±61.28 | 1664.95 ±171.72
10 | 285.45 ±209.27 | 378.30 ±114.20 | 284.67 ±150.44 | 363.20 ±139.57
50 | 1093.40 ±577.91 | 1118.35 ±302.11 | 995.38 ±842.20 | 1219.75 ±771.25
100 | 1868.66 ±488.44 | 2360.95 ±663.64 | 1742.42 ±967.38 | 2071.20 ±933.39
200 | 3888.00 ±1179.47 | 4284.60 ±663.64 | 2741.40 ±1387.45 | 3599.20 ±1203.12
400 | 8401.61 ±2922.89 | 8188.65 ±1796.44 | 5994.83 ±1627.86 | 7460.10 ±2370.41
500 | 10822.18 ±2185.08 | 10522.05 ±4051.41 | 8731.24 ±2012.70 | 9331.40 ±2565.43

Figure 4.9: Result table for http requests on Android

The execution times for performing http requests in the native Android application and the equivalent Xamarin application are shown in figure 4.8. In figure 4.9, A stands for Android and X for Xamarin. In the first test the Xamarin implementations are significantly slower; however, in the following tests there is no significant difference between any of the implementations.

Total Running Time

Figure 4.10: Comparison between Android and Xamarin and the total running time

Total running time | Time | σ
Android - F8131 (No Trepn) | 30027.43 | ±5609.52
Xamarin - F8131 (No Trepn) | 35382.06 | ±7096.44
Android - F8131 | 23996.70 | ±4912.58
Xamarin - F8131 | 33616.39 | ±6063.59

Figure 4.11: Result table for total running times on Android

The combined execution time of all tests is displayed in figure 4.10. When taking the standard deviation into account there is no significant difference between any of the tests, as seen in figure 4.11.

Size of application

Figure 4.12: Comparison between Android and Xamarin and the application size

Platform | Size (Mb) | % increase
Android | 4.98 | 11%
Android - Blank project | 4.48 | -
Xamarin | 25.73 | 13%
Xamarin - Blank project | 22.72 | -

Figure 4.13: Result table for application size on Android

The difference in application size is shown in figure 4.12. The native Android application is almost 20 Mb smaller than the Xamarin application. However, the relative increase in application size compared to a blank project is 11% for Android and 13% for Xamarin, as seen in figure 4.13.

Lines of code

Figure 4.14: Comparing Android and Xamarin with regard to lines of code

Platform                   Code                          XML/XAML                      Total
Android                    504                           146                           650
Android - Blank project    10                            16                            26
Xamarin                    436 (Platform specific: 0)    167 (Platform specific: 0)    603
Xamarin - Blank project    32 (Platform specific: 0)     68 (Platform specific: 0)     100

Figure 4.15: Result table for code size on Android

The difference in lines of code when comparing a newly generated project to the finished application is shown in figure 4.14. Even though Xamarin has more lines of code from the start, it has fewer lines of code than the native Android project when finished, as seen in figure 4.15. The line count does not include comments, and only files that are edited when developing the application are counted.

Memory usage

Figure 4.16: Used memory comparison for Android and Xamarin

The difference in memory usage when running the applications for Xamarin and Android is shown in figure 4.16. Trepn records the average memory consumption for the entire OS while interacting with the application. As seen in figure 4.17, the results show no significant difference between Xamarin and Android.

Platform    Average memory usage    σ
Android     2397.78                 ±66.64
Xamarin     2347.47                 ±54.30

Figure 4.17: Result table for memory usage on Android

4.2.2 iOS

All iOS tests were run on an iPhone 6 and an iPhone 7, marked iPhone 6 and iPhone 7 respectively in the figures. Native iOS tests, marked iOS, are run with two different settings on each phone: one with Instruments running in the background and one without. Xamarin tests, marked Xamarin, are also run with and without Instruments. However, Instruments made the Xamarin application crash during the Http request test, so no execution time data is available for that configuration, only Instruments data; therefore Xamarin with Instruments is not displayed in the figures. The reason for running the same implementation twice, with and without Instruments, is to evaluate whether there is any overhead when using Instruments in the background.

Bubble Sort

Figure 4.18: Comparison between native iOS and Xamarin iOS for running Bubble sort

Bubble Sort                        Random array    σ          Worst array    σ
iOS - iPhone 7                     118.44          ±1.18      63.89          ±0.15
iOS - iPhone 7 with instruments    121.28          ±22.72     64.54          ±1.07
Xamarin - iPhone 7                 243.70          ±1.26      243.25         ±0.44
iOS - iPhone 6                     482.67          ±3.56      231.84         ±1.08
iOS - iPhone 6 with instruments    686.44          ±199.92    221.23         ±73.57
Xamarin - iPhone 6                 804.95          ±139.69    785.15         ±135.52

Figure 4.19: Result table for Bubble sort on iOS

Figure 4.19 shows that native iOS is statistically significantly faster than Xamarin when executing Bubble sort on both iPhone 6 and iPhone 7. Figure 4.19 also shows that the standard deviation increases significantly when Instruments is running in the background, making the execution more unstable. The standard deviation for iPhone 7 with Instruments shows that it can be faster than the regular execution on iPhone 7; however, using Instruments on iPhone 6 makes it significantly slower. Figure 4.18 shows that the Worst case is significantly faster than the Random case on iOS but not with Xamarin, where the two cases give nearly the same result.

Copy Lists

Figure 4.20: Comparison between iOS and Xamarin for running Copy lists

Copy Lists                         Execution Time    σ
iOS - iPhone 7                     5.02              ±0.55
iOS - iPhone 7 with instruments    2.49              ±0.71
Xamarin - iPhone 7                 3.25              ±0.38
iOS - iPhone 6                     4.85              ±0.28
iOS - iPhone 6 with instruments    3.69              ±0.81
Xamarin - iPhone 6                 4.14              ±0.23

Figure 4.21: Result table for copy lists on iOS

Execution times for the Copy lists test for a native iOS application and the equivalent Xamarin application are shown in figure 4.20. As displayed in figure 4.21, executing Copy lists natively on iOS, for both iPhone 6 and iPhone 7, is significantly slower than executing it with Xamarin. As seen in figure 4.20, executing with Instruments is faster than both Xamarin and iOS without Instruments; however, the standard deviations in figure 4.21 show that the difference between Xamarin and iOS with Instruments is not statistically significant.

Figure 4.22: Comparison between iOS and Xamarin for running Copy lists, showing the execution time in ms for each run

In each execution of Copy lists the same algorithm is run 10 times. Figure 4.22 displays the average execution time in ms for each individual run when Copy lists is executed 20 times. As shown in the figure, iOS - iPhone 7 has a decreasing execution time for each run; the same pattern applies to all of the series, but it is not as pronounced as for iOS - iPhone 7.

Finding Primes

Figure 4.23: Comparison between iOS and Xamarin for finding primes

Finding Primes                     Execution Time    σ
iOS - iPhone 7                     4.07              ±0.31
iOS - iPhone 7 with instruments    2.54              ±1.45
Xamarin - iPhone 7                 4.27              ±0.35
iOS - iPhone 6                     6.32              ±1.02
iOS - iPhone 6 with instruments    3.77              ±0.99
Xamarin - iPhone 6                 5.68              ±0.43

Figure 4.24: Result table for finding primes on iOS

Execution times for Finding primes on a native iOS application, on both iPhone 6 and iPhone 7, and the equivalent Xamarin application are seen in figure 4.24. As seen in figure 4.23, when the standard deviations are taken into account there is no significant difference between running with or without Instruments on iOS. There is also no significant difference in execution time between iOS and Xamarin on each individual iPhone.

Figure 4.25: Comparison between iOS and Xamarin for running Finding primes, showing the distribution of execution time in ms for each run

In each execution of Finding primes the same algorithm is run 10 times. Figure 4.25 displays the average execution time in ms for each individual run when Finding primes is executed 20 times. As shown in the figure, iOS - iPhone 7 has a decreasing execution time for each run; the same pattern applies to all of the series, but it is not as pronounced as for iOS - iPhone 7. Nonetheless, the execution time for Xamarin - iPhone 6 increases quite significantly at the tenth run.

Http Requests

Figure 4.26: Comparison between iOS and Xamarin for Http requests

Images    iOS-iPhone 7        iOS-I-iPhone 7      X-iPhone 7          iOS-iPhone 6        iOS-I-iPhone 6       X-iPhone 6
1         255.47 ±360.73      458.21 ±406.11      329.95 ±181.55      224.43 ±482.75      292.54 ±85.04        455.05 ±141.92
10        92.26 ±84.06        224.97 ±150.94      288.10 ±111.38      113.34 ±48.09       1041.10 ±2037.16     331.60 ±503.37
50        273.76 ±292.31      375.13 ±54.58       870.05 ±219.36      3035.24 ±3753.47    1104.00 ±164.42      1001.30 ±745.18
100       487.05 ±605.31      781.75 ±281.67      1472.05 ±497.56     576.37 ±127.22      2368.01 ±1313.68     1677.80 ±753.75
200       1007.81 ±662.68     1663.21 ±550.62     2735.85 ±1040.42    1244.02 ±465.56     5001.89 ±1559.05     3365.80 ±1632.25
400       1803.62 ±922.97     2975.79 ±683.59     5320.20 ±2162.07    2879.54 ±849.35     9580.74 ±3117.48     6073.25 ±1796.96
500       2358.74 ±1419.67    3765.29 ±1153.23    6333.20 ±1769.60    3092.91 ±1253.01    12849.52 ±4942.23    7652.40 ±2972.59

Figure 4.27: Result table for http requests on iOS

Execution times when executing Http requests are displayed in figure 4.26. As seen in figure 4.27, comparing iOS - iPhone 7 and Xamarin - iPhone 7 there is a significant difference in execution time from 10 image requests onward, and the same applies to the comparison on iPhone 6. The slowest execution time for 500 images on iOS - iPhone 7 is ≈ 3778 ms and the fastest for Xamarin - iPhone 7 is ≈ 4564 ms, which means that iOS - iPhone 7 is almost 1 s faster than Xamarin. Therefore iOS is significantly faster than Xamarin for Http requests; the same applies on iPhone 6.

As seen in figure 4.26, running Instruments on iPhone 6 makes the test unstable and increases the execution time significantly at 100 images. At 50 images iOS - iPhone 6 shows a large anomaly compared to the other tests, where the standard deviation shown in figure 4.27 is greater than the average execution time. This is due to a flaw in execution 3, where the execution time for 50 images is ≈ 72 times greater than the average of the other 19 executions, see appendix B.

Total Running Time

Figure 4.28: Comparison between iOS and Xamarin and the total running time

Total running time                 Execution Time    σ
iOS - iPhone 7                     8384.33           ±1632.01
iOS - iPhone 7 with instruments    12343.65          ±1946.99
Xamarin - iPhone 7                 22765.78          ±5407.72
iOS - iPhone 6                     19135.92          ±4088.96
iOS - iPhone 6 with instruments    42194.22          ±12398.38
Xamarin - iPhone 6                 38152.55          ±5708.96

Figure 4.29: Result table for total running time on iOS

Combined execution times for all tests are displayed in figures 4.28 and 4.29. When taking the standard deviations (σ in figure 4.29) into account, there is a significant difference between native iOS and Xamarin on both iPhone 6 and iPhone 7.

Size of application

Figure 4.30: Comparison between iOS and Xamarin and the application size

Platform                   Size (MB)    % increase
iOS                        26.5         1%
iOS - Blank project        26.2
Xamarin                    51.2         32%
Xamarin - Blank project    38.9

Figure 4.31: Result table for application size on iOS

The difference in application size is shown in figure 4.30. The native iOS application is significantly smaller than the Xamarin application, by almost 25 MB. The relative increase in application size is also significantly greater for Xamarin, 32% compared to 1% for iOS, as seen in figure 4.31.

Lines of code

Figure 4.32: Comparing iOS and Xamarin with regard to lines of code

Platform                   Code                          XML/XAML                      Total
iOS                        328                           256                           584
iOS - Blank project        27                            64                            91
Xamarin                    436 (Platform specific: 0)    167 (Platform specific: 0)    603
Xamarin - Blank project    32 (Platform specific: 0)     68 (Platform specific: 0)     100

Figure 4.33: Result table for code size on iOS

The difference in lines of code when comparing a newly generated project (Blank project) to the finished application is shown in figure 4.32. Xamarin has more lines of code and XML/XAML from the start; when finished, the Xamarin application has more lines of code but fewer lines of XML/XAML, as seen in figure 4.33. The total amount is nearly the same for iOS and Xamarin, both for the blank project and for the finished application. The line count does not include comments, and only files that are edited when developing the application are counted.

Memory usage

Figure 4.34: Used memory (dirty) comparison for iOS and Xamarin

Figure 4.35: Used memory (residential) comparison for iOS and Xamarin

            Dirty size                             Residential size
Platform    Min      σ        Max      σ           Min      σ        Max       σ
iOS         18.21    ±1.81    53.63    ±0.61       30.00    ±1.46    138.46    ±0.82
Xamarin     47.95    ±0.38    61.72    ±1.84       77.51    ±0.87    155.37    ±1.96

Figure 4.36: Result table for memory usage on iOS CHAPTER 4. RESULT 45

Memory usage during runtime for Xamarin and iOS is displayed in figures 4.34 and 4.35. Dirty size is memory used by the application that needs to be written to secondary storage in order to be used again. Residential size is the total memory used by the application. Measurements are taken with Instruments, which caused the Xamarin application to crash before completing all Http requests; the Max column for Xamarin might therefore not be the total amount of memory the application would use. Nevertheless, as seen in figure 4.36, Xamarin uses approximately 10-30 MB more memory than the native iOS application.

Chapter 5

Discussion

Test data for Android shows that there is only a significant difference between Xamarin and native Android for Bubble sort, Copy lists and the first Http request. This correlates with the results from Marc Armgren, which also show that Xamarin performs worse than native implementations for heavy computations. Xamarin handles the Copy lists algorithm better than native Android: it is both more stable, with a standard deviation of ≈ 0.22 ms compared to ≈ 1.4 ms for native, and faster, which tells us it handles lists better than the native implementation.

Tests on the iOS platform show a similar picture: native iOS is faster during Bubble sort and a few of the Http requests, Xamarin has better performance during Copy lists, and Finding primes shows no significant difference between iOS and Xamarin. The relative difference in these tests is quite large (the Bubble sort execution time for iOS is ≈ 50% faster than for Xamarin), but the actual difference is ≈ 125 ms, which is hardly noticeable to the user. Some tests (Finding primes and Copy lists) show that execution times are shorter while profiling with Instruments; this has no obvious explanation and would require further investigation in future work. It does mean, however, that when profiling the performance of an application with Instruments before release, the profiler could give misleading results on how it actually performs. It should be noted that if this relative performance difference between native and Xamarin scales the same way for more complex algorithms and calculations, the performance difference can become a critical factor. However, it can be debated how important heavy calculations are. We argue that running heavy computations on a local device with limited processing power and battery is a waste of energy and should, if possible, be done on the server side instead.

Native applications show an optimizing pattern in a few tests: iOS for Copy lists (figure 4.22) and Finding primes (figure 4.25), and Android for Copy lists (figure 4.5). The Android behaviour could be due to a JIT optimization done by ART. The iOS behaviour is harder to explain, since it is uncertain whether the iOS platform has a JIT compiler. For the Android tests, the Xamarin run times are stable throughout. The iOS implementation for Xamarin displays the same optimizing pattern as native, but it is less pronounced. This could mean that there is a difference in implementation between iOS and Android. In all tests that showed this optimizing pattern, except 2 out of 30, Xamarin performs better than or equal to the native implementations, even though Xamarin does not optimize as much as native.


For the tests where optimization occurred, the native optimization did not matter, since native still performed worse than or equal to Xamarin. This could very well be because of the nature of the tests and requires further investigation.

One of the biggest differences between the native and CC implementations is the file size of the finished application. However, on modern mobile connections (such as 4G), where the average download speed is ≈ 17.4 Mbps [24], file sizes are not as big a problem as they were a few years ago. The available storage on devices is also increasing, making this less of a problem for developers. Lines of code is often disregarded as a measurement, because many lines of code does not necessarily mean poor quality and vice versa. However, one interesting observation is that all the finished applications contain approximately the same amount of code, even though Xamarin runs on both platforms. This implies that Xamarin produced the same result with half the amount of code, which means there is less code to produce and maintain.

Looking at the developer experience, Xamarin turns out to be the better alternative, with a score of 39 for Xamarin versus 33 for Android and 31 for iOS. Android development stands out among the alternatives since it is the only one that is free of charge and available for all platforms. Performance of the IDE is a disadvantage, however: running on Ubuntu 16.04 on an ASUS UX32L, the experience was almost unbearable. Research was done to improve performance, but nothing was found from official sources and none of the solutions from the community made any noticeable impact. The reason that Xcode scored lower than Android Studio is mostly that it only runs on one platform and that features such as refactoring are missing. When developing with Xamarin on Windows, Visual Studio is used as the IDE (instead of Xamarin Studio); however, this thesis has not tested it and cannot speak for the developer experience in Visual Studio.

Average memory consumption for Android shows that the mean for Xamarin is lower than for Android; however, taking the standard deviation into account there is no significant difference between the two. Considering that the measurements are taken at different times, another application could have left data in memory before the test was started, which could explain the difference in mean memory usage for Android. iOS, however (using Instruments), enables measurement of a single application, and the results show that the native application uses significantly less memory than Xamarin.Forms. However, as the application progresses, memory does not increase as much for Xamarin as for the native implementation. The residential size at the start of the application shows a significant difference between Xamarin and iOS, which could be interpreted as the "overhead" (additional libraries and binaries that Xamarin needs to function). To validate this assumption, deeper knowledge about the Xamarin architecture is required. When the application is finished, the memory usage relative to the starting value is greater for iOS (462%) than for Xamarin (200%), which could mean that Xamarin is more effective during runtime. However, this conclusion cannot be drawn, since the Xamarin application did not finish all Http requests. Although there were a couple of hundred pictures left to download when the application crashed, it is not probable that the increase in memory usage would reach the relative level of iOS (this would mean that Xamarin would use more than ≈ 357.7 MB). Even though the relative increase during runtime is probably greater for iOS, the total residential size is significantly smaller than for Xamarin, see figure 4.36.

One of the main factors when deciding whether or not to use Xamarin could be the existing knowledge of the developers.

However, with C# being a classic object-oriented language (and very similar to Java, which was the language with the most activity in the industry in May 2017 [31]) and the learning curve being very reasonable, this could be overlooked if other advantages are gained. Android and iOS development requires a lot of domain knowledge that can only be gained by actually working with and developing applications, but as seen, that domain knowledge is (to some extent) also required when working with Xamarin.

Another thing to consider is which services are used around the application itself. For example, if web services such as APIs or RESTful services from Azure (Microsoft's cloud service) are being used, that could be an argument for using Xamarin, since it belongs to the same product group. However, this thesis has not examined any of these services closer and cannot speak for the quality or usefulness of these products.

The algorithms chosen for this thesis could have been investigated further, as well as other approaches to measurement. Although the algorithms show differences in CPU, memory usage and network components, they may not be the most relevant for real-world usage in an application. Other measuring tools could also have been investigated, which might have given other or more accurate results. At the Google I/O 2017 conference a new profiler was presented, along with other features such as support for another language, which would have been relevant for this thesis.

Something that should be considered as more smartphones get multi-core processors is how easy it is to use multiple threads and run the application on several cores. Both Swift and C# feel very modern in this regard, with features such as descriptive keywords and thread groups. Java, however, uses classes that implement interfaces to run asynchronously, which means substantially more code.
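To illustrate the last point (this is a generic example, not code from the test application): moving a heavy computation off the UI thread in C# only requires Task.Run together with async/await, whereas the equivalent Java code typically needs a separate class or an anonymous implementation of an interface such as Runnable.

using System.Threading.Tasks;

public static class BackgroundWork
{
    // Runs a CPU-heavy computation on a thread-pool thread and
    // resumes on the calling context when the result is ready.
    public static async Task<long> SumSquaresAsync(int n)
    {
        return await Task.Run(() =>
        {
            long sum = 0;
            for (int i = 1; i <= n; i++)
                sum += (long)i * i;
            return sum;
        });
    }
}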

5.1 Future work

Research that wants to expand this study should complete the application and publish it on the market, to see how that process differs between the platforms and how users experience the application with regard to aspects such as first impression. There are many external tools (such as testing, automatic UI testing and automatic releases) that need to be considered when choosing a framework. It is not recommended to choose a framework from this study alone; further research is needed to see if it fits the needs of the application. In this study the code is 100% shared between the platforms; however, it would be interesting to research how complicated it is to implement platform-specific components, as this could be valuable information. New versions of Android, iOS and Xamarin also mean new features that could improve both performance and developer experience and should be investigated.

Studies that want to investigate the developer experience further should conduct a survey to gather multiple developers' opinions on the different IDEs and APIs. This could be difficult, since developers that are experienced on all the different platforms would be needed.

Chapter 6

Conclusion

The tests in this thesis show that there is a performance difference between Xamarin and native development for both Android and iOS. The performance difference varies between tests, but overall the statistics favour native development. However, we argue that the performance difference is not significant enough to disregard Xamarin as an alternative.

Considering the amount of code written in the native applications, Xamarin produced the same functionality with approximately the same amount of code as a single native application, while covering both platforms. This means that Xamarin required half the amount of written code compared to native development to produce the same result. In larger projects this could be crucial for developing as well as maintaining and supporting the applications.

The personal experience and the results regarding "IDE experience" and "Documentation & Language" show that the development environment is superior in Xamarin; combined with the reduction in written code, these advantages exceed the performance disadvantages. Therefore, with regard to Xamarin, there is an acceptable trade-off between performance and developer efficiency when developing applications with a Cross-Compiler compared to native development.

Bibliography

[1] M. Armgren. Mobile cross-platform development versus native development - a look at xamarin platform. 2015.

[2] C. Barthelson. Paddelappen. 2015.

[3] A. Z. Charkaoui Salma and B. E. Habib. Cross-platform mobile development approaches. pages 188–191, 2014.

[4] W. Commons. File:ART view.png, 2017. https://commons.wikimedia.org/wiki/File:ART_view.png [Accessed: 2017-06-04].

[5] J. J. Dujmovic and H. Nagashima. LSP method and its use for evaluation of Java IDEs. International Journal of Approximate Reasoning, 41(1):3–22, 2006.

[6] A. Gausby. Attention spans. 2015.

[7] S. G. Hartmann Gustavo and D. Asi. Cross-platform mobile development. pages 1–18, 2011.

[8] X. HQ, X. Inc., X. Ltd., X. APS, X. S.R.L., and X. Ltd. We knew there had to be a better way to build mobile apps - Xamarin, 2017. https://www.xamarin.com/about [Accessed: 2017-03-30].

[9] E. Lygnebrandt. Prestande av användargränssnitt i cross-platform-appar. 2015.

[10] S. I. Palieri Manuel and C. Antonio. Comparison of cross-platform mobile development tools. pages 179–186, 2012.

[11] S. Perez. Majority of digital media consumption now takes place in mobile apps, 2017. https://techcrunch.com/2014/08/21/majority-of-digital-media-consumption-now-takes-place-in-mobile-apps/ [Accessed: 2017-04-03].

[12] R. C. Rahul and T. S. Babu. A study on approaches to build cross-platform mobile applications and criteria to select appropriate approach. pages 625–629, 2012.

[13] Andrei Frumusanu. A closer look at Android Runtime (ART) in Android L, 2014. http://www.anandtech.com/show/8231/a-closer-look-at-android-runtime-art-in-android-l/ [Accessed: 2017-05-06].


[14] Apple Inc. About Instruments, 2016. https://developer.apple.com/library/content/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/index.html [Accessed: 2017-05-08].

[15] Dan Bornstein. Dalvik VM internals (Google I/O 2008), 2008. https://www.youtube.com/watch?v=ptjedOZEXPM [Accessed: 2017-05-05].

[16] IDC. Smartphone OS market share, 2016 Q3, 2017. http://www.idc.com/promo/smartphone-market-share/os [Accessed: 2017-04-03].

[17] Mono project. About Mono, 2017. http://www.mono-project.com/docs/about-mono/ [Accessed: 2017-05-05].

[18] Mozilla Developer Network. Using geolocation, 2017. https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation [Accessed: 2017-04-03].

[19] Nat Friedman, 2017. https://blog.xamarin.com/xamarin-for-all/ [Accessed: 2017-03-30].

[20] NuGet. NuGet roles and responsibilities, 2017. https://docs.microsoft.com/en-us/nuget/policies/governance [Accessed: 2017-05-05].

[21] Open Handset Alliance. ABI management, 2017. https://developer.android.com/ndk/guides/abis.html [Accessed: 2017-05-06].

[22] Open Handset Alliance. The Android source code, 2017. https://source.android.com/source/ [Accessed: 2017-04-03].

[23] Open Handset Alliance. Dashboards, 2017. https://developer.android.com/about/dashboards/index.html [Accessed: 2017-06-04].

[24] Open Signal. The state of LTE (November 2016), 2016. https://opensignal.com/reports/2016/11/state-of-lte [Accessed: 2017-05-11].

[25] Open Handset Alliance, 2017. http://www.openhandsetalliance.com/oha_faq.html [Accessed: 2017-04-03].

[26] OpenSignal, 2017. https://opensignal.com/reports/2015/08/android-fragmentation/ [Accessed: 2017-04-03].

[27] Qualcomm Technologies. Trepn power profiler, 2017. https://developer.qualcomm.com/software/trepn-power-profiler [Accessed: 2017-05-08].

[28] Qualcomm Technologies. Trepn power profiler - FAQ, 2017. https://developer.qualcomm.com/software/trepn-power-profiler/faq [Accessed: 2017-05-08].

[29] Smart Insights. Mobile marketing statistics, 2017. http://www.smartinsights.com/mobile-marketing/mobile-marketing-analytics/mobile-marketing-statistics/ [Accessed: 2017-04-03].

[30] Stacy Wegner et al. Apple iPhone 7 teardown, 2017. http://www.techinsights.com/about-techinsights/overview/blog/apple-iphone-7-teardown/ [Accessed: 2017-05-06].

[31] TIOBE software AB. TIOBE index for May 2017, 2017. https://www.tiobe.com/tiobe-index/ [Accessed: 2017-05-11].

[32] Xamarin. Xamarin - building cross platform applications, 2017. https://developer.xamarin.com/guides/cross-platform/application_fundamentals/building_cross_platform_applications/part_1_-_understanding_the_xamarin_mobile_platform/ [Accessed: 2017-05-05].

[33] Xamarin. Xamarin FAQ, 2017. https://www.xamarin.com/faq [Accessed: 2017-05-05].

[34] Xamarin. Xamarin white paper: Anatomy of a native mobile app, 2017. http://cdn1.xamarin.com/webimages/assets/Xamarin-White-Paper-Anatomy-of-a-Native-Mobile-App.pdf [Accessed: 2017-05-05].

[35] K. C. Xavier Ducrohet, Tor Norbye. Android Studio - IDE built for Android, 2013. https://android-developers.googleblog.com/2013/05/android-studio-ide-built-for-android.html [Accessed: 2017-05-06].

[36] D. Smith, 2017. https://david-smith.org/iosversionstats/ [Accessed: 2017-04-03].

Appendix A

Test Implementations

A.1 Bubble sort

Purpose

Bubble sort is a sorting algorithm with O(n²) time complexity and is therefore rarely used in production. For long lists the algorithm will probably cause many cache misses and therefore not perform well. The test is used to see if there is any difference in memory handling between native code and the code generated by Xamarin.

Specification

The test is executed 10 times, and the execution time for each execution, as well as the average execution time, is collected. Each execution consists of sorting two lists. The first list is a pre-randomized list with 8000 integers. The second list is the worst case for the algorithm, [n ... 1].

Pseudo code

worst-case = [list with worst-case integers]
pre-random = [list with pre-randomized integers]

timeBeforeWorst = currentTime()
bubbleSort(worst-case)
timeAfterWorst = currentTime()
bubbleSort(pre-random)
timeAfterRandom = currentTime()

// Return these
timeWorstCase = timeAfterWorst - timeBeforeWorst
timeRandom = timeAfterRandom - timeAfterWorst

// Assuming A is zero-indexed
bubbleSort(A: [list with sortable integers]) {
    n = A.length
    swapped = true
    while (swapped):
        swapped = false
        for i = 1 => n-1 (inclusive):
            if A[i-1] > A[i]:
                swap(A[i-1], A[i])
                swapped = true
            end if
        end for
        n = n - 1
    end while
}
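As a concrete illustration of the pseudo code above, a minimal C# version of the sorting part (one possible shared Xamarin implementation, not necessarily identical to the code used in the tests) could look as follows:

using System.Diagnostics;

public static class BubbleSortTest
{
    // Sorts the array in place; the unsorted range shrinks by one
    // element after every pass, exactly as in the pseudo code.
    public static void BubbleSort(int[] a)
    {
        int n = a.Length;
        bool swapped = true;
        while (swapped)
        {
            swapped = false;
            for (int i = 1; i < n; i++)
            {
                if (a[i - 1] > a[i])
                {
                    int tmp = a[i - 1];
                    a[i - 1] = a[i];
                    a[i] = tmp;
                    swapped = true;
                }
            }
            n--;
        }
    }

    // Returns the execution time in milliseconds for sorting one list.
    public static double TimeSort(int[] input)
    {
        var copy = (int[])input.Clone();   // keep the original list unchanged between runs
        var sw = Stopwatch.StartNew();
        BubbleSort(copy);
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }
}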

A.2 Finding primes

Purpose

The purpose of this test is to stress the CPU of the device in order to determine if there is any significant overhead from the Xamarin framework when doing simple calculations.

Specification

The test will run 10 times, finding all primes between 2 and 300000 using the Sieve of Eratosthenes. The execution time for each run, as well as the average over the 10 runs, is collected.

Pseudo code

n = 300000
A = an array of booleans, where the index represents the number,
    from 2 to 300000. Initial value is true.

timeBefore = currentTime()
for i = 2, 3, 4 ... n:
    if A[i]:
        j = i^2
        while j < n:
            A[j] = false
            j = j + i
        end while
    end if
end for

timeAfter = currentTime()

return timeAfter - timeBefore
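A minimal C# sketch of the same sieve (illustrative only; this is not the thesis's actual test code):

using System.Diagnostics;

public static class PrimeTest
{
    // Marks all non-primes below `limit` with the Sieve of Eratosthenes
    // and returns the execution time in milliseconds.
    public static double TimeSieve(int limit = 300000)
    {
        var isPrime = new bool[limit];
        for (int i = 2; i < limit; i++) isPrime[i] = true;

        var sw = Stopwatch.StartNew();
        for (int i = 2; i < limit; i++)
        {
            if (!isPrime[i]) continue;
            // Start at i*i: smaller multiples were already marked by smaller primes.
            for (long j = (long)i * i; j < limit; j += i)
                isPrime[j] = false;
        }
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }
}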

A.3 Copy lists

Purpose

The intention of this test is to stress the cache of the device and see how that is handled natively versus with Xamarin. Copying between evenly spaced lists in memory will generate a lot of cache misses.

Specification

The test will run 10 times in total and return the execution time for each run as well as the average execution time. First, 103 lists with 2087 integers each are generated. List i contains [i * 2087, i * 2087 + 1, i * 2087 + 2, ..., i * 2087 + 2086]. The entries of the list at position i are then summed and the sum is stored in another list at index i.

Pseudo code

lists = [][]
sums = []

for i = 0 to i = 103 (exclusive):
    sums[i] = 0
    for j = 0 to j = 2087 (exclusive):
        lists[i][j] = i * 2087 + j
    end for
end for

timeBefore = currentTime()
for j = 0 to j = 2087 (exclusive):
    for i = 0 to i = 103 (exclusive):
        sums[i] += lists[i][j]
    end for
end for
timeAfter = currentTime()

return timeAfter - timeBefore
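A minimal C# sketch of the pseudo code above (the constants follow the specification; this is an illustration, not the thesis's actual implementation):

using System.Diagnostics;

public static class CopyListsTest
{
    const int Lists = 103;    // number of lists, as given in the specification
    const int Length = 2087;  // integers per list

    // Sums the lists column by column (outer loop over j, inner over i),
    // which produces the strided, cache-unfriendly access pattern the
    // test is designed for. Returns the execution time in milliseconds.
    public static double TimeCopyLists()
    {
        var lists = new int[Lists][];
        var sums = new long[Lists];
        for (int i = 0; i < Lists; i++)
        {
            lists[i] = new int[Length];
            for (int j = 0; j < Length; j++)
                lists[i][j] = i * Length + j;
        }

        var sw = Stopwatch.StartNew();
        for (int j = 0; j < Length; j++)
            for (int i = 0; i < Lists; i++)
                sums[i] += lists[i][j];
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }
}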

A.4 Http request

Purpose

A common task for an application is to fetch data from a server and present it to the user. We designed the test to measure the time from the initial request until the interface has been updated and presented to the user. This will show if there is any significant overhead from the framework when performing this action.

Specification

The test will run seven times in total and return the execution time for each execution. The seven tests fetch 1, 10, 50, 100, 200, 400 and 500 objects from the server individually, in ascending order, and then display the fetched result in a list view. The application should use a native list view (Table View for iOS and List View for Android) with the image displayed next to the title, see figure A.1.

Figure A.1: Android native List View and iOS native List View

This should be run 10 times and the average time for each sub-test calculated and returned.

Pseudo code

runs = [1, 10, 50, 100, 200, 400, 500]
times = []
for (i = 0; i < 7; i++) {
    timeBefore = currentTime()
    jsonEntries = loadJsonFromServer("server:port/json?limit=" + runs[i])
    foreach (entry in jsonEntries) {
        image = loadImageFromServer(entry['imgUrl'])
        title = entry['title']
        // This can also be done as a 'bulk' operation,
        // whatever is appropriate for the platform
        addToTableView(image, title)
    }
    // Everything should be loaded and the user interface
    // updated before the time is measured again
    timeAfter = currentTime()
    times[i] = timeAfter - timeBefore // in milliseconds
}
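A hedged C# sketch of the client side of this test, assuming the JSON format described in section A.5.1 and the Json.NET (Newtonsoft.Json) NuGet package; the server address is the placeholder used elsewhere in this appendix, and in the real test the downloaded image and title would be added to the platform's native list view:

using System.Collections.Generic;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class Entry
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string ImgUrl { get; set; }
}

public static class HttpRequestTest
{
    static readonly HttpClient Client = new HttpClient();

    // Fetches `count` JSON entries and their images from the test server
    // and returns the elapsed time in milliseconds.
    public static async Task<double> TimeRequestAsync(int count)
    {
        var sw = Stopwatch.StartNew();

        string json = await Client.GetStringAsync(
            "http://192.168.1.191:3000/json?limit=" + count);
        var entries = JsonConvert.DeserializeObject<List<Entry>>(json);

        foreach (var entry in entries)
        {
            byte[] image = await Client.GetByteArrayAsync(entry.ImgUrl);
            // Here the image and title would be added to the list view
            // (Table View on iOS, List View on Android).
        }

        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }
}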

A.5 Server implementation

In order to test the http-request portion of the application we use our own implementation of a simple web server. Using our own implementation makes it possible to minimize external factors that could otherwise affect the results, and ensures that no latency in the tests comes from errors on the server side. The server is implemented in NodeJS and delivers a JSON array of objects containing text information and links to images for the application to parse and fetch. It is configured to never assume that the requesting endpoint has a cached version of the data and will always return a JSON response no matter what headers are set.

A.5.1 JSON response

A request to http://serverip:serverport/json?limit=2 will return a JSON array with 2 entries on the following form:

[
  {
    "id": 1,
    "title": "Enterprise-wide incremental service-desk",
    "imgUrl": "http://192.168.1.191:3000/images/1.png"
  },
  {
    "id": 2,
    "title": "Switchable upward-trending strategy",
    "imgUrl": "http://192.168.1.191:3000/images/2.png"
  }
]

The limit parameter can be any integer 0 ≤ limit ≤ 500; if it is not supplied, the server returns the full list (500 entries). The mocked data is provided by mockaroo.com using two fields (id = Row Number, title = Catch Phrase) and exporting 500 rows in JSON format.

A.5.2 Images

The server provides 500 images of size 128px x 128px, accessible with a request to http://serverip:serverport/images/i.png where 1 ≤ i ≤ 500. The images were provided by http://sixrevisions.com/freebies/icons/free-icons-1000/

Appendix B

Http results data for iPhone 6 without Instruments

[Table: raw per-execution times in ms for the Http request test on iPhone 6 without Instruments, for 1, 10, 50, 100, 200, 400 and 500 images.]