DESIGN AND IMPLEMENTATION OF AN ANDROID COMPATIBLE DISTRIBUTED MULTI-ACCESS EDGE

Alexei Belikov Mogilny

Master’s Thesis presented to the Telecommunications Engineering School
Master’s Degree in Telecommunications Engineering

Supervisor: Felipe Gil Castiñeira

2020

Abstract

While mobile device performance continues to increase, mobile devices are still behind servers or desktop computers in terms of computational capacity, storage or memory. Broad availability of high-bandwidth wireless communications and computational resources allows mobile devices to leverage the features of cloud computing by using remote systems to perform the most complex operations. Nevertheless, there are different use cases where the time to provide results is critical; the multi-access edge computing paradigm addresses this by moving the cloud resources closer to the user (for example, by installing a small data center near the cellular base station). This master thesis covers the design and implementation of a system capable of running Android applications in a cloud environment, and more particularly at the edge of the network, in order to reduce the latency between user interaction and results, and provides a brief analysis of its performance.

Key words: Google Android, Virtualization, 5G, Multi-access edge computing


Contents

Abstract

List of figures

1 Introduction
   1.1 Motivation
   1.2 Objectives

2 State of the art
   2.1 Android internal architecture
      2.1.1 Android graphics
      2.1.2 Input devices
   2.2 Running Android on a PC
      2.2.1 Anbox
      2.2.2 Anbox Cloud
   2.3 Related work

3 Android Edge Platform
   3.1 Architecture overview
   3.2 Remote rendering
   3.3 Event injection

4 Results
   4.1 Prototype client
   4.2 Benchmarks
   4.3 Latency

5 Conclusions and future work
   5.1 Future work

Bibliography


List of Figures

2.1 Render process diagram
2.2 SurfaceFlinger managing layers for a display
2.3 Android Input stack components
3.1 Android Edge Platform architecture
3.2 Remote rendering architecture
3.3 Event forwarding
4.1 Prototype application
4.2 RealPi benchmark results
4.3 Geekbench 5 results
4.4 Latency measurements

1 Introduction

1.1 Motivation

Wireless communications have been increasing in capacity and reliability at a steady pace for years and it is expected that the arrival of 5G will be the next step forward. One of the core ideas of this next generation of communications is the multi-access edge computing (MEC) paradigm: to bring the compute, storage and networking resources closer to the users [1].

5G networks offer flexibility for accommodating different applications with varied quality- of-service profiles. This flexibility stems from a new architecture based on the virtualization of most of the components of the network. This allows increased efficiency in distributing the resources to optimise the throughput taking into account the requirements of the data [2].

Furthermore, cloud computing has had great success because it offers on-demand resources with reliability and cost savings, for example in terms of maintenance. Many applications developed nowadays leverage the cloud capacity to perform tasks that are not practical to execute locally (because of the lack of resources in terms of computing power, memory, storage, etc.). However, there are few large cloud providers and their data centres are located in a few select locations. This is a limiting factor for certain low latency applications. Nevertheless, 5G networks plan to include distributed clouds in locations near the users. This is the edge computing paradigm, which aims to provide most of the advantages of the cloud together with the benefits of physical proximity of the hardware to the users. Apart from that, it has other advantages such as a reduction of the back-haul transmissions, as the data has to travel across fewer networks [3].

New use cases for high mobility devices such as smartwatches, glasses, and phones are gaining traction. Among those it is worth mentioning AR/VR applications used in industrial scenarios and in gaming [4]. Such applications have strong computing

requirements to generate images in real time. Even though mobile devices have increased in performance in the last years, they are still behind more stationary equipment such as desktops and servers, where power limits are less of a concern [5].

These new uses call for a shift in the usual client-server style of programming previously used to offload computationally heavy tasks such as speech analysis or image recognition. A new concept of offloading is being developed where even the user interface is executed in the remote server, and the local device is just used for displaying the rendered user interface and to capture interaction. This approach requires low latency, so that users don’t perceive any lag between their interaction and the results displayed on the screen. By using edge computing locations, it is possible to achieve this objective.

In this thesis, we propose to offload the computing of Android applications by directly executing them in cloud or edge locations, and using the phone just for showing the rendered interface and capturing the user interaction. Most cloud servers use Intel or AMD processors with the x86-64 architecture, thus to achieve this objective we take advantage of the possibility to execute Android in different architectures, ranging from ARM smartphones to the mentioned x86-64 servers.

1.2 Objectives

The main objective of this thesis is the development of an architecture for the virtualization of Android applications in a cloud/edge environment and the remote execution from a simple Android smartphone. This architecture will be validated with a “Proof of Concept” implementation.

This work will lay the foundations for a platform that will allow applications to be executed as usual on a smartphone or a wearable, but transparently migrated to a cloud platform when the requirements increase (for example, when the application requires hardware acceleration, more memory or CPU power).

Thus, in this thesis we develop a system that will enable Android smart devices, such as phones or smart glasses, to allow users to interact with applications that are really running at the edge. To build this solution we used an open source project, Anbox, as the basis to run Android applications in the cloud.

2 State of the art

This project uses functions and ideas from existing software and proposes modifications to explore a new use of those technologies to build an architecture for the remote execution of Android applications. For a better understanding of the solution it is important to establish a common ground and know the limitations and possibilities of these components, as well as to mention alternatives that partly influenced the design of the architecture.

2.1 Android internal architecture

Android is an operating system running a kernel based on Linux that was adapted to run on mobile devices, initially phones and later other hardware such as watches or TVs. This project uses this OS because of its widespread adoption and its open nature, making it possible to implement modifications at the operating system level.

To achieve the goals of this project some key components of Android have been modified. In order to capture the application output a section of the rendering process has been altered. To forward the interaction from the client to the service, some changes were needed in parts of the input stack.

The sections that follow provide a brief overview of how the graphics and input stacks work on an Android device to understand where the capture and injection occurs.

2.1.1 Android graphics

The way that Android handles the user interface has evolved throughout the different versions of the operating system. These changes introduced performance enhancements in forms of render flow optimisations, hardware acceleration from the GPU to draw using low level APIs or the composition of the elements coming from different sources using

hardware acceleration [6]. This section describes how UI elements are generated on a modern Android version.

The process of generating output on a display starts at the Choreographer, which schedules work on periodic VSync events. At this point, in the UI (user interface) thread, the input events get processed. This might trigger some changes at the UI such as starting animations, changing dimensions, etc. [7].

The components that have changed in the content hierarchy are marked as “dirty” in a process called invalidation. This goes through all the containers and other elements that are implicated (for example, the entire list when one of its items is pressed). This later triggers the traversal code that measures, creates the layout and calculates how it should be drawn [8].

The draw method generates a DisplayList for each element with the operations needed to draw that object. A DisplayList is a container that saves a series of graphics-related operations to be reproduced at a later time. For nested elements, it combines the information stored in the DisplayLists lower in the hierarchy, obtained through the getDisplayList() method.

From here the work is taken over by the RenderThread, which is implemented as native code. Apart from the DisplayList, the render thread also receives the new textures and the changed area bounds.

Optimisations such as reordering and batching of identical operations also occur at this stage. Examples of such optimisations are bundling draws of the same item into one command, and alpha calculations to avoid redrawing items that are invisible or outside of the changed area (damage area).

Figure 2.1: Render process diagram

At this point the operations are translated to graphics library commands (OpenGL) and are executed on the buffer. At the end, swap buffer is called, which indicates that

the drawing has been completed. This swap buffer request arrives to the SurfaceFlinger. The sequence of the operations is summarised in figure 2.1.

SurfaceFlinger is a component of Android that handles compositing the different layers to the resulting “picture” to show on each display.

In the end, every component of the UI resides on a Surface, which is considered a Layer from the SurfaceFlinger point of view. The content of each layer is backed by a graphic buffer. SurfaceFlinger acquires the buffer to get access to the data and releases it after the operation is completed. Composition happens in the Compositor class, which can be backed by the Hardware Composer (HWC) or by OpenGL. This process is managed by the SurfaceFlinger through a common interface, as the hardware composer implementation is device-specific. This is the last step, as the compositor communicates with the display to show the resulting image. A summary of this operation is shown in figure 2.2.

Figure 2.2: SurfaceFlinger managing layers for a display

In this project we use Anbox, an environment to run Android applications on a PC, presented in section 2.2. Anbox does not have a HWC and all the operations are performed using SwiftShader, which takes the role of the GPU. This project captures the image data once the individual layers have been combined at the SurfaceFlinger, before they are sent to the display, which appears as a window on the computer.

2.1.2 Input devices

A user interacts with an Android application through what is called a HID (Human Interface Device), of which the most common is the touch screen. This section is an overview of what happens every time a user taps on a screen or presses a key on a (physical) keyboard [9].

The action that has been generated goes through three different stages: Kernel stage, Android System and Android App [10].


Kernel

In most peripheral devices (displays, keyboards...) a user action results in different electrical signals that are read by a micro-controller, which ultimately raises an interrupt at the CPU.

Before that, the device driver has been loaded and it has registered a function which is called by the kernel to service the interrupt. This routine has device specific commands that translate the received data to the Linux input model.

The driver registers the device as an input device, which creates a device file located at /dev/input/eventXX. Each input device is defined by a struct declared in linux/input.h. The device driver reports each event to the kernel input stack with an event struct. This is an abstraction layer that allows peripherals from different manufacturers to operate in a similar way. The variables contained in those structs can be seen in the source file linux/input.h.
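To make this abstraction concrete, the following is a minimal C++ sketch that reads raw events from one of these device files; the device index is just an example and error handling is reduced to the essentials.

#include <fcntl.h>
#include <linux/input.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Example device node; the actual index depends on the attached hardware.
    int fd = open("/dev/input/event2", O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }
    struct input_event ev;
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        // type (e.g. EV_KEY, EV_ABS), code (e.g. ABS_X) and value follow the
        // generic Linux input model described above.
        std::printf("type=%u code=%u value=%d\n",
                    (unsigned) ev.type, (unsigned) ev.code, ev.value);
    }
    close(fd);
    return 0;
}

Every event, regardless of the manufacturer of the peripheral, arrives as the same fixed-size structure, which is what allows the upper layers to remain device-agnostic.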

Android System

The component in charge of reading the events from the Linux kernel is the InputFlinger. It has three main components or services. First the EventHub interfaces directly with the file exposed by the Linux kernel at /dev/input/eventXX. It converts the Linux events to the equivalent structure that is used in Android. It is also in charge of removing and adding devices as they appear in the directory.

Having the raw events in the Android format, the InputReader builds complex events from individual actions. For example, a swipe on a screen is a complex event composed of press, drag and release events that only make sense together.

After that the InputDispatcher forwards the event to the correct user application.

User apps

The app receives the event from the InputDispatcher in an InputChannel. From there it is passed through a hierarchy of callbacks that queue it for the input stage. Later, it will be dispatched to the correct view and the corresponding on-event callback will be executed. A summary of this stack can be found in figure 2.3.

Figure 2.3: Android Input stack components

2.2 Running Android on a PC

The possibility of running Android applications on a desktop computer isn’t new. The most common way is by using an emulator. The official SDK provides one and there are other alternatives such as the Android emulator from Genymotion1. It is also possible to install the Android operating system on a computer or a virtual machine thanks to the android-x86 project. A bare-metal approach can have compatibility problems but the direct access to hardware can lead to better performance.

Other projects offer a middle ground between a completely isolated emulator-based Android and running the OS on bare-metal hardware, providing a reasonable compromise between isolation, security and performance. Anbox is analysed with special attention because it’s the base of this project.

2.2.1 Anbox

Anbox is an open source project that promises the ability to run Android applications on any GNU/Linux operating system seamlessly.

According to the documentation [11], it accomplishes this by putting the Android operating system into a container, and abstracting hardware access to integrate core system services with a GNU/Linux system.

For the integration with the operating system Anbox uses the SDL (Simple DirectMedia Layer)2 library. This library abstracts access to audio, peripherals and graphics.

Anbox ships with a custom Android image based on AOSP 7.1. This version has been

1 Genymotion desktop - https://www.genymotion.com
2 Simple DirectMedia Layer - https://www.libsdl.org/

modified for a better integration with the operating system. Changes to this image include the removal of threading priority operations and energy management operations such as suspend/wake-up calls, and the addition of a service to synchronise the clipboard and the installed apps.

The graphics system was also modified. For the desktop integration all apps run in free-form mode, without the window shadows that would appear by default. Anbox uses the same graphical acceleration stack as the AOSP emulator: android-emugl [12].

Android-emugl is an emulation layer that provides access to the host system libraries through a series of translation libraries and a communication protocol that uses high performance QEMU pipes to minimise the performance impact. The host system library is SwiftShader, a high-performance CPU-based implementation of several graphics APIs whose goal is to provide hardware independence for advanced 3D graphics3. This allows keeping every element in a container without worrying about proprietary graphics libraries.

The Anbox software runs in two instances:

• One of them runs as a privileged service and is in charge of managing the Android container, configuring the LXC environment, mount points and network.

• The other runs with user level privileges and interacts both with the container and with the container manager. This service is called the Session Manager (SM). The SM spawns the process that generates the application window and the surface (or surfaces for multiple apps simultaneously) onto which the apps are rendered. Internally each application is mapped onto one layer in a custom implementation of the HWC.

2.2.2 Anbox Cloud

Anbox Cloud is a platform offered by Canonical to offload mobile workloads to the cloud. It was announced after work had started on this project.

This Cloud PaaS is marketed towards enterprise deployments with a great emphasis on scalability. According to their landing page4 it allows game streaming, virtual devices, enterprise mobile applications and application testing.

3 SwiftShader - https://swiftshader.googlesource.com/SwiftShader
4 Anbox cloud - https://anbox-cloud.io/

8 2.3. Related work

2.3 Related work

Perhaps the most relevant project related to the proposed system is SVMP (Secure Virtual Mobile Platform), developed under the MITRE IR&D program. It is described as a system for running virtual smartphones in the cloud. Behind the scenes it uses technologies such as KVM, OpenStack and WebRTC. In contrast to the per-application approach of this project, this system virtualises the entire smartphone, similar in concept to an on-demand remote desktop server. The last release (SVMP 2.0.0) was in 2014.

This idea has also been explored in several studies such as MobiPlay [13] or FUSION [14], each accomplishing the objective in its own way. Most of the advantages of the full virtualization approaches revolve around BYOD (Bring Your Own Device) use cases and security [15].

There are also general studies, not attached to a specific implementation, that attempt to model a convergence between the cloud and the mobile phone where the mobile device acts as a thin client to display graphically rich services. A notable example is a 2011 publication from Microsoft [16], which presents the idea of a cloud-assisted mobile browser that renders the web pages on a remote server and sends the data via RTP to be displayed on the mobile device, as well as the Cloud Phone concept, which takes the previous idea and extends it to mobile applications and services.


3 Android Edge Platform

This master thesis designs and implements the Android Edge Platform (AEP), an architecture that allows applications to be executed in a remote cloud location (an edge location, in order to operate with low latency) while their output is displayed and their input captured on an Android device. The platform was designed with scalability in mind. It can support multiple applications and users simultaneously and the load can be distributed among different servers. In the future this approach could allow seamless migration of running applications between servers to improve the latency.

3.1 Architecture overview

Figure 3.1: Android Edge Platform architecture.

This application virtualization system requires an app installed on the client smartphone. This Remote Anbox Client (RAC) acts as a launcher and client for remote apps. It

displays a list of available applications and when one is selected it displays the remote interface of the virtual application and takes care of forwarding inputs from the sensors and HID devices to the remote system.

The service that handles the communications with the RAC is the Remote Anbox Service (RAS). This service handles listing and starting applications as well as the forwarding of input events. When an application starts, a StreamingEncoder is spawned with the right parameters to capture and send the graphics to the RAC.

3.2 Remote rendering

When a user selects an app on the RAC, that app is started on the server by the RAS. Then, the user receives the output of the application on his device. A summary of what is happening behind the scenes can be found on figure 3.2.

Figure 3.2: Remote rendering architecture.

The application starts and runs on the server as a normal Android app. The only difference is in the rendering process. Instead of rendering on the device using its GPU, the graphics commands are sent to a render process that resides on the Anbox service on the host side.

The process and the middle layer are described in the Anbox section of the State of the art (section 2.2).

Anbox implements its own compositor that organises the stream in layers and composes them obtaining a list of renderables that can be drawn onto a surface. For the integration with the host operating system, Anbox uses the SDL library to generate a window and obtain the surface where it will render the received data.

Each application that is running will get a separate SDL Window and Surface. At the end of the rendering function, after all the renderables have been drawn and before the

call to eglSwapBuffers, it is safe to assume that the buffer that is currently bound to the current GL context will contain the next frame of the app displayed on this surface of this window. This assumption is safe because the SDL library uses double buffering, where one surface uses two buffers: one that is currently displayed while the other is being rendered. This is done so that the user is always presented with a complete picture.
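As an illustration of this capture point, the fragment below sketches in C++ how the bound buffer could be read back with glReadPixels right before eglSwapBuffers. The function name, the surface dimensions and the exact hook inside the Anbox renderer are assumptions made for illustration, not the literal implementation.

#include <GLES2/gl2.h>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical hook called at the end of the render loop, just before
// eglSwapBuffers(), while the window's GL context is still current.
std::vector<std::uint8_t> capture_frame(int width, int height) {
    std::vector<std::uint8_t> rgba(static_cast<std::size_t>(width) * height * 4);
    // Reads the currently bound framebuffer (the back buffer of the window).
    // GLES returns the image bottom-up, which is why the StreamingEncoder
    // later has to flip it vertically.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    return rgba;  // handed to the StreamingEncoder as a new frame
}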

This buffer is passed to the StreamingEncoder as a new frame. There, a series of operations is performed to adapt the format of the data to what is expected by the encoder. Those transformations include:

• Flip the image vertically

• Swap red and blue channels

• Convert from RGB to YUV

For these operations the swscale library is used.
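A condensed C++ sketch of these transformations with swscale is shown below. The pixel formats (BGRA input, YUV420P output) and the per-frame context creation are assumptions made for illustration; the vertical flip is obtained by feeding the source rows bottom-up with a negative stride, and the red/blue swap falls out of the declared source format.

#include <cstdint>

extern "C" {
#include <libswscale/swscale.h>
}

// Convert one captured frame (4 bytes per pixel, bottom-up as read from GL)
// into YUV 4:2:0 for the encoder. Creating the context per frame keeps the
// sketch short; a real encoder would cache it.
void convert_frame(const std::uint8_t* src_pixels, int width, int height,
                   std::uint8_t* const dst[3], const int dst_stride[3]) {
    SwsContext* ctx = sws_getContext(width, height, AV_PIX_FMT_BGRA,   // assumed input format
                                     width, height, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    // Feed the rows bottom-up with a negative stride so the output ends up
    // flipped vertically, compensating for the upside-down OpenGL image.
    const std::uint8_t* src[1] = { src_pixels + (height - 1) * width * 4 };
    const int src_stride[1] = { -width * 4 };
    sws_scale(ctx, src, src_stride, 0, height, dst, dst_stride);
    sws_freeContext(ctx);
}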

Now the buffer is ready to be fed to the x264 encoder. The encoder is configured with the “ultrafast” preset tuned for “zerolatency” mode to minimise the processing delay, as recommended in the H.264 Video Encoding Guide1. The names of the presets (superfast, veryfast, faster, fast, medium, slow...) reflect the time that each frame is processed. When encoding a video it is a way of specifying the trade-off between quality and bit rate, because the longer a frame is processed the better the compression. The tuning setting configures additional parameters. In this case “zerolatency” disables B-frames (frames that depend on the previous and the next frame to be encoded) so that it is not necessary to wait for the next frame, thus reducing latency, enables slice-based threading so that several threads can process sections of the same frame, and applies other settings that contribute to reducing the processing time.
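In terms of the x264 API, this configuration amounts to a few lines; the following C++ sketch illustrates it, with the resolution, frame rate and profile as placeholder values rather than the exact settings of the StreamingEncoder.

extern "C" {
#include <stdint.h>
#include <x264.h>
}

// Minimal encoder setup with the latency-oriented settings described above.
x264_t* open_encoder(int width, int height) {
    x264_param_t param;
    // "ultrafast" minimises the per-frame processing time, "zerolatency"
    // disables B-frames and enables slice-based threading, among other things.
    if (x264_param_default_preset(&param, "ultrafast", "zerolatency") < 0)
        return nullptr;
    param.i_width  = width;
    param.i_height = height;
    param.i_fps_num = 60;            // placeholder frame rate
    param.i_fps_den = 1;
    param.i_csp = X264_CSP_I420;     // matches the YUV 4:2:0 output of swscale
    if (x264_param_apply_profile(&param, "baseline") < 0)  // placeholder profile
        return nullptr;
    return x264_encoder_open(&param);
}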

The data from the x264 encoder is sent in an RTP stream using the FFmpeg library.

On the RAC side, after sending the request to the RAS, the client initialises a compatible GStreamer pipeline and waits for the start of the stream coming from the server.
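As an illustration (not necessarily the exact pipeline used by the prototype), a receiving pipeline for an RTP/H.264 stream could be built with gst_parse_launch as in the C++ sketch below; the UDP port, the caps and the decoder element are assumptions.

#include <gst/gst.h>

int main(int argc, char** argv) {
    gst_init(&argc, &argv);
    // Hypothetical receive pipeline: depayload the RTP stream, decode H.264
    // and hand the frames to a video sink (on the RAC this would be the
    // surface shown by the client activity).
    GError* error = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "udpsrc port=5000 caps=\"application/x-rtp,media=video,"
        "encoding-name=H264,payload=96\" ! rtph264depay ! h264parse ! "
        "avdec_h264 ! videoconvert ! autovideosink sync=false",
        &error);
    if (pipeline == nullptr) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // Run until interrupted; a real client would integrate this with its UI loop.
    g_main_loop_run(g_main_loop_new(nullptr, FALSE));
    return 0;
}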

3.3 Event injection

The interaction with a mobile device is mostly unidirectional and reactive to user input (a pressed volume key or a tap on a touchscreen).

Exceptions to this are the events generated by the sensors that measure acceleration, geomagnetic field strength or angular change. Even though these events behave similarly at the kernel level, Android treats them differently. Another exception to this rule are the events generated at the application, which mostly have to do with haptic feedback or vibration. For now we will focus on human interaction events originated at the phone, leaving the kinaesthetic interactions for future versions.

1 H.264 Video Encoding Guide - https://trac.ffmpeg.org/wiki/Encode/H.264

Figure 3.3: Event forwarding.

In this architecture the events originate at the RAC and are sent to the RAS. This is a purely reactive behaviour where requests originate at one end, so an RPC-style approach suits the communication between the components well. The gRPC framework was chosen because of its good performance and easy inter-operation between C++ and Android-Java.

The RAS exposes a gRPC endpoint which expects to receive calls with messages serialized using protocol buffers. When an event such as a screen tap is detected on the client application, the event is sent to the corresponding endpoint in the RAS.
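To illustrate the shape of such an endpoint, the following C++ sketch shows a gRPC service that accepts a touch event; the service name, the message fields and the generated header are hypothetical, since the actual protocol definition of the RAS is not reproduced here.

#include <grpcpp/grpcpp.h>
// Hypothetical header generated by protoc from a .proto file that defines a
// TouchEvent {x, y, action} message, an empty Ack message and an InputService.
#include "input_service.grpc.pb.h"

// Implemented elsewhere, e.g. with uinput (see the sketch below).
void inject_touch(int x, int y, int action);

class InputServiceImpl final : public aep::InputService::Service {
    grpc::Status SendTouch(grpc::ServerContext* /*ctx*/,
                           const aep::TouchEvent* event,
                           aep::Ack* /*reply*/) override {
        // Translate the deserialized event into native input events that the
        // virtual HID device will emit.
        inject_touch(event->x(), event->y(), event->action());
        return grpc::Status::OK;
    }
};

int main() {
    InputServiceImpl service;
    grpc::ServerBuilder builder;
    builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
    builder.RegisterService(&service);
    builder.BuildAndStart()->Wait();
}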

Every event is translated to a series of native events generated by a device mimicking an actual HID device. At this stage the events are picked up by the EventHub on the Android side and follow the standard Android stack until they reach the callbacks on their views.
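One standard Linux mechanism for mimicking a HID device is uinput, sketched below in C++: a virtual touch device is created and a single tap is emitted, which then travels through /dev/input/eventXX and the EventHub like any real touch. This is an illustrative sketch of the idea, not necessarily the exact mechanism used inside the Anbox container; axis range setup (UI_ABS_SETUP) and error handling are omitted.

#include <fcntl.h>
#include <linux/uinput.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstring>

static void emit(int fd, int type, int code, int value) {
    struct input_event ev {};
    ev.type = type; ev.code = code; ev.value = value;
    write(fd, &ev, sizeof(ev));
}

int main() {
    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
    ioctl(fd, UI_SET_EVBIT, EV_KEY);
    ioctl(fd, UI_SET_KEYBIT, BTN_TOUCH);
    ioctl(fd, UI_SET_EVBIT, EV_ABS);
    ioctl(fd, UI_SET_ABSBIT, ABS_X);
    ioctl(fd, UI_SET_ABSBIT, ABS_Y);

    struct uinput_setup setup {};
    std::strcpy(setup.name, "virtual-touch");   // hypothetical device name
    setup.id.bustype = BUS_VIRTUAL;
    ioctl(fd, UI_DEV_SETUP, &setup);
    ioctl(fd, UI_DEV_CREATE);
    usleep(100000);  // give userspace time to pick up the new /dev/input node

    // A single tap at an example coordinate: position, touch down, touch up.
    emit(fd, EV_ABS, ABS_X, 300); emit(fd, EV_ABS, ABS_Y, 500);
    emit(fd, EV_KEY, BTN_TOUCH, 1); emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, BTN_TOUCH, 0); emit(fd, EV_SYN, SYN_REPORT, 0);

    ioctl(fd, UI_DEV_DESTROY);
    close(fd);
    return 0;
}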

4 Results

The Android Edge Platform was deployed on a virtual machine running Ubuntu 18.04 LTS with access to 6 CPU threads and 6 GB of RAM. Two different phones were used as clients: a Nokia 6 (TA-1021) with a Qualcomm MSM8937 SoC (Snapdragon 430) and 3 GB of RAM, representing a budget-oriented device, and an Essential PH-1 equipped with a MSM8998 (Snapdragon 835), representing the higher end of the spectrum. Both devices run the latest official update of the operating system, with the TA-1021 running Android 9 and the PH-1 running Android 10.

4.1 Prototype client

The prototype RAC application allows connecting to a RAS endpoint to list the available apps and to run one of them using the remote rendering system. The main activities can be seen in figure 4.1. This app covers the use case of a proof of concept, but it lacks some comfort features that the end user expects, such as automatic discovery of the remote server (without the need to manually enter the IP address where the service is running) or a better designed UI.

4.2 Benchmarks

One of the expected advantages of this platform is the increase in performance. To validate this claim we ran a benchmark of the system running in remote mode against the same benchmark running directly on the low-end and high-end smartphones. To better illustrate the performance advantage, the low-end device is used as the client.

Figure 4.1: Prototype application.

RealPi Benchmark

The RealPi benchmark is a simple way of measuring single-thread performance that consists in calculating the first million (1,000,000) digits of the number pi. The amount of time that it takes is measured in seconds; the shorter the time, the better. The results of this benchmark are displayed in figure 4.2. In this case it is clear that, without the Android Edge Platform, the higher-end device is more than 6 times faster than the Nokia, whereas when the Nokia phone uses this system the benchmark runs about 1.66 times faster than the other device, resulting in a performance increase of more than 10 times in this benchmark.
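These figures follow directly from the measured times shown in figure 4.2 (TA-1021: 12.36 s, PH-1: 1.96 s, TA-1021 with the AEP: 1.18 s):

\[
\frac{12.36}{1.96} \approx 6.3, \qquad \frac{1.96}{1.18} \approx 1.66, \qquad \frac{12.36}{1.18} \approx 10.5,
\]

that is, the PH-1 is roughly 6.3 times faster than the standalone TA-1021, while the TA-1021 backed by the AEP is roughly 1.66 times faster than the PH-1 and roughly 10.5 times faster than the same phone running the benchmark locally.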

Geekbench 5

Geekbench 5 is a benchmarking application used by multiple smartphone reviewers which measures the CPU performance in various scenarios that simulate real use cases.

Figure 4.2: RealPi benchmark results (time in seconds): TA-1021: 12.36; PH-1: 1.96; TA-1021+AEP: 1.18.


Figure 4.3: Geekbench 5 results (single-core / multi-core score): TA-1021: 86 / 310; PH-1: 386 / 1,642; TA-1021+AEP: 1,119 / 4,972.

These benchmarks include text and image compression, HTML5, SQLite, PDF and text rendering, Clang, face detection, ray tracing and speech recognition, among others. When a benchmark run is completed this app outputs a single-core and a multi-core score, which combine the individual results of each use case. The higher the score, the better.

The results displayed in figure 4.3 show a similar picture to the RealPi benchmark: the lower-end device is outperformed several times over when running standalone, but when the work is performed at the edge its results are several times better than those of a high-end smartphone.

4.3 Latency

The delay between an action and the result of that action is what we call latency. Ad-hoc changes and another purpose-built application were required to measure this time period. The testing application consists of a large button that changes color when pressed. The process to calculate the system latency is as follows:

First an event is detected and the corresponding callback is executed in the Remote Anbox Client app. Due to how the Android input stack processes the events, this event could be delayed by up to 16.7 ms after it enters the stack. At this moment we save a timestamp T1 using the call System.nanoTime().

This event is serialized and sent to the Remote Anbox Service. There, it’s translated to


the native events and picked up by the remote Android input stack. In the next frame rendered by the AEP, the color of the button will have changed. That frame is captured and the image is sent to the StreamingEncoder to be transmitted to the RAC, where it is decoded and placed on a surface. On the next VSync event the content of that surface is read back and, if the color of the button has changed, another timestamp T2 is recorded. Finally, both timestamps are compared and the difference, converted to milliseconds, is logged.

Figure 4.4: Latency measurements (end-to-end latency in ms).

Due to the synchronization of the framework, it is estimated that at least two frame periods of 1/60 s (≈16.7 ms each) could be lost due to the synchronization of the Android stack on both ends, whereas the rest is attributed to the processing overhead and network latency.

The test is performed in a LAN where the server is on a wired network and the mobile clients connect via Wi-Fi to an access point located on the same network.

From the 60 samples taken to measure the latency, the median is 237 ms and the average is 277 ms.

One possible reason for the high latency values depicted as outliers in figure 4.4 is the device going into power-saving mode (doze state in the IEEE 802.11 standard) [17]. Android supports disabling this mode from Android 101 as long as the manufacturer’s hardware abstraction layer supports it. In this case it wasn’t possible to enable the low latency

1 Wi-Fi Low-Latency Mode - https://source.android.com/devices/tech/connect/wifi-low-latency

mode on any of the devices used for testing.

The latency values obtained are consistent with other latency results collected in similar scenarios [18]. While it should be possible to distinguish an application running locally from an app running at the edge by exploiting latency-sensitive interactions, such as quickly dragging an element, the benefits of this system compensate for the interface delay in demanding applications.


5 Conclusions and future work

A proof of concept for an architecture that allows simple devices to run complex applications was designed and implemented. This architecture executes such applications in a remote server and uses the user device just to render the interface and capture the user interactions. The remote rendering system was developed using Anbox as a base platform.

The system was deployed in a conventional server and validated in a LAN environment, but the results can be extrapolated to 5G edge networks, where the latency between the user device, the radio access network and the edge server is very small.

We completed different tests to measure the performance gain achieved with the more powerful CPUs available in x86-64 servers, and to determine the latency introduced by the remote execution and rendering.

With our proposal, even low-end or highly embedded devices (wearables such as smart watches or augmented reality glasses) can “execute” complex Android applications. Nevertheless, the latency introduced may cause problems for highly demanding applications that require very fast reactions to user interactions (games or applications that interact with the environment).

Therefore, it will be necessary to find new mechanisms to reduce the latency. For example, according to the announcement about the upcoming version of the Android operating system1, special effort is being put into improving low latency streaming, with the addition of specific low latency features to the native MediaCodec API. Another reasonable expectation is that additional reductions will come from network improvements as 5G becomes ubiquitous (5G radio is being specifically designed to provide low latency communications).

1New features of Android 11 - https://developer.android.com/about/versions/11/features


5.1 Future work

With the architecture implemented in this master thesis, users can have “dumb smartphones” that just provide the interface to control remote applications running in a server. This approach is adequate for “static” users and devices, but when we include mobility it may be necessary to relocate the application to a new server near the user, or even to the local smartphone (although probably with poorer performance).

The Android operating system has mechanisms to quickly resume from paused states and to separate the user data from the application code, but additional research is needed to validate the possibility of seamless migrations (without interrupting or affecting the operation) of Android applications between different edge servers while the user is physically moving, or the migration of an app from the user’s device to the edge or cloud to continue running without consuming local resources.

Bibliography

[1] “5G At The Edge,” 5GAmericas.org, 2018.

[2] Q. Pham, F. Fang, V. N. Ha, M. J. Piran, M. Le, L. B. Le, W. Hwang, and Z. Ding, “A Survey of Multi-Access Edge Computing in 5G and Beyond: Fundamentals, Technology Integration, and State-of-the-Art,” IEEE Access, vol. 8, pp. 116974–117017, 2020.

[3] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge Computing: Vision and Challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, 2016.

[4] J. Chakareski, “VR/AR immersive communication: Caching, edge computing, and transmission trade-offs,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network, 2017, pp. 36–41.

[5] M. Halpern, Y. Zhu, and V. J. Reddi, “Mobile CPU’s rise to power: Quantifying the impact of generational mobile CPU design trends on performance, energy, and user satisfaction,” in 2016 IEEE International Symposium on High Performance Computer Architecture (HPCA), 2016, pp. 64–76.

[6] C. Haase and R. Guy, “Google I/O 2018 - Drawn out: how Android renders.” [Online]. Available: https://www.youtube.com/watch?v=zdQRIYOST64

[7] “Android Graphics Overview.” [Online]. Available: https://source.android.com/devices/graphics

[8] C. Simmonds, “The Android graphics path in depth.”

[9] J. Levin, Android Internals: A Confectioner’s Cookbook, Volume 1: The Power User’s View. Technologeeks.com, 2015.

[10] ——, “Android Input Architecture (AnDevCon Presentation, from Volume II),” 2015. [Online]. Available: http://newandroidbook.com/files/AndroidInput.pdf

[11] “Anbox - Android in a box.” [Online]. Available: https://anbox.io/


[12] “Android Hardware OpenGLES emulation design overview.” [Online]. Available: https://android.googlesource.com/platform/external/qemu/+/emu-master-dev/android/android-emugl/DESIGN

[13] Z. Qin, Y. Tang, E. Novak, and Q. Li, “MobiPlay: A Remote Execution Based Record-and-Replay Tool for Mobile Applications,” in Proceedings of the 38th International Conference on Software Engineering, ser. ICSE ’16. New York, NY, USA: Association for Computing Machinery, 2016, pp. 571–582. [Online]. Available: https://doi.org/10.1145/2884781.2884854

[14] C. Wang, Y. Wu, and H. Chung, “FUSION: A unified application model for virtual mobile infrastructure,” in 2017 IEEE Conference on Dependable and Secure Computing, 2017, pp. 224–231.

[15] T. Chiueh, H. Lin, A. Chao, Anthony, T. Wu, C. Wang, and Y. Wu, “Smartphone Virtualization,” in 2016 IEEE 22nd International Conference on Parallel and Distributed Systems (ICPADS), 2016, pp. 141–150.

[16] Y. Lu, S. Li, and H. Shen, “Virtualized Screen: A Third Element for Cloud–Mobile Convergence,” IEEE MultiMedia, vol. 18, no. 2, pp. 4–11, 2011.

[17] Y. Zhu and V. C. M. Leung, “Efficient Power Management for Infrastructure IEEE 802.11 WLANs,” IEEE Transactions on Wireless Communications, vol. 9, no. 7, pp. 2196–2205, 2010.

[18] Y. Tang, “Exploring New Paradigms for Mobile Edge Computing,” 2018.
