Unikernel as Service Units on Distributed Resource-constrained Edge Devices
Md Towfiqul Alom

Unikernel as Service Units on Distributed Resource-constrained Edge Devices

Faculty of Information Technology and Communication Sciences (ITC)
Master's thesis
November 2020

Abstract

Md Towfiqul Alom: Unikernel as Service Units on Distributed Resource-constrained Edge Devices
Master's thesis
Tampere University
Master's Degree Programme in Information Technology
November 2020

There are almost 31 billion IoT devices in the world. Because of this widespread use, the concepts of virtualization and cloud computing are moving towards supporting IoT systems. Deploying computational units on IoT devices raises many challenges because of their resource-constrained nature and the absence of high processing power. A new technology called the unikernel provides the means to meet these challenges. A unikernel combines the application stack with a specialised kernel that contains only the operating-system libraries required to run that stack. The end result is a low-memory-footprint image that can be deployed on memory-constrained IoT devices.

In industrial settings, hundreds of IoT devices run on the same network and share their computational units with each other. Managing and orchestrating the services running on these devices requires additional overhead application units, which makes the whole automation process much more complex. The Arrowhead Framework is an industrial automation tool that provides a solution to these challenges. With its service-oriented architecture (SOA) approach, the framework provides collaborative automation to maintain interconnectivity between computational units on IoT devices in the form of a System of Systems (SoS). The Arrowhead Framework introduces a local cloud concept that removes the additional application-stack overhead needed to achieve automation processes such as service discovery, registry and orchestration.

Keywords: operating system, unikernel, ARM 64, orchestration, Arrowhead Framework.

The originality of this thesis has been checked using the Turnitin Originality Check service.

Contents

1 Introduction
  1.1 Problem Statement
  1.2 Research Questions
  1.3 Objectives
  1.4 Systematic methodology
2 Background
  2.1 Library Operating System: A new OS concept
  2.2 Virtualization model: Hypervisor
    2.2.1 Kernel-based virtual machine
    2.2.2 VMware ESXi
    2.2.3 Xen hypervisor
  2.3 Containerization technology
    2.3.1 Docker containerization
    2.3.2 Other technologies
  2.4 Unikernel: A new containerization technology
    2.4.1 Why do we need a new technology for containerization?
    2.4.2 What is a unikernel?
    2.4.3 Unikernel architecture
    2.4.4 Advantages of unikernels
  2.5 State of the art unikernel technologies
    2.5.1 MirageOS
    2.5.2 Rumprun
    2.5.3 IncludeOS
    2.5.4 OPS-NanoVM
    2.5.5 OSv
  2.6 Different unikernel ecosystems
    2.6.1 Jitsu
    2.6.2 Solo5
    2.6.3 UniK
  2.7 Arrowhead Framework
    2.7.1 Automation vs orchestration
    2.7.2 Need for an orchestration framework: Unikernel perspective
    2.7.3 Arrowhead Framework: The solution
    2.7.4 Mandatory core systems and services
    2.7.5 Application services and systems
3 Proof of Concept
  3.1 Framework of possible approaches
  3.2 Environment setup: Unikernel implementation
  3.3 Experiment I: Unikernel development scope
    3.3.1 Unikernel technology solution choices
    3.3.2 Hierarchy of the unikernel build process
    3.3.3 Program-specific unikernel development
      3.3.3.1 MirageOS unikernel development
    3.3.4 Global unikernel compiler based development
      3.3.4.1 OPS NanoVM unikernel development
      3.3.4.2 OSv unikernel development
  3.4 Experiment II: Analysis of boot time and footprint monitoring with test cases
    3.4.1 Test setup
    3.4.2 Boot time test case
    3.4.3 Memory footprint test case
  3.5 Experiment III: Deployment of the unikernel image of the proof-of-concept solution to an ARM device
    3.5.1 Preferred unikernel technology
    3.5.2 Proof-of-concept solution
    3.5.3 Unikernel of the proof-of-concept solution
    3.5.4 Deployment of the unikernel to ARM 64
  3.6 Experiment IV: Orchestration of the unikernel proof-of-concept solution using Arrowhead
    3.6.1 Environment setup
    3.6.2 Arrowhead cloud setup
    3.6.3 Inter-cloud orchestration
4 Evaluation
  4.1 Evaluation of unikernel technology
  4.2 Evaluation of the Arrowhead framework
5 Conclusions
References

1 Introduction

The term virtualization became a great buzzword after its introduction by IBM in the 1960s (Kochut and Beaty 2007). The technology laid the foundation for modern-day utility computing and cloud infrastructure. Before virtualization, running a service required a running physical server together with its other required services. Virtualization allows users to create and use multiple environments or resources on one physical system. The cloud, on the other hand, uses this technology to share resources in a scalable manner over a network. The core concept of cloud infrastructure is to support numerous operating systems without physically copying the application to different systems. In that scenario the hypervisor becomes the core delivery infrastructure. The hypervisor, also commonly known as the virtual machine monitor (VMM), is used to create and run virtual machines. The virtual machines contain guest operating systems such as Windows, Linux or macOS as standalone systems that share the virtualized hardware resources of a single physical host machine.
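To make the hypervisor interface concrete, the short C sketch below (an illustrative addition, not taken from the thesis) probes the Linux KVM module, one of the hypervisors covered in Section 2.2.1, through its /dev/kvm device. The ioctl requests KVM_GET_API_VERSION and KVM_CREATE_VM are part of the public KVM user-space API; the surrounding program structure and error handling are only an assumption about how a VMM would begin building a virtual machine.

    /*
     * Minimal sketch: probing the Linux KVM hypervisor interface from
     * user space. KVM exposes hardware virtualization through the
     * /dev/kvm device; a VMM such as QEMU builds complete virtual
     * machines on top of these ioctls by adding memory regions and
     * virtual CPUs to the VM file descriptor obtained below.
     */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) {
            perror("open /dev/kvm");
            return 1;
        }

        /* The stable KVM user-space API reports version 12. */
        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", version);

        /* Ask the kernel to create an empty virtual machine. */
        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);
        if (vmfd < 0) {
            perror("KVM_CREATE_VM");
        } else {
            printf("created VM file descriptor: %d\n", vmfd);
            close(vmfd);
        }

        close(kvm);
        return 0;
    }

Creating such an empty virtual machine and then mapping guest memory and virtual CPUs onto it is, in essence, the first step every KVM-based VMM performs; the guest operating systems described above run inside the resulting machine.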
The hypervisor works on top of the hardware layer to isolate the guest operating systems and provide them with the underlying hardware resources. Common examples of hypervisors are Microsoft Hyper-V, VMware ESXi and Citrix XenServer. In cloud computing, infrastructure as a service (IaaS) is a popular resource-allocation model in which users run virtual machines on the provider's servers and consume resources on demand, without buying or deploying a whole physical infrastructure or underlying hardware system. Usually these virtual machines contain a full-fledged operating system that has all the features necessary to run a variety of applications.

Since the introduction of the phenomenal concept of "scaling" by computer scientist Dr. Douglas Engelbart (Barnes 1997), computer hardware, CPU chips and embedded microcontrollers have been getting smaller while generating more computational power. In the software industry the trend is quite the opposite. Because of the feature-rich nature of modern operating systems, they consume more resources in an IaaS environment. Also, maintaining service uptime is a crucial factor, but it is problematic due to this complexity, and shutting down and redeploying virtual machines that contain an entire operating system often creates a mess. Using container technology is the most popular solution to this problem.

Virtual machines contain an entire operating system along with the application. A physical host machine running two virtual machines needs a hypervisor with two standalone operating systems running on top of it. In containerization, a single operating system runs on the host and each container shares the operating-system kernel with the other containers. The read-only operating system along with the application is packaged into a container and deployed in seconds. This makes containers more lightweight than virtual machines. A host machine can deploy more containers than virtual machines, as containers consume fewer resources. One crucial benefit of containerization is that a container boots in seconds, while a virtual machine takes several minutes.

Figure 1.1: Architecture comparison of virtual machines, Linux containers and unikernels (CETIC 2013).

Containers run on a general-purpose operating system such as Windows or Linux, which performs hundreds of system-level operations that the target application running in the container does not need. These operations consume gigabytes of data storage and computational power, which leads to additional cost in an IaaS cloud scenario. Hence