Unikernel As Service Units on Distributed Resource-Constrained Edge Devices

File type: PDF, 16 pages, size: 1020 kB

Md Towfiqul Alom
Faculty of Information Technology and Communication Sciences (ITC)
Master's thesis, November 2020

Abstract

Md Towfiqul Alom: Unikernel as Service Units on Distributed Resource-constrained Edge Devices
Master's thesis, Tampere University
Master's Degree Programme in Information Technology
November 2020

There are almost 31 billion IoT devices in the world. Because of this widespread adoption, virtualization and cloud computing are moving towards supporting IoT systems. Many challenges arise when deploying computational units on IoT devices because of their resource-constrained nature and lack of high processing power. A new technology called the unikernel helps meet these challenges. A unikernel combines the application stack with a specialised kernel that contains only the operating system libraries required to run that stack. The end result is a low-memory-footprint image that can be deployed on memory-constrained IoT devices. On an industrial network, hundreds of IoT devices run side by side and share their computational units with each other. Managing and orchestrating the services running on these devices requires additional application overhead, which makes the whole automation process much more complex. The Arrowhead framework, an industrial automation tool, provides a solution to these challenges. With its service-oriented architecture (SOA) approach, the framework provides collaborative automation to maintain interconnectivity between the computational units of IoT devices in the form of a System of Systems (SoS). The Arrowhead framework introduces a local cloud concept that removes the additional application stack overhead otherwise needed for automation processes such as service discovery, registry and orchestration.
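The service registry interaction described above can be sketched in a few lines. The following minimal Python sketch builds the JSON body a provider system might send to an Arrowhead-style service registry; the field names follow the Arrowhead 4.x Service Registry API as commonly documented, but the exact schema, endpoint, and all concrete names, addresses and ports here are illustrative assumptions, not taken from the thesis.

```python
import json

def registration_payload(service_definition, system_name, address, port, service_uri):
    """Build the JSON body a provider system sends to the service registry.

    Field names follow the Arrowhead 4.x Service Registry API as commonly
    documented; treat the exact schema and all concrete values here as
    assumptions to check against the framework's own documentation.
    """
    return {
        "serviceDefinition": service_definition,
        "providerSystem": {
            "systemName": system_name,
            "address": address,
            "port": port,
        },
        "serviceUri": service_uri,
        "interfaces": ["HTTP-INSECURE-JSON"],  # protocol/security/encoding triple
    }

payload = registration_payload("indoor-temperature", "sensor-unikernel",
                               "192.168.1.20", 8888, "/temperature")
print(json.dumps(payload, indent=2))
# A provider would then POST this to the registry, e.g. (hypothetical host/port):
# requests.post("http://<registry-host>:8443/serviceregistry/register", json=payload)
```

A unikernel-packaged service would emit such a registration on boot, after which the orchestrator can match it to consumer requests.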
Keywords: operating system, unikernel, ARM 64, orchestration, Arrowhead framework.

The originality of this thesis has been checked using the Turnitin Originality Check service.

Contents
1 Introduction
  1.1 Problem Statement
  1.2 Research Questions
  1.3 Objectives
  1.4 Systematic methodology
2 Background
  2.1 Library Operating System: A new OS concept
  2.2 Virtualization model: Hypervisor
    2.2.1 Kernel-based virtual machine
    2.2.2 VMware ESXi
    2.2.3 Xen hypervisor
  2.3 Containerization technology
    2.3.1 Docker containerization
    2.3.2 Other technologies
  2.4 Unikernel: A new containerization technology
    2.4.1 Why do we need a new technology for containerization?
    2.4.2 What is a unikernel?
    2.4.3 Unikernel architecture
    2.4.4 Advantages of unikernels
  2.5 State-of-the-art unikernel technologies
    2.5.1 MirageOS
    2.5.2 Rumprun
    2.5.3 IncludeOS
    2.5.4 OPS-NanoVM
    2.5.5 OSv
  2.6 Different unikernel ecosystems
    2.6.1 Jitsu
    2.6.2 Solo5
    2.6.3 UniK
  2.7 Arrowhead Framework
    2.7.1 Automation vs orchestration
    2.7.2 Need for an orchestration framework: the unikernel perspective
    2.7.3 Arrowhead Framework: the solution
    2.7.4 Mandatory core systems and services
    2.7.5 Application services and systems
3 Proof of Concept
  3.1 Framework of possible approaches
  3.2 Environment setup: unikernel implementation
  3.3 Experiment I: Unikernel development scope
    3.3.1 Unikernel technology solution choices
    3.3.2 Hierarchy of the unikernel build process
    3.3.3 Program-specific unikernel development
      3.3.1 MirageOS unikernel development
    3.3.4 Global unikernel compiler-based development
      3.4.1 OPS Nano VM unikernel development
      3.4.2 OSv unikernel development
  3.4 Experiment II: Analysis of boot time and footprint monitoring with test cases
    3.4.1 Test setup
    3.4.2 Boot time test case
    3.4.3 Memory footprint test case
  3.5 Experiment III: Deployment of the proof-of-concept unikernel image to an ARM device
    3.5.1 Preferred unikernel technology
    3.5.2 Proof-of-concept solution
    3.5.3 Unikernel of the proof-of-concept solution
    3.5.4 Deployment of the unikernel to ARM 64
  3.6 Experiment IV: Orchestration of the unikernel proof-of-concept solution using Arrowhead
    3.6.1 Environment setup
    3.6.2 Arrowhead cloud setup
    3.6.3 Inter-cloud orchestration
4 Evaluation
  4.1 Evaluation of unikernel technology
  4.2 Evaluation of the Arrowhead framework
5 Conclusions
References

1 Introduction

The term virtualization became a great buzzword after its introduction by IBM in the 1960s (Kochut and Beaty 2007). The technology laid the foundation for modern-day utility computing and cloud infrastructure. Before virtualization, running a service required a dedicated physical server alongside other required services. Virtualization allows users to create and use multiple environments or resources on one physical system. The cloud, in turn, uses the technology to share those resources as a scalable pool over a network. The core concept of cloud infrastructure is to support numerous operating systems without physically copying applications to different systems. In that scenario the hypervisor becomes the core delivery infrastructure. The hypervisor, also commonly known as the virtual machine monitor (VMM), is used to create and run virtual machines. The virtual machines contain guest operating systems such as Windows, Linux or macOS as standalone systems that share the virtualized hardware resources of a single physical host machine. The hypervisor works on top of the hardware layer to isolate the guest operating systems and provide them with the underlying hardware resources. Common examples of hypervisors are Microsoft Hyper-V, VMware ESXi and Citrix XenServer. In cloud computing, infrastructure as a service (IaaS) is a popular resource allocation model in which users run virtual machines on a provider's servers and consume resources on demand, without buying or deploying a whole physical infrastructure or the underlying hardware. Usually these virtual machines contain a full-fledged operating system that has all the features necessary to run a variety of applications.
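To make the hypervisor's role concrete, the sketch below assembles (without executing) a QEMU/KVM command line of the kind an IaaS toolchain might use to boot a guest image. The image name and the particular option set are hypothetical, chosen only to show the moving parts: the in-kernel hypervisor, the guest's RAM and vCPU allocation, and its virtual disk.

```python
def qemu_command(image, memory_mb=512, vcpus=1):
    """Assemble a QEMU/KVM invocation for a guest VM image.

    Illustrative only: the image path and option choices are hypothetical,
    not taken from the thesis.
    """
    return [
        "qemu-system-x86_64",
        "-enable-kvm",                        # use the in-kernel KVM hypervisor
        "-m", str(memory_mb),                 # guest RAM in MiB
        "-smp", str(vcpus),                   # number of virtual CPUs
        "-drive", f"file={image},format=qcow2",  # virtual disk backing the guest
        "-nographic",                         # serial console instead of a display
    ]

cmd = qemu_command("guest.qcow2", memory_mb=1024, vcpus=2)
print(" ".join(cmd))
```

Everything the guest sees — CPUs, memory, disk — is carved out of the host by the hypervisor, which is exactly the resource multiplexing the paragraph above describes.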
After the introduction of the phenomenal concept of "scaling" by the computer scientist Dr. Douglas Engelbart (Barnes 1997), computer hardware, CPU chips and embedded microcontrollers have been getting smaller while generating more computational power. In the software industry the trend is quite the opposite: because of the feature-rich nature of modern operating systems, they consume ever more resources in an IaaS environment. Maintaining the uptime of services is also a crucial factor, but it is problematic due to this complexity, and tearing down and redeploying virtual machines that each contain an entire operating system often creates a mess. Using container technology is the most popular solution to this problem. Virtual machines contain an entire operating system along with the application: a physical host machine running two virtual machines needs a hypervisor with two standalone operating systems running on top of it. In containerization, the host runs a single operating system, and each container shares the operating system kernel with the other containers. The read-only operating system layers along with the application are packaged into a container and deployed in seconds. This makes containers more lightweight than virtual machines, and a host machine can deploy more containers than virtual machines since they consume fewer resources. One crucial benefit of containerization is that a container takes seconds to boot while a virtual machine takes several minutes.

Figure 1.1: Architecture comparison of virtual machines, Linux containers and unikernels (CETIC 2013).

Containers run on a general-purpose operating system such as Windows or Linux, which performs hundreds of system-level operations that the target application running in the container does not need. These operations consume gigabytes of storage and computational power, which leads to additional cost in an IaaS cloud scenario. Hence
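The container side of this comparison can be sketched the same way: starting a container is just starting a process under the shared host kernel, with resource limits applied through cgroup flags rather than by allocating a whole virtual machine. The sketch below assembles (without executing) a `docker run` invocation; the image name and limits are hypothetical.

```python
def docker_command(image, command, memory="128m"):
    """Assemble a `docker run` invocation for a container.

    Containers share the host kernel, so "booting" one is just starting a
    process; resource limits are applied through cgroup flags. The image
    name and limit values here are hypothetical.
    """
    return [
        "docker", "run", "--rm",  # remove the container when it exits
        "--memory", memory,       # cgroup memory cap for the container
        image,
        *command,                 # process to run inside the container
    ]

cmd = docker_command("alpine:3.18", ["echo", "hello"], memory="64m")
print(" ".join(cmd))
```

Because no guest kernel has to boot, the time from this command to a running process is typically well under a second, which is the lightweight-startup advantage the paragraph above attributes to containers.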