“A Linux in Unikernel Clothing”

“A Linux in Unikernel Clothing” (Lupine Linux)
Dan Williams (IBM), with Hsuan-Chi (Austin) Kuo (UIUC + IBM), Ricardo Koller (IBM), and Sibin Mohan (UIUC)

Roadmap
• Context
  • Containers and isolation
  • Unikernels
  • Nabla containers
• Lupine Linux: a Linux in unikernel clothing
• Concluding thoughts

Containers are great!
• They have changed how applications are packaged, deployed, and developed
• Normal processes, but “contained” via namespaces, cgroups, and chroot (see the sketch below)
• Lightweight: they start quickly and run on “bare metal”
• Easy image management (layered filesystems)
• Tooling/orchestration ecosystem
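As a side note on the “contained” bullet above, the hedged sketch below (not from the talk) shows the kind of primitive involved: clone(2) starts a child in fresh UTS and mount namespaces, the same building blocks that container runtimes combine with cgroups and chroot/pivot_root.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Child entry point: runs in fresh UTS and mount namespaces, so
       changing the hostname here is invisible to the host. */
    static int child(void *arg)
    {
        (void)arg;
        sethostname("contained", 9);
        char name[64];
        gethostname(name, sizeof(name));
        printf("inside:  hostname = %s\n", name);
        return 0;
    }

    static char child_stack[1024 * 1024];

    int main(void)
    {
        /* Needs root (or a user namespace) to succeed; the stack grows
           down on x86, so pass its top to clone(). */
        pid_t pid = clone(child, child_stack + sizeof(child_stack),
                          CLONE_NEWUTS | CLONE_NEWNS | SIGCHLD, NULL);
        if (pid < 0) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);

        char name[64];
        gethostname(name, sizeof(name));
        printf("outside: hostname = %s\n", name);
        return 0;
    }

Built with gcc and run as root, the hostname printed “inside” differs from the one printed “outside”, which is the essence of a contained process.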
But…
• Large attack surface to the host
• Limits adoption of container-first architecture
• Fortunately, we know how to reduce attack surface!
[Diagram: a containerized app over a high-level abstraction (e.g., system calls) on a host kernel with namespacing (e.g., Linux)]

Deprivileging and unsharing kernel functionality
• Virtual machines (VMs): each app gets its own guest kernel (e.g., Linux) behind a thin, low-level interface (virtual hardware), running over a monitor process (e.g., QEMU) and the host kernel/hypervisor (e.g., Linux/KVM)
• Userspace kernels (e.g., UML): same idea, but with performance issues
• Library OS / unikernel: only what you need, lightweight
[Diagram: VMs (app on guest kernel on monitor process on host kernel/hypervisor) vs. containers (app on host kernel with namespacing)]

But wait, aren’t VMs slow and heavyweight?
• Boot time?
• Memory footprint?
• Especially for environments like serverless??!!

VMs are becoming lightweight
• Thin monitors, e.g., AWS Firecracker
  • Reduce complexity for performance (e.g., no PCI)
  • Firecracker boot times as reported in Agache et al., NSDI 2020; see also Manco et al., SOSP 2017
• Thin guests?
  • Userspace: e.g., Ubuntu --> Alpine Linux
  • Kernel configuration: e.g., TinyX, Lupine (KubeCon 2020, EuroSys 2020)
  • Unikernels

Unikernels are thin guests taken to the extreme
• An application linked with library OS components
• Runs on a virtual-hardware(-like) abstraction, as a single-CPU VM
• Language-specific: MirageOS (OCaml), IncludeOS (C++)
• Legacy-oriented: Rumprun (NetBSD-based); Hermitux and OSv claim binary compatibility with Linux

What we learned from Nabla containers
• Nabla containers are unikernels as processes
• They can achieve or exceed the lightweight characteristics of containers
• Interfaces are what matter, not virtualization
• But we lose a lot: generality
• Lupine Linux: applying unikernel techniques to Linux VMs (EuroSys ’20), this talk
(Related publications noted on the slide: HotCloud ’16, HotOS ’17, HotCloud ’18, SOCC ’18.)
[Diagram: unikernel as process, i.e. application, libraries/runtimes, and a library OS (TCP/IP, FS, …) on Solo5 on HW]

Lupine Linux Overview and Roadmap
• Introduction
• Unikernel-like techniques
  • Specialization via Kconfig
  • System call overhead elimination via KML
• Putting it together
• Evaluation
• Discussion
• Related work
[Diagram: the application (container), app rootfs, application manifest, and Linux source feed Specialization and System Call Overhead Elimination to produce the Lupine Linux “unikernel”]

Unikernels are great…
• Small kernel size
• Fast boot time
• Performance
• Security

…but they lack full Linux support
• Hermitux: supports only 97 system calls
• OSv: the application needs to be compiled with -PIE and can’t use TLS; statically linked applications are not supported; fork() and execve() are not supported; special files such as /proc are not supported; the signal mechanism is not complete
• Rumprun: only 37 curated applications
• The communities are too small to keep them rolling

Can Linux be as small as, boot as fast as, and outperform unikernels?
• Spoiler alert: Yes!
• 4MB image size
• 23 ms boot time
• Up to 33% higher throughput
Unikernel technique #1: Specialization
• Unikernels include only what is needed
• Linux is very configurable: Kconfig has roughly 16,000 options (drivers, filesystems, processor features, ...)

Specializing Linux through configuration
• Start with the Firecracker microvm configuration
• Assuming a unikernel-like workload, we can remove even more:
  • Application-specific options
  • Multiprocessing
  • Hardware management

Application-specific options (example: system calls)
• Kernel services (e.g., /proc, sysctl)
• Kernel library (crypto routines, compression routines)
• Debugging/information

Table 1. Linux configuration options that enable/disable system calls
  ADVISE_SYSCALLS: madvise, fadvise64
  AIO:             io_setup, io_destroy, io_submit, io_cancel, io_getevents
  BPF_SYSCALL:     bpf
  EPOLL:           epoll_ctl, epoll_create, epoll_wait, epoll_pwait
  EVENTFD:         eventfd, eventfd2
  FANOTIFY:        fanotify_init, fanotify_mark
  FHANDLE:         open_by_handle_at, name_to_handle_at
  FILE_LOCKING:    flock
  FUTEX:           futex, set_robust_list, get_robust_list
  INOTIFY_USER:    inotify_init, inotify_add_watch, inotify_rm_watch
  SIGNALFD:        signalfd, signalfd4
  TIMERFD:         timerfd_create, timerfd_gettime, timerfd_settime
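To make the specialization idea concrete, here is a hypothetical fragment of an application-specific kernel .config in the spirit of Lupine (illustrative only; the syscall-related option names come from Table 1, and CONFIG_PAGE_TABLE_ISOLATION is the upstream name for KPTI; the actual per-application option sets are derived from the application manifest and are not reproduced here).

    # Hypothetical application-specific fragment of a Lupine-style
    # kernel .config (illustrative only).

    # Keep what the application actually exercises:
    CONFIG_FUTEX=y
    CONFIG_EPOLL=y

    # Drop syscall families the application never uses:
    # CONFIG_AIO is not set
    # CONFIG_BPF_SYSCALL is not set
    # CONFIG_FANOTIFY is not set
    # CONFIG_INOTIFY_USER is not set
    # CONFIG_SIGNALFD is not set
    # CONFIG_TIMERFD is not set

    # Single security domain: costly hardening such as KPTI can go too:
    # CONFIG_PAGE_TABLE_ISOLATION is not set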
Paper excerpt (Lupine, EuroSys ’20) shown alongside this slide; ellipses mark where the visible text cuts off:

“A Lupine kernel compiled for redis does not contain the AIO or EVENTFD-related system calls. In addition to the above, some applications expect other services from the kernel, for instance, the /proc filesystem or sysctl functionality. Moreover, the Linux kernel maintains a substantial library that resides in the kernel because of its traditional position as a more privileged security domain. Unikernels do not maintain the traditional privilege separation …

Furthermore, the kernel is usually run in a separate, more privileged security domain than the application. As such, the kernel contains enhanced access control systems such as SELinux and functionality to guard the crossing from the application domain to the kernel domain, such as seccomp filters, all of which are unnecessary for unikernels. More importantly, security options with a severe impact on performance are also unnecessary for this reason. For example, KPTI (kernel page table isolation [9]) forbids the mapping of kernel pages into processes’ page tables to mitigate the Meltdown [39] vulnerability. This dramatically affects system call performance; when testing with KPTI on Linux 5.0 we measured a 10x slowdown in system call latency. In total, we eliminated 12 configuration options due to the single security domain.

Linux is well equipped to run on multiple-processor systems. As a result, the kernel contains various options to include and tune SMP and NUMA functionality. On the other hand, since most unikernels do not support fork, the standard approach to take advantage of multiple processors is to run multiple unikernels.

Finally, Linux contains facilities for dynamically loading functionality through modules. A single application facilitates …”
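The 10x KPTI number quoted above is the kind of result a small microbenchmark can reproduce. The hedged sketch below (not the paper’s harness) times a large number of trivial system calls; comparing runs on kernels booted with pti=on versus pti=off exposes the cost of the user/kernel crossing.

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const long iters = 1000000;
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iters; i++)
            syscall(SYS_getpid);            /* always enters the kernel */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("%.1f ns per system call\n", ns / iters);
        return 0;
    }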
Recommended publications
  • Intra-Unikernel Isolation with Intel Memory Protection Keys
Intra-Unikernel Isolation with Intel Memory Protection Keys
Mincheol Sung (Virginia Tech, USA), Pierre Olivier (The University of Manchester, United Kingdom), Stefan Lankes (RWTH Aachen University, Germany), Binoy Ravindran (Virginia Tech, USA)

Abstract: Unikernels are minimal, single-purpose virtual machines. This new operating system model promises numerous benefits within many application domains in terms of lightweightness, performance, and security. Although the isolation between unikernels is generally recognized as strong, there is no isolation within a unikernel itself. This is due to the use of a single, unprotected address space, a basic principle of unikernels that provides their lightweightness and performance benefits. In this paper, we propose a new design that brings memory isolation inside a unikernel instance while keeping a single address space. We leverage Intel’s Memory

ACM Reference Format: Mincheol Sung, Pierre Olivier, Stefan Lankes, and Binoy Ravindran. 2020. Intra-Unikernel Isolation with Intel Memory Protection Keys. In 16th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE ’20), March 17, 2020, Lausanne, Switzerland. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3381052.3381326

1 Introduction: Unikernels have gained attention in the academic research community, offering multiple benefits in terms of improved performance, increased security, reduced costs, etc. As a result,
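The hardware feature the abstract leverages is exposed on Linux through the pkey_* wrappers. As a hedged, stand-alone illustration (not code from the paper; it assumes an MPK-capable Intel CPU and glibc 2.27 or newer), the sketch below tags a page with a protection key and toggles access for the current thread without changing the page tables.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4096;
        char *secret = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        strcpy(secret, "isolated state");

        /* Allocate a protection key and tag the page with it
           (error checks omitted for brevity). */
        int pkey = pkey_alloc(0, 0);
        pkey_mprotect(secret, len, PROT_READ | PROT_WRITE, pkey);

        /* Revoke access for this thread: loads/stores now fault even
           though the page stays mapped in the single address space. */
        pkey_set(pkey, PKEY_DISABLE_ACCESS);
        /* ... an untrusted component would run here ... */

        /* Restore access before trusted code touches the data again. */
        pkey_set(pkey, 0);
        printf("%s\n", secret);

        pkey_free(pkey);
        return 0;
    }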
  • A Linux in Unikernel Clothing (Lupine)
A Linux in Unikernel Clothing
Hsuan-Chi Kuo+, Dan Williams*, Ricardo Koller*, and Sibin Mohan+
+University of Illinois at Urbana-Champaign   *IBM Research

Unikernels are great (small kernel size, fast boot time, improved performance, better security) BUT they lack full Linux support:
● Hermitux: supports only 97 system calls
● OSv:
  ○ fork(), execve() are not supported
  ○ Special files such as /proc are not supported
  ○ Signal mechanism is not complete
● Rumprun: only 37 curated applications
● Community is too small to keep it rolling
[Diagram: a full VM stack (app, kernel, hypervisor; “heavy”, “inefficient”) next to a unikernel stack (LibOS + app on a hypervisor)]

Can Linux behave like a unikernel?

Lupine Linux
● Kernel Mode Linux (KML)
  ○ Enables a normal user process to run in kernel mode
  ○ Processes can still use system services such as paging and scheduling
  ○ The app calls kernel routines directly, without privilege-transition costs
● Minimal patch to libc (see the sketch below)
  ○ Replace the syscall instruction with a call
  ○ The address of the called function is exported by the patched KML kernel using the vsyscall page
  ○ No application changes or recompilation required

Evaluation metrics, based on unikernel benefits: boot time, image size, memory footprint, application performance, syscall overhead

Configuration diversity
● 20 top apps on Docker Hub (83% of all downloads)
● Only 19 configuration options are required to run all 20 applications: lupine-general

Evaluation, comparison configurations: Lupine (lupine-base + app-specific options), lupine-general (one kernel for all 20 apps), cloud operating systems (e.g., OSv), and Linux-based unikernels
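The libc patch described above replaces the syscall instruction with an ordinary call. The hedged, self-contained illustration below shows that dispatch idea only; the names are assumptions, this is not the real KML or libc code, and in the real system the entry address is exported by the patched kernel through the vsyscall page.

    #include <stdio.h>

    typedef long (*entry_fn)(long nr, long a1, long a2, long a3);

    /* Stand-in for the routine a KML kernel would export. */
    static long fake_kernel_entry(long nr, long a1, long a2, long a3)
    {
        (void)a1; (void)a2; (void)a3;
        printf("handled syscall %ld without a privilege transition\n", nr);
        return 0;
    }

    /* In a patched libc this pointer would be filled in at startup from
       the address the KML kernel exports. */
    static entry_fn kernel_entry = fake_kernel_entry;

    static long kml_syscall(long nr, long a1, long a2, long a3)
    {
        return kernel_entry(nr, a1, a2, a3);   /* a call, not a syscall */
    }

    int main(void)
    {
        kml_syscall(39 /* __NR_getpid on x86-64 */, 0, 0, 0);
        return 0;
    }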
  • Unikernel Monitors: Extending Minimalism Outside of the Box
Unikernel Monitors: Extending Minimalism Outside of the Box
Dan Williams and Ricardo Koller, IBM T.J. Watson Research Center

Abstract: Recently, unikernels have emerged as an exploration of minimalist software stacks to improve the security of applications in the cloud. In this paper, we propose extending the notion of minimalism beyond an individual virtual machine to include the underlying monitor and the interface it exposes. We propose unikernel monitors. Each unikernel is bundled with a tiny, specialized monitor that only contains what the unikernel needs, both in terms of interface and implementation. Unikernel monitors improve isolation through minimal interfaces, reduce complexity, and boot unikernels quickly. Our initial prototype, ukvm, is less than 5% the code size of a traditional monitor, and boots MirageOS unikernels in as little as 10ms (8× faster than a traditional monitor).

Figure 1: The unit of execution in the cloud as (a) a unikernel, built from only what it needs, running on a VM abstraction; or (b) a unikernel running on a specialized unikernel monitor implementing only what the unikernel needs.

1 Introduction: Minimal software stacks are changing the way we think about assembling applications for the cloud. A minimal amount of software implies a reduced attack surface and a better understanding of the system, leading to increased security. … the application (unikernel) and the rest of the system, as defined by the virtual hardware abstraction, minimal? Can application dependencies be tracked through the interface and even define a minimal virtual machine monitor (or in this case a unikernel monitor) for the application, thus producing a maximally isolated, minimal exe…
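For a sense of how small the host-facing side of such a monitor can be, the hedged sketch below (not ukvm or Solo5 code) shows the handful of KVM ioctls a minimal monitor needs to create a VM, one memory slot, and one vCPU; the rest of a unikernel monitor is loading the image and handling a few exit reasons.

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) { perror("/dev/kvm"); return 1; }

        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0UL);

        size_t mem_size = 2 << 20;                 /* 2 MiB guest memory */
        void *mem = mmap(NULL, mem_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        struct kvm_userspace_memory_region region = {
            .slot = 0,
            .guest_phys_addr = 0,
            .memory_size = mem_size,
            .userspace_addr = (unsigned long)mem,
        };
        ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

        int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0UL);
        long mmap_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, NULL);
        struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpufd, 0);
        (void)run;

        /* A real monitor would now copy the unikernel image into `mem`,
           set registers, and loop on KVM_RUN, handling only the exit
           reasons the unikernel actually uses (e.g., KVM_EXIT_IO). */
        printf("VM and vCPU created\n");
        return 0;
    }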
  • Assessing Unikernel Security April 2, 2019 – Version 1.0
NCC Group Whitepaper: Assessing Unikernel Security
April 2, 2019 – Version 1.0
Prepared by Spencer Michaels and Jeff Dileo

Abstract: Unikernels are small, specialized, single-address-space machine images constructed by treating component applications and drivers like libraries and compiling them, along with a kernel and a thin OS layer, into a single binary blob. Proponents of unikernels claim that their smaller codebase and lack of excess services make them more efficient and secure than full-OS virtual machines and containers. We surveyed two major unikernels, Rumprun and IncludeOS, and found that this was decidedly not the case: unikernels, which in many ways resemble embedded systems, appear to have a similarly minimal level of security. Features like ASLR, W^X, stack canaries, heap integrity checks and more are either completely absent or seriously flawed. If an application running on such a system contains a memory corruption vulnerability, it is often possible for attackers to gain code execution, even in cases where the application’s source and binary are unknown. Furthermore, because the application and the kernel run together as a single process, an attacker who compromises a unikernel can immediately exploit functionality that would require privilege escalation on a regular OS, e.g. arbitrary packet I/O. We demonstrate such attacks on both Rumprun and IncludeOS unikernels, and recommend measures to mitigate them.

Table of Contents (excerpt): 1 Introduction to Unikernels; 2 Threat Model Considerations (2.1 Unikernel Capabilities, 2.2 Unikernels Versus Containers); 3 Hypothesis; 4 Testing Methodology (4.1 ASLR, 4.2 Page Protections, ...)
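One of the whitepaper’s checks is easy to approximate. The hedged probe below (not the NCC Group methodology itself) prints the addresses of code, stack, and heap objects; with working ASLR two consecutive runs print different values, while a fixed layout repeats them exactly.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int on_stack = 0;
        void *on_heap = malloc(16);

        /* Run twice and compare: differing addresses indicate ASLR. */
        printf("code : %p\n", (void *)main);
        printf("stack: %p\n", (void *)&on_stack);
        printf("heap : %p\n", on_heap);

        free(on_heap);
        return 0;
    }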
  • Jails and Unikernels
Virtualisation: Jails and Unikernels
Advanced Operating Systems, Lecture 18. Colin Perkins | https://csperkins.org/ | Copyright © 2017
This work is licensed under the Creative Commons Attribution-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Lecture Outline
• Alternatives to full virtualisation
• Sandboxing processes: FreeBSD jails and Linux containers; container management
• Unikernels and library operating systems

Alternatives to Full Virtualisation
• Full operating system virtualisation is expensive: you need to run a hypervisor plus a complete guest operating system instance for each VM; high memory and storage overhead and high CPU load from running multiple OS instances on a single machine – but excellent isolation between VMs
• Running multiple services on a single OS instance can offer insufficient isolation: all must share the same OS, and a bug in a privileged service can easily compromise the entire system – all services running on the host, and any unrelated data
• A sandbox is desirable, to isolate multiple services running on a single operating system

Sandboxing: Jails and Containers
• Concept: lightweight virtualisation, with a single operating system kernel running multiple services
• Each service runs within a jail, a service-specific container running just what’s needed for that application (see the sketch below)
• The jail is restricted to see only a subset of the filesystem and processes of the host: a sandbox within the operating system, partially virtualised
[Diagram: the host filesystem tree (/bin, /dev, /etc, /home, /lib, /usr, /sbin, /tmp, /var, ...) with a /jail subtree containing its own /bin, /dev, /etc, /mail, /www, ... visible to the jailed service]

Example: FreeBSD Jail subsystem. Jails: confining the omnipotent root.
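The filesystem half of the jail idea can be sketched in a few lines on Linux. The hedged example below is not FreeBSD’s jail(2), which also confines processes, users, and networking; it only confines a service to a hypothetical /jail subtree with chroot(2) and must run as root, with /jail already populated with the binaries the service needs.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (chroot("/jail") != 0) { perror("chroot"); return 1; }
        if (chdir("/") != 0)      { perror("chdir");  return 1; }

        /* From here on, "/" resolves inside /jail; the rest of the host
           filesystem is unreachable by path. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }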
  • State Spill in Modern Operating Systems
A Characterization of State Spill in Modern Operating Systems
Kevin Boos, Emilio Del Vecchio, and Lin Zhong, Rice University

Abstract: Understanding and managing the propagation of states in operating systems has become an intractable problem due to their sheer size and complexity. Despite modularization efforts, it remains a significant barrier to many contemporary computing goals: process migration, fault isolation and tolerance, live update, software virtualization, and more. Though many previous OS research endeavors have achieved these goals through ad-hoc, tedious methods, we argue that they have missed the underlying reason why these goals are so challenging: state spill. State spill occurs when a software entity’s state undergoes lasting changes as a result of a transaction from another entity.

… states change and propagate throughout the system, showing that modularity is necessary but insufficient. For example, several process migration works have cited “residual dependencies” that remain on the source system as an obstacle to post-migration correctness on the target system [38, 55, 34]. Fault isolation and fault tolerance literature has long realized the undesirable effects of fate sharing among software modules as well as the difficulty of restoring a process or whole machine statefully after a failure, due to the sprawl of states spread among different system components [28, 26, 52, 51]. Live update and hot-swapping research has long sought to overcome the state maintenance issues and inter-module dependencies that plague their implementations when transitioning from an old to a new version, forcing developers to write complex state transfer functions [7, 25, 27, 48, 5].
  • In the Context of the CS3551 Spire Class Project
Unikernels: Library Operating Systems for the Cloud
Anil Madhavapeddy, Richard Mortier, Charalampos Rotsos, David Scott, Balraj Singh, Thomas Gazagnaire, Steven Smith, Steven Hand and Jon Crowcroft (ASPLOS ’13)
In the context of the CS3551 Spire class project. Brad Whitehead and Mike Boby, February 25, 2020

Teaching moment: what is a unikernel? To answer that question, we have to look at the structure of a modern operating system. It doesn’t matter whether it’s Microsoft Windows, Linux, UNIX, or Mac OS X/macOS: all the mainstream operating systems have the same fundamental anatomy.

The Anatomy of an Operating System
• “The kernel” runs in a special hardware mode (“Ring 0”); it is the only program that can access or allocate resources, and it copies data between programs
• “Userland” is where applications run: no privileges, cannot access resources directly, and cannot “talk” to other programs directly

Growth of Operating Systems
• The Linux kernel is now 28 million lines of source code! Windows is estimated at 50 million lines of code!!
• With an industry average of 15-50 defects per 1000 lines of code*: Linux (just the kernel) = 420 thousand to 1.4 million defects; Windows = 750 thousand to 2.5 million defects
* Steve McConnell, “Code Complete 2”, 2005

It Gets Worse
• The “userland” support software is often 10 to 20 times larger than the kernel
• Red Hat Enterprise Linux (RHEL) userland is approximately 420 million lines of code
• Try not to think about the 6.7 to 22 million defects running on the SCADA server controlling your power grid and drinking water

Can It Get Even Worse?
  • Library Operating Systems for the Cloud
Unikernels: Library Operating Systems for the Cloud
Anil Madhavapeddy, Richard Mortier(1), Charalampos Rotsos, David Scott(2), Balraj Singh, Thomas Gazagnaire(3), Steven Smith, Steven Hand and Jon Crowcroft
University of Cambridge, University of Nottingham(1), Citrix Systems Ltd(2), OCamlPro SAS(3)

Abstract: We present unikernels, a new approach to deploying cloud services via applications written in high-level source code. Unikernels are single-purpose appliances that are compile-time specialised into standalone kernels, and sealed against modification when deployed to a cloud platform. In return they offer significant reduction in image sizes, improved efficiency and security, and should reduce operational costs. Our Mirage prototype compiles OCaml code into specialised unikernels that run on commodity clouds and offer an order of magnitude reduction in code size without significant performance penalty. The architecture combines static type-safety with a single address-space layout that can be made immutable via a hypervisor extension. Mirage contributes a suite of type-safe protocol libraries, and our results demonstrate that the hypervisor is a platform that overcomes the hardware compatibility issues that have made past library operating systems impractical to deploy in the real world.

Figure 1: Contrasting software layers in existing VM appliances vs. unikernel’s standalone kernel compilation approach. [Labels from the figure: configuration files, application source code, and language runtime are compiled with whole-system optimisation by the Mirage compiler into a unikernel (application binary + Mirage runtime) on the hypervisor, versus application code, user processes, language runtime, and OS kernel on the hypervisor.]

Categories and Subject Descriptors: D.4 [Operating Systems]
  • Unikernels: the Next Stage of Linux's Dominance
Unikernels: The Next Stage of Linux’s Dominance
Ali Raza, Parul Sohal, James Cadden, Jonathan Appavoo, Orran Krieger, and Renato Mancuso (Boston University); Ulrich Drepper, Richard Jones, and Larry Woodman (Red Hat)

Abstract: Unikernels have demonstrated enormous advantages over Linux in many important domains, causing some to propose that the days of Linux’s dominance may be coming to an end. On the contrary, we believe that unikernels’ advantages represent the next natural evolution for Linux, as it can adopt the best ideas from the unikernel approach and, along with its battle-tested codebase and large open source community, continue to dominate. In this paper, we posit that an upstreamable unikernel target is achievable from the Linux kernel, and, through an early Linux unikernel prototype, demonstrate that some simple changes can bring dramatic performance advantages.

1 Introduction: Linux is the dominant OS across nearly every imaginable type of computer system today. Linux is running in billions of people’s pockets, in millions of our homes, in cars, in planes, in spaceships [15], embedded throughout our networks, and on each of the top 500 supercomputers [2]. To accomplish this, the Linux kernel has been continuously expanded to support a wide range of functionality; it is a hypervisor, a real-time OS, an SDN router, a container runtime, a BPF virtual machine, and a support layer for Emacs. When considering this ever-growing set of requirements, and the complexity creep which ensues [26], it leads to the question: Can a single …
  • KylinX: A Dynamic Library Operating System for Simplified and Efficient Cloud Virtualization
KylinX: A Dynamic Library Operating System for Simplified and Efficient Cloud Virtualization
Yiming Zhang (NiceX Lab, NUDT); Jon Crowcroft (University of Cambridge); Dongsheng Li and Chengfei Zhang (NUDT); Huiba Li (Alibaba); Yaozheng Wang and Kai Yu (NUDT); Yongqiang Xiong (Microsoft); Guihai Chen (SJTU)
https://www.usenix.org/conference/atc18/presentation/zhang-yiming
This paper is included in the Proceedings of the 2018 USENIX Annual Technical Conference (USENIX ATC ’18), July 11–13, 2018, Boston, MA, USA. ISBN 978-1-939133-02-1. Open access to the Proceedings of the 2018 USENIX Annual Technical Conference is sponsored by USENIX.

Abstract: Unikernel specializes a minimalistic LibOS and a target application into a standalone single-purpose virtual machine (VM) running on a hypervisor, which is referred to as a (virtual) appliance. Compared to traditional VMs, Unikernel appliances have smaller memory footprint and lower overhead while guaranteeing the same level of isolation. On the downside, Unikernel strips off the process abstraction from its monolithic appliance and thus sacrifices flexibility, efficiency, and applicability.

… applications, while the current general-purpose OSs contain extensive libraries and features for multi-user, multi-application scenarios. The mismatch between the single-purpose usage of appliances and the general-purpose design of traditional OSs induces performance and security penalty, making appliance-based services cumbersome to deploy and schedule [62, 52], inefficient to run [56], and vulnerable to bugs of unnecessary libraries [27]. This problem has recently motivated the design of Unikernel [56], a library operating system (LibOS) architecture …
  • Iso-UniK: Lightweight Multi-Process Unikernel Through Memory Protection Keys
Li et al. Cybersecurity (2020) 3:11, https://doi.org/10.1186/s42400-020-00051-9 (Research, Open Access)
Iso-UniK: lightweight multi-process unikernel through memory protection keys
Guanyu Li, Dong Du and Yubin Xia*

Abstract: Unikernel, specializing a minimalistic libOS with an application, is an attractive design for cloud computing. However, the Achilles’ heel of unikernel is the lack of multi-process support, which makes it less flexible and applicable. Many applications rely on the process abstraction to isolate different components. For example, Apache with the multi-processing module isolates a request handler in a process to guarantee security. Prior art tackles the problem by simulating multi-process with multiple unikernels, which is incompatible with existing cloud providers and also introduces high overhead. This paper proposes Iso-UniK, a new unikernel design enabling multi-task applications with the support of both functionality and isolation. Iso-UniK leverages a recent hardware feature, named Intel Memory Protection Key (Intel MPK), to provide lightweight and efficient isolation for multi-process in unikernel. Our design has three benefits compared with previous approaches. First, Iso-UniK does not need hypervisor support and is thus compatible with existing cloud computing platforms; second, Iso-UniK promises fast system calls with only 45 cycles; last, a process can be isolated with a flexible configuration. We have implemented a prototype based on OSv, a unikernel system supporting unmodified applications. Iso-UniK can achieve fast fork operation with only 66μs for multi-process applications. Our evaluation shows that the isolation and multi-process support in Iso-UniK will not damage the applications’ performance.
  • Architectures, Microkernels, IPC, Capabilities
Architectures, Microkernels, IPC, Capabilities
http://d3s.mff.cuni.cz/aosy, Jakub Jermář (Advanced Operating Systems, February 28th, 2019)

Agenda: kernel architectures, microkernels, IPC, capabilities

Recall: Common OS Taxonomy
• Special-purpose operating systems: real-time operating systems, hypervisors (type 1), ...
• General-purpose operating systems: monolithic kernel, single-server microkernel, multiserver microkernel, hybrid kernel (?)

Monolithic Kernel
[Diagram: applications run in unprivileged mode; the monolithic kernel runs in privileged mode on the hardware and contains memory management, device drivers, file systems, user management, the network stack, the scheduler, IPC, ...]

Some Obvious Issues
• Security: applications trust all kernel components, and kernel components trust all other kernel components
• Reliability: kernel components are a single point of failure
• Availability: kernel components cannot be updated independently
• Justifiability: who says file systems, networking, device drivers, etc. belong in the kernel?

Some Obvious Issues (2)
• Extensibility: how to extend the system without modifying the kernel
• Too many communication mechanisms (Unix: pipes, files, shared memory,