Library Operating Systems for the Cloud
Recommended publications
UvA-DARE (Digital Academic Repository)
Citation (APA): van Tol, M. W. (2013). On the construction of operating systems for the Microgrid many-core architecture. UvA-DARE is a service provided by the library of the University of Amsterdam (https://dare.uva.nl).
High Performance Computing Systems
Multikernels. Doug Shook.

Multikernels
– Two predominant approaches to OS: the full-weight kernel and the lightweight kernel (LWK). Why not both?
– How does the implementation affect usage and performance?
– Gerofi et al., "A Multi-Kernel Survey for High-Performance Computing," 2016.

FusedOS
– Assumes a heterogeneous architecture: Linux runs on the full cores, and the LWK requests resources from Linux to run programs.
– Uses CNK as its LWK.

IHK/McKernel
– Uses an Interface for Heterogeneous Kernels (IHK) for resource allocation and communication.
– McKernel is the LWK and is only operable with IHK; it uses proxy processes.

mOS
– Embeds the LWK into the Linux kernel; the LWK is visible to Linux just like any other process.
– Resource allocation is performed by the sysadmin/user.

FFMK
– Uses the L4 microkernel (what is a microkernel?).
– Also uses a para-virtualized Linux instance (what is paravirtualization?).

Hobbes
– Pisces node manager, Kitten LWK, Palacios virtual machine monitor.

Sysadmin criteria
– Is the LWK standalone? Which kernel is booted by the BIOS? How and when are nodes partitioned?

Application criteria
– What is the level of POSIX support in the LWK? What is the pseudo file system support?
– How does an application access Linux functionality? What is the system call overhead? (A measurement sketch follows below.)
– Can the LWK and Linux share memory? Can a single process span Linux and the LWK? Does the LWK support NUMA?

Linux criteria
– Are LWK processes visible to standard tools like ps and top? Are modifications to the Linux kernel necessary? Do Linux kernel changes propagate to the LWK?
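The "system call overhead" criterion above is typically probed with a microbenchmark. The following is a minimal sketch of such a measurement, assuming a POSIX-like environment with clock_gettime() and syscall(); it illustrates the criterion and is not a benchmark from the survey.

    /* Time a trivial syscall (getpid via syscall(2), bypassing any libc caching)
     * to estimate per-syscall overhead on the kernel under test. */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    #define ITER 1000000

    static long now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000L + ts.tv_nsec;
    }

    int main(void) {
        long start = now_ns();
        for (int i = 0; i < ITER; i++)
            syscall(SYS_getpid);            /* enters the kernel on every iteration */
        long elapsed = now_ns() - start;
        printf("average syscall latency: %ld ns\n", elapsed / ITER);
        return 0;
    }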
Intra-Unikernel Isolation with Intel Memory Protection Keys
Mincheol Sung (Virginia Tech, USA), Pierre Olivier (The University of Manchester, United Kingdom), Stefan Lankes (RWTH Aachen University, Germany), Binoy Ravindran (Virginia Tech, USA)

Abstract. Unikernels are minimal, single-purpose virtual machines. This new operating system model promises numerous benefits within many application domains in terms of lightweightness, performance, and security. Although the isolation between unikernels is generally recognized as strong, there is no isolation within a unikernel itself. This is due to the use of a single, unprotected address space, a basic principle of unikernels that provide their lightweightness and performance benefits. In this paper, we propose a new design that brings memory isolation inside a unikernel instance while keeping a single address space. We leverage Intel's Memory […]

ACM Reference Format: Mincheol Sung, Pierre Olivier, Stefan Lankes, and Binoy Ravindran. 2020. Intra-Unikernel Isolation with Intel Memory Protection Keys. In 16th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '20), March 17, 2020, Lausanne, Switzerland. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3381052.3381326

1 Introduction. Unikernels have gained attention in the academic research community, offering multiple benefits in terms of improved performance, increased security, reduced costs, etc. As a result, […]
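The primitives the paper builds on can be exercised from ordinary user space on Linux. The sketch below assumes an MPK-capable CPU, Linux 4.9+ and glibc 2.27+ for the pkey_* wrappers; it only demonstrates pkey_alloc/pkey_mprotect/pkey_set and is not the paper's in-unikernel design.

    /* Minimal user-space illustration of Intel MPK: tag a page with a protection
     * key, then revoke and restore access for the calling thread. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096;
        char *secret = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        int pkey = pkey_alloc(0, 0);            /* allocate a key, no restrictions yet */
        if (pkey < 0) { perror("pkey_alloc"); return 1; }

        pkey_mprotect(secret, len, PROT_READ | PROT_WRITE, pkey);  /* tag the page */
        strcpy(secret, "isolated data");

        /* Revoke access for this thread: code running without the key can no
         * longer touch the page, even though it shares the address space. */
        pkey_set(pkey, PKEY_DISABLE_ACCESS);
        /* Dereferencing secret here would fault; restore access before reading. */
        pkey_set(pkey, 0);
        printf("%s\n", secret);

        pkey_free(pkey);
        return 0;
    }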
A Linux in Unikernel Clothing (Lupine)
Hsuan-Chi Kuo+, Dan Williams*, Ricardo Koller*, and Sibin Mohan+ (+University of Illinois at Urbana-Champaign, *IBM Research)

Unikernels are great, BUT they lack full Linux support:
– HermiTux supports only 97 system calls.
– OSv: fork() and execve() are not supported; special files such as /proc are not supported; the signal mechanism is not complete.
– Rumprun: only 37 curated applications; the community is too small to keep it rolling.
Compared with a full stack (app / kernel / hypervisor), the unikernel stack (LibOS + app / hypervisor) trades a heavy, inefficient guest for a small kernel size, fast boot time, improved performance, and better security.

Can Linux behave like a unikernel?

Lupine Linux
– Kernel Mode Linux (KML): enables a normal user process to run in kernel mode; processes can still use system services such as paging and scheduling; the application calls kernel routines directly, without privilege-transition costs.
– Minimal patch to libc: replace the syscall instruction with a call; the address of the called function is exported by the patched KML kernel using the vsyscall; no application changes or recompilation required. (An illustrative sketch of this dispatch change follows below.)

Evaluation metrics, based on the unikernel benefits: boot time, image size, memory footprint, application performance, syscall overhead.

Configuration diversity: the 20 top apps on Docker Hub (83% of all downloads); only 19 configuration options are required to run all 20 applications (lupine-general).

Evaluation, comparison configurations: Lupine (Lupine-base + app-specific options), lupine-general, cloud operating systems (OSv), Linux-based unikernels, a kernel configured for the 20 apps […]
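The libc patch described above cannot be shown without a patched KML kernel, so the sketch below only simulates the dispatch change: kml_entry and vsyscall_entry are hypothetical stand-ins for the routine and the exported address the patched kernel would provide, while patched_syscall() shows a syscall instruction being replaced by a plain call.

    /* Illustrative sketch (not Lupine's actual patch): with KML, user code already
     * runs in kernel mode, so the syscall instruction can be replaced by a direct
     * call through an entry point exported via the vsyscall mechanism. */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Stand-in for the in-kernel routine reached via the exported address. */
    static long kml_entry(long nr, long a0, long a1, long a2) {
        (void)nr; (void)a0; (void)a1; (void)a2;
        return (long)getpid();               /* pretend this is the kernel handler */
    }

    /* In real KML this pointer would be filled from the exported vsyscall address. */
    static long (*vsyscall_entry)(long, long, long, long) = kml_entry;

    static long patched_syscall(long nr, long a0, long a1, long a2) {
        return vsyscall_entry(nr, a0, a1, a2);   /* a plain call instead of `syscall` */
    }

    int main(void) {
        printf("pid via patched dispatch: %ld\n",
               patched_syscall(SYS_getpid, 0, 0, 0));
        return 0;
    }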
Unikernel Monitors: Extending Minimalism Outside of the Box
Dan Williams and Ricardo Koller, IBM T.J. Watson Research Center

Abstract. Recently, unikernels have emerged as an exploration of minimalist software stacks to improve the security of applications in the cloud. In this paper, we propose extending the notion of minimalism beyond an individual virtual machine to include the underlying monitor and the interface it exposes. We propose unikernel monitors. Each unikernel is bundled with a tiny, specialized monitor that only contains what the unikernel needs both in terms of interface and implementation. Unikernel monitors improve isolation through minimal interfaces, reduce complexity, and boot unikernels quickly. Our initial prototype, ukvm, is less than 5% the code size of a traditional monitor, and boots MirageOS unikernels in as little as 10ms (8× faster than a traditional monitor).

Figure 1: The unit of execution in the cloud as (a) a unikernel, built from only what it needs, running on a VM abstraction; or (b) a unikernel running on a specialized unikernel monitor implementing only what the unikernel needs.

1 Introduction. Minimal software stacks are changing the way we think about assembling applications for the cloud. A minimal amount of software implies a reduced attack surface and a better understanding of the system, leading to increased security. […] the application (unikernel) and the rest of the system, as defined by the virtual hardware abstraction, minimal? Can application dependencies be tracked through the interface and even define a minimal virtual machine monitor (or in this case a unikernel monitor) for the application, thus producing a maximally isolated, minimal execution […]
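A unikernel monitor of the kind described here is, at its core, a small KVM control loop. The skeleton below is a sketch of that loop and not ukvm's actual source; guest memory setup, register initialization, and the unikernel-specific hypercall handlers are elided.

    /* Skeleton of the KVM control loop a minimal, ukvm-style monitor is built
     * around. Guest memory/register setup and the hypercall handlers are omitted. */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int kvm  = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        int vm   = ioctl(kvm, KVM_CREATE_VM, 0);
        /* ... KVM_SET_USER_MEMORY_REGION: load the unikernel image here ... */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
        /* ... set guest registers to the unikernel entry point ... */

        int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
        struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu, 0);

        for (;;) {                               /* the monitor is essentially this loop */
            ioctl(vcpu, KVM_RUN, 0);
            switch (run->exit_reason) {
            case KVM_EXIT_IO:
                /* A ukvm-style monitor decodes a small, unikernel-specific set of
                 * hypercalls here (e.g. console output, network I/O) and nothing else. */
                break;
            case KVM_EXIT_HLT:
                return 0;                        /* unikernel requested shutdown */
            default:
                fprintf(stderr, "unhandled exit reason %d\n", run->exit_reason);
                return 1;
            }
        }
    }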
W4118: Multikernel
Instructor: Junfeng Yang. References: Modern Operating Systems (3rd edition), Operating System Concepts (8th edition), previous W4118 offerings, and OS courses at MIT, Stanford, and UWisc.

Motivation: sharing is expensive
– It is difficult to parallelize single-address-space kernels with shared data structures: locks and atomic instructions limit scalability, and every shared data structure must be made scalable (partitioned data, lock-free data structures, …), which is a tremendous amount of engineering.
– Root cause: it is expensive to move cache lines, and the interconnect becomes congested.

Multikernel: explicit sharing via messages
– A multicore chip is a distributed system! No shared data structures; send messages to a core to access its data.
– Advantages: scalable; a good match for heterogeneous cores; a good match if future chips don't provide cache-coherent shared memory.

Challenge: global state
– The OS must manage global state, e.g., the page table of a process.
– Solution: replicate the global state. Reads use the local copy (low latency); updates modify the local copy and run a distributed protocol to update the remote copies, done asynchronously ("split phase").

Barrelfish overview (Figure 5 in the paper)
– CPU driver: the kernel-mode part, one per core; inter-processor interrupts (IPI).
– Monitors: provide the OS abstractions; user-level RPC (URPC).

IPC through shared memory (the slide's code, with the side-by-side send()/recv() columns separated; a runnable elaboration follows after this entry):

    #define CACHELINE (64)

    struct box {
        char buf[CACHELINE-1]; // message contents
        char flag;             // high bit == 0 means sender owns it
    };

    struct box m __attribute__ ((aligned (CACHELINE)));

    send()
    {
        // set up message
        memcpy(m.buf, …);
        m.flag |= 0x80;
    }

    recv()
    {
        while (!(m.flag & 0x80))
            ;
        … m.buf …  // process message
    }

Case study: TLB shootdown
– When is it necessary?
– Windows and Linux send IPIs.
– Barrelfish sends messages to the involved monitors: 1 broadcast message, N-1 invalidates, N-1 fetches. […]
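The slide's mailbox leaves out memory-ordering details and any surrounding driver code. The version below is a runnable elaboration and an assumption beyond the slide: it uses C11 release/acquire atomics for the flag and two pthreads standing in for the two cores.

    /* Runnable version of the one-slot mailbox above. Build with: cc -pthread mailbox.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <string.h>

    #define CACHELINE 64

    struct box {
        char buf[CACHELINE - 1];      /* message contents */
        atomic_uchar flag;            /* high bit == 0 means sender owns it */
    };

    static struct box m __attribute__((aligned(CACHELINE)));

    static void *sender(void *arg) {
        (void)arg;
        strcpy(m.buf, "hello from core 0");                 /* set up message */
        atomic_store_explicit(&m.flag, 0x80, memory_order_release);
        return NULL;
    }

    static void *receiver(void *arg) {
        (void)arg;
        while (!(atomic_load_explicit(&m.flag, memory_order_acquire) & 0x80))
            ;                                               /* spin until the high bit is set */
        printf("received: %s\n", m.buf);                    /* process message */
        return NULL;
    }

    int main(void) {
        pthread_t s, r;
        pthread_create(&r, NULL, receiver, NULL);
        pthread_create(&s, NULL, sender, NULL);
        pthread_join(s, NULL);
        pthread_join(r, NULL);
        return 0;
    }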
Multiprocessor Operating Systems CS 6410: Advanced Systems
Kai Mast, Department of Computer Science, Cornell University. September 4, 2014.
Outline: Introduction, Multikernel, Tornado, Conclusion, Discussion, Outlook, References.

Let us recall: multiprocessor vs. multicore (figures contrasting a multiprocessor and a multicore system [10]).

Let us recall: message passing vs. shared memory
– Shared memory: threads/processes access the same memory region; communication happens via changes in variables; often easier to implement.
– Message passing: threads/processes don't have shared memory; communication happens via messages/events; easier to distribute between different processors; more robust than shared memory. (A small example follows below.)

Let us recall, miscellaneous: cache coherence, inter-process communication, remote procedure call, preemptive vs. cooperative multitasking, non-uniform memory access (NUMA).

Current systems are diverse: different architectures (x86, ARM, …); different scales (desktop, server, embedded, mobile, …); different processors (GPU, CPU, ASIC, …); multiple cores and/or multiple processors; multiple operating systems on a system (firmware, […]
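As a small counterpart to the shared-memory mailbox shown in the previous entry, the example below (an illustration, not from these slides) passes a message between two processes that share no variables, using a pipe.

    /* Message passing between parent and child over a pipe: no shared variables. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        pipe(fd);                                  /* fd[0]: read end, fd[1]: write end */

        if (fork() == 0) {                         /* child: the receiver */
            char msg[64] = {0};
            close(fd[1]);
            read(fd[0], msg, sizeof(msg) - 1);
            printf("child received: %s\n", msg);
            return 0;
        }

        close(fd[0]);                              /* parent: the sender */
        const char *msg = "hello over a message channel";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        wait(NULL);
        return 0;
    }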
Assessing Unikernel Security
NCC Group Whitepaper, April 2, 2019 – Version 1.0. Prepared by Spencer Michaels and Jeff Dileo.

Abstract. Unikernels are small, specialized, single-address-space machine images constructed by treating component applications and drivers like libraries and compiling them, along with a kernel and a thin OS layer, into a single binary blob. Proponents of unikernels claim that their smaller codebase and lack of excess services make them more efficient and secure than full-OS virtual machines and containers. We surveyed two major unikernels, Rumprun and IncludeOS, and found that this was decidedly not the case: unikernels, which in many ways resemble embedded systems, appear to have a similarly minimal level of security. Features like ASLR, W^X, stack canaries, heap integrity checks and more are either completely absent or seriously flawed. If an application running on such a system contains a memory corruption vulnerability, it is often possible for attackers to gain code execution, even in cases where the application's source and binary are unknown. Furthermore, because the application and the kernel run together as a single process, an attacker who compromises a unikernel can immediately exploit functionality that would require privilege escalation on a regular OS, e.g. arbitrary packet I/O. We demonstrate such attacks on both Rumprun and IncludeOS unikernels, and recommend measures to mitigate them.

Table of Contents
1 Introduction to Unikernels
2 Threat Model Considerations
  2.1 Unikernel Capabilities
  2.2 Unikernels Versus Containers
3 Hypothesis
4 Testing Methodology
  4.1 ASLR
  4.2 Page Protections […]
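The ASLR finding can be checked on any target with a trivial probe like the one below (an illustration, not the whitepaper's test harness): run the program several times and compare the printed addresses; if they never change between runs, the platform is not randomizing its address-space layout.

    /* Print representative stack, heap and code addresses; constant values across
     * runs indicate the absence of ASLR. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int on_stack;
        void *on_heap = malloc(16);

        printf("stack: %p  heap: %p  code: %p\n",
               (void *)&on_stack, on_heap, (void *)main);
        free(on_heap);
        return 0;
    }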
“A Linux in Unikernel Clothing”
Lupine Linux: “A Linux in Unikernel Clothing”. Dan Williams (IBM), with Hsuan-Chi (Austin) Kuo (UIUC + IBM), Ricardo Koller (IBM), and Sibin Mohan (UIUC).

Roadmap: context (containers and isolation, unikernels, Nabla containers); Lupine Linux, a Linux in unikernel clothing; concluding thoughts.

Containers are great!
– They have changed how applications are packaged, deployed, and developed.
– Normal processes, but "contained": namespaces, cgroups, chroot. (A minimal namespace sketch follows this entry.)
– Lightweight: they start quickly, close to "bare metal"; easy image management (layered filesystems); a rich tooling and orchestration ecosystem.

But…
– Containers present a large attack surface to the host, because they talk to it through a high level of abstraction (the system-call interface). This limits adoption of container-first architectures.
– Fortunately, we know how to reduce attack surface!

Deprivileging and unsharing kernel functionality
– Virtual machines (VMs): the app and a guest kernel (e.g., Linux) run over a monitor process (e.g., QEMU) on the host kernel/hypervisor (e.g., Linux/KVM). The guest sees a thin, low-level interface (virtual hardware), in contrast to a container's high-level system-call interface to a namespaced host kernel (e.g., Linux).
– Userspace kernels: performance issues. […]
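The "normal process, but contained" bullet above can be made concrete with Linux namespaces. The program below is an illustration and not a real container runtime: it puts a shell into fresh UTS and PID namespaces, must run as root, and real runtimes add mount namespaces, cgroups, chroot/pivot_root, and more.

    /* Create new UTS + PID namespaces with unshare(2), then run a shell inside them. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        if (unshare(CLONE_NEWUTS | CLONE_NEWPID) != 0) {
            perror("unshare (root required)");
            return 1;
        }

        pid_t child = fork();                 /* first child becomes PID 1 in the new namespace */
        if (child == 0) {
            sethostname("contained", 9);      /* only visible inside the new UTS namespace */
            execlp("sh", "sh", "-c", "hostname; echo \"my pid: $$\"", (char *)NULL);
            perror("execlp");
            return 1;
        }
        waitpid(child, NULL, 0);
        return 0;
    }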
Design and Implementation of the Heterogeneous Multikernel Operating System
Yauhen Klimiankou, Department of Computer Systems and Networks, Belarusian State University of Informatics and Radioelectronics, Belarus

Abstract. Computer system design has changed significantly due to the emergence and popularization of multicore processors. The main trends in computer systems development are the move to advanced multicore processors, the move to heterogeneous computer systems, and an increasing level of integration between system components. These significant changes make it reasonable to revisit operating system design so that it is optimal for the new hardware platform. The proposed operating system design moves from a monolithic, centralized operating system to a decentralized network of distributed independent nodes, each of which plays the role of a processor driver and thread container. The proposed design provides a number of benefits over ordinary operating systems: dynamics in space and time, an improved level of reliability and flexibility, and support for heterogeneous computer systems.

Keywords. Multiprocessor computer system, heterogeneous computer system, real-time system.

Introduction. Computer systems design is changing much faster than operating systems design. The internal architecture of a modern computer resembles a distributed network system consisting of a mix of processor cores, caches, internal communications, I/O devices and expansion cards. The modern computer is similar to the early parallel computer systems or multiprocessor systems of the last century. Multi-core computer systems, which are essentially multi-processor systems localized on the processor die, occupy an increasingly strong position in most segments of the computer market. […]
On the Performance and Isolation of Asymmetric Microkernel Design for Lightweight Manycores
Pedro Henrique Penna, João Souto, Davidson Lima, Márcio Castro, François Broquedis, Henrique Freitas, Jean-François Mehaut.

To cite this version: Pedro Henrique Penna, João Souto, Davidson Lima, Márcio Castro, François Broquedis, et al. On the Performance and Isolation of Asymmetric Microkernel Design for Lightweight Manycores. SBESC 2019 - IX Brazilian Symposium on Computing Systems Engineering, Nov 2019, Natal, Brazil. pp. 1-31. hal-02297637v3.

HAL Id: hal-02297637 (https://hal.archives-ouvertes.fr/hal-02297637v3), submitted on 30 Mar 2020. HAL is a multi-disciplinary open-access archive for the deposit and dissemination of scientific research documents, whether they are published or not.
Jails and Unikernels
Virtualisation: Jails and Unikernels. Advanced Operating Systems, Lecture 18. Colin Perkins (https://csperkins.org/), Copyright © 2017, licensed under the Creative Commons Attribution-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nd/4.0/).

Lecture outline
– Alternatives to full virtualisation
– Sandboxing processes: FreeBSD jails and Linux containers
– Container management
– Unikernels and library operating systems

Alternatives to full virtualisation
– Full operating system virtualisation is expensive: it needs a hypervisor plus a complete guest operating system instance for each VM. That means high memory and storage overhead and high CPU load from running multiple OS instances on a single machine, but excellent isolation between VMs.
– Running multiple services on a single OS instance can offer insufficient isolation: all services must share the same OS, and a bug in a privileged service can easily compromise the entire system, i.e. all services running on the host and any unrelated data.
– A sandbox is desirable, to isolate multiple services running on a single operating system.

Sandboxing: jails and containers
– Concept: lightweight virtualisation. A single operating system kernel runs multiple services, and each service runs within a jail: a service-specific container running just what is needed for that application.
– A jail is restricted to see only a subset of the filesystem and processes of the host (the slide sketches a service confined to a /jail/www subtree with its own /bin, /dev, /etc, /home, /lib, /usr, /sbin, /tmp, and /var). It is a sandbox within the operating system, partially virtualised. (A filesystem-confinement sketch follows this entry.)

Example: the FreeBSD Jail subsystem. "Jails: Confining the omnipotent root."
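The filesystem-confinement half of the jail idea can be sketched with chroot(2), as below. This is only an illustration: FreeBSD's actual jail(2)/jail_set(2) interface additionally restricts process visibility, networking, and the powers of root inside the jail. The /jail/www path mirrors the slide's example layout, and /sbin/service is a hypothetical placeholder for the confined service binary.

    /* Confine a service to a subtree with chroot(2), then drop root (run as root). */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *root = "/jail/www";       /* service-specific subtree from the slide */

        if (chroot(root) != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        /* Drop root before running the service so it cannot escape the chroot. */
        if (setgid(65534) != 0 || setuid(65534) != 0) {   /* 65534: "nobody" on many systems */
            perror("drop privileges");
            return 1;
        }
        execl("/sbin/service", "service", (char *)NULL);  /* hypothetical service binary */
        perror("execl");
        return 1;
    }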