UNICORE: a Toolkit to Automatically Build Unikernels
Pages: 16
File type: PDF, size: 1020 KB
Recommended publications
- UNICORE OPTIMIZATION (William Jalby)
Lecture slides by William Jalby, LRC ITA@CA (CEA DAM / University of Versailles St-Quentin-en-Yvelines), France, on single-core ("unicore") performance optimization. The outline covers the key unicore performance limitations (excluding caches), multimedia extensions, and compiler optimizations. The talk surveys the abstraction layers in modern systems, from applications and algorithms/libraries down through programming languages, compilers/interpreters, operating systems/virtual machines, the instruction set architecture (ISA), microarchitecture, gates/register-transfer level, circuits, and devices, noting that computer architecture has shifted from the ISA (its original domain in the '50s-'80s) to the microarchitecture (the '90s). The key issue is understanding the relationship and interaction between architecture, microarchitecture, and applications/algorithms, taking the intermediate layers (and the lowest layers) into account; the key technology is performance measurement and analysis, which the slides call an overlooked issue: the hardware view offers a mechanism description and a few portions of code where it works well (a positive view), the compiler view offers aggregate performance numbers (SPEC) with little correlation to hardware, and programmers lack guidelines for writing efficient code. Uniprocessor performance trends (from Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th edition, October 2006): VAX improved 25%/year from 1978 to 1986; RISC + x86 improved 52%/year from 1986 to 2002; the rate from 2002 to the present is left as an open question. (Unicore performance trends credited to Mikko Lipasti, University of Wisconsin; source: Intel.) ...
- Intra-Unikernel Isolation with Intel Memory Protection Keys
Mincheol Sung (Virginia Tech, USA), Pierre Olivier (The University of Manchester, United Kingdom), Stefan Lankes (RWTH Aachen University, Germany), and Binoy Ravindran (Virginia Tech, USA). In 16th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '20), March 17, 2020, Lausanne, Switzerland. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3381052.3381326
Abstract: Unikernels are minimal, single-purpose virtual machines. This new operating system model promises numerous benefits within many application domains in terms of lightweightness, performance, and security. Although the isolation between unikernels is generally recognized as strong, there is no isolation within a unikernel itself. This is due to the use of a single, unprotected address space, a basic principle of unikernels that provides their lightweightness and performance benefits. In this paper, we propose a new design that brings memory isolation inside a unikernel instance while keeping a single address space. We leverage Intel's Memory Protection Keys. ...
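To make the primitive concrete: the sketch below is a minimal, userspace-only illustration of Intel MPK on Linux, not the paper's unikernel design. It assumes a CPU with protection-key support and glibc 2.27 or newer, which provide the pkey_alloc/pkey_mprotect/pkey_set wrappers. A page is tagged with a key whose write access starts out disabled, then write access is re-enabled for the calling thread.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        /* Allocate a protection key; writes through it start out disabled. */
        int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);
        if (pkey < 0) { perror("pkey_alloc"); return 1; }

        /* Map a page and tag it with the key. */
        char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }
        if (pkey_mprotect(page, 4096, PROT_READ | PROT_WRITE, pkey) != 0) {
            perror("pkey_mprotect"); return 1;
        }

        printf("read ok: %d\n", page[0]);   /* reads are still allowed      */
        pkey_set(pkey, 0);                  /* lift the per-thread write    */
                                            /* restriction (updates PKRU)   */
        page[0] = 42;                       /* now permitted                */

        pkey_free(pkey);
        return 0;
    }

Because the access rights live in a per-thread register (PKRU) rather than in the page tables, flipping them is cheap, which is what makes MPK attractive for isolation inside a single address space.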
- A Linux in Unikernel Clothing (Lupine)
Presentation by Hsuan-Chi Kuo (University of Illinois at Urbana-Champaign), Dan Williams (IBM Research), Ricardo Koller (IBM Research), and Sibin Mohan (UIUC). Unikernels are great, offering small kernel size, fast boot time, improved performance, and better security where the traditional app-plus-kernel stack over a hypervisor is heavy and inefficient, BUT they lack full Linux support: HermiTux supports only 97 system calls; OSv does not support fork() or execve(), special files such as /proc, or a complete signal mechanism; Rumprun offers only 37 curated applications, and its community is too small to keep it rolling. Can Linux behave like a unikernel? Lupine Linux builds on Kernel Mode Linux (KML), which enables a normal user process to run in kernel mode while still using system services such as paging and scheduling, so the app calls kernel routines directly without privilege-transition costs (see the sketch after this entry). A minimal patch to libc replaces the syscall instruction with a call; the address of the called function is exported by the patched KML kernel via the vsyscall page, so no application changes or recompilation are required. The evaluation metrics are based on the claimed unikernel benefits: boot time, image size, memory footprint, application performance, and syscall overhead. On configuration diversity: for the 20 top apps on Docker Hub (83% of all downloads), only 19 kernel configuration options are required to run all 20 applications (lupine-general). The comparison configurations include Lupine (lupine-base plus app-specific options), OSv and other cloud operating systems, and Linux-based unikernels. ...
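As a hedged illustration of the KML call shape described above (this is an invented stand-in, not Lupine's actual code; kernel_entry is hypothetical, representing the address the patched KML kernel exports via the vsyscall page):

    #include <sys/syscall.h>
    #include <sys/types.h>

    /* Sketch: under KML the process already runs in kernel mode, so a
     * patched libc can replace the trapping `syscall` instruction with
     * a plain call through a function pointer, avoiding the privilege
     * transition entirely. */
    typedef long (*kernel_entry_t)(long nr, long a1, long a2, long a3);
    static kernel_entry_t kernel_entry;  /* resolved via vsyscall in real KML */

    static long direct_syscall(long nr, long a1, long a2, long a3) {
        return kernel_entry(nr, a1, a2, a3);  /* no trap, no mode switch */
    }

    static ssize_t direct_write(int fd, const void *buf, size_t len) {
        return direct_syscall(SYS_write, fd, (long)buf, (long)len);
    }

The point of the design is that the system-call interface is unchanged at the source level, so applications need no changes beyond the small libc patch.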
- Unikernel Monitors: Extending Minimalism Outside of the Box
Dan Williams and Ricardo Koller, IBM T.J. Watson Research Center.
Abstract: Recently, unikernels have emerged as an exploration of minimalist software stacks to improve the security of applications in the cloud. In this paper, we propose extending the notion of minimalism beyond an individual virtual machine to include the underlying monitor and the interface it exposes. We propose unikernel monitors. Each unikernel is bundled with a tiny, specialized monitor that only contains what the unikernel needs, both in terms of interface and implementation. Unikernel monitors improve isolation through minimal interfaces, reduce complexity, and boot unikernels quickly. Our initial prototype, ukvm, is less than 5% the code size of a traditional monitor, and boots MirageOS unikernels in as little as 10 ms (8x faster than a traditional monitor).
[Figure 1: The unit of execution in the cloud as (a) a unikernel, built from only what it needs, running on a VM abstraction; or (b) a unikernel running on a specialized unikernel monitor implementing only what the unikernel needs.]
1 Introduction: Minimal software stacks are changing the way we think about assembling applications for the cloud. A minimal amount of software implies a reduced attack surface and a better understanding of the system, leading to increased security. Is the interface between the application (unikernel) and the rest of the system, as defined by the virtual hardware abstraction, minimal? Can application dependencies be tracked through the interface and even define a minimal virtual machine monitor (or in this case a unikernel monitor) for the application, thus producing a maximally isolated, minimal exe...
- Computer Architecture Techniques for Power-Efficiency
Front matter of the Morgan & Claypool book by Stefanos Kaxiras and Margaret Martonosi (2008), a volume in Synthesis Lectures on Computer Architecture, edited by Mark D. Hill (University of Wisconsin, Madison). The series publishes 50-to-150-page works on topics pertaining to the science and art of designing, analyzing, selecting, and interconnecting hardware components to create computers that meet functional, performance, and cost goals. Other volumes listed: Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency (Kunle Olukotun, Lance Hammond, James Laudon, 2007); Transactional Memory (James R. Larus, Ravi Rajwar, 2007); Quantum Computing for Computer Architects (Tzvetan S. Metodi, Frederic T. Chong, 2006). Copyright 2008 by Morgan & Claypool, all rights reserved. ISBN 9781598292084 (paper), 9781598292091 (ebook); DOI 10.2200/S00119ED1V01Y200805CAC004. A publication in the Morgan & Claypool Publishers ...
- Installation Guide for UNICORE Server Components
UNICORE Team, July 2015, UNICORE version 7.3.0. Contents: 1. Introduction (purpose and target audience of this document; overview of the UNICORE servers and some terminology; overview of this document). 2. Installation of core services for a single site (basic scenarios; preparation; installation; security settings; installation of the Perl TSI and TSI-related configuration of the UNICORE/X server; the connections between the UNICORE components). 3. Operation of a UNICORE installation (starting; stopping; monitoring; user management; testing your installation). 4. Integration of another target system (configuration of the UNICORE/X service; configuration of the Target System Interface; addition of users to the XUUDB; additions to the gateway). 5. Multi-site installation options (multiple registries; ...). ...
- Intel's High-Performance Computing Technologies
Presentation by Dr. Herbert Cornelius (Advanced Computing Center, Intel EMEA) at the 11th ECMWF Workshop on the Use of High Performance Computing in Meteorology, Reading, UK, 26 October 2004. HPC continues to change: a history of HPC systems from the 1970s to the 2000s shows processors and memory moving from proprietary parts to commercial off-the-shelf (COTS, industry standard) parts in the 1990s, motherboards and interconnects following in the 2000s, and operating systems and software tools becoming mixed in the 2000s. Examples of high-performance computing on Intel architecture from the June 2004 Top500 list (sources: http://www.top500.org/lists/2004/06/2/ and http://www.top500.org/lists/2004/06/5/): a 4096-processor (1024x4) Intel Itanium 2 based system with 22.9 TFLOPS peak performance and a 2500-processor (1250x2) Intel Xeon based system with 15.3 TFLOPS peak performance, at PNNL and RIKEN respectively; a 1936-processor Intel Itanium 2 cluster at 11.6/8.6 TFLOPS Rpeak/Rmax and a 2048-processor Intel Xeon cluster at 12.5/8.7 TFLOPS Rpeak/Rmax. (Other brands and names are the property of their respective owners.) ...
- Implementing Production Grids (William E. Johnston)
William E. Johnston, the NASA IPG Engineering Team, and the DOE Science Grid Team. Contents: 1. Introduction: lessons learned for building large-scale grids. 2. The grid context. 3. The anticipated grid usage model will determine what gets deployed, and when: 3.1 Grid computing models (3.1.1 export existing services; 3.1.2 loosely coupled processes; 3.1.3 workflow-managed processes; 3.1.4 distributed-pipelined / coupled processes; 3.1.5 tightly coupled processes); 3.2 Grid data models (3.2.1 occasional access to multiple tertiary storage systems; 3.2.2 distributed analysis of massive datasets followed by cataloguing and archiving; ...). ...
- Assessing Unikernel Security (NCC Group whitepaper, April 2, 2019, version 1.0)
Prepared by Spencer Michaels and Jeff Dileo.
Abstract: Unikernels are small, specialized, single-address-space machine images constructed by treating component applications and drivers like libraries and compiling them, along with a kernel and a thin OS layer, into a single binary blob. Proponents of unikernels claim that their smaller codebase and lack of excess services make them more efficient and secure than full-OS virtual machines and containers. We surveyed two major unikernels, Rumprun and IncludeOS, and found that this was decidedly not the case: unikernels, which in many ways resemble embedded systems, appear to have a similarly minimal level of security. Features like ASLR, W^X, stack canaries, heap integrity checks and more are either completely absent or seriously flawed. If an application running on such a system contains a memory corruption vulnerability, it is often possible for attackers to gain code execution, even in cases where the application's source and binary are unknown. Furthermore, because the application and the kernel run together as a single process, an attacker who compromises a unikernel can immediately exploit functionality that would require privilege escalation on a regular OS, e.g. arbitrary packet I/O. We demonstrate such attacks on both Rumprun and IncludeOS unikernels, and recommend measures to mitigate them.
Table of contents: 1. Introduction to unikernels. 2. Threat model considerations (2.1 unikernel capabilities; 2.2 unikernels versus containers). 3. Hypothesis. 4. Testing methodology (4.1 ASLR; 4.2 page protections; ...). ...
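To illustrate one of the mitigations the paper finds missing: the sketch below shows, conceptually, what a compiler-inserted stack canary does. Real implementations (e.g. GCC's -fstack-protector) use a random per-process guard value and insert the check automatically in the function prologue/epilogue; the manual layout here is only illustrative, since actual stack layout is compiler-dependent.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for the random per-process guard value the C runtime
     * would normally initialize at program startup. */
    static uint64_t stack_guard = 0x5af3c0dedeadbeefULL;

    void copy_input(const char *input) {
        uint64_t canary = stack_guard;   /* prologue: guard on the stack  */
        char buf[16];

        strcpy(buf, input);              /* unsafe copy: can overflow buf */

        if (canary != stack_guard) {     /* epilogue: check before return */
            fprintf(stderr, "*** stack smashing detected ***\n");
            abort();                     /* never return over a smashed frame */
        }
    }

    int main(void) {
        copy_input("fits in buf");       /* short input: check passes */
        return 0;
    }

The paper's finding is that protections of exactly this kind are absent or broken in the surveyed unikernels, so a single overflow can hand an attacker code execution, and, because application and kernel share one process, kernel-level capabilities with it.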
- Lupine Linux: “A Linux in Unikernel Clothing” (talk)
Dan Williams (IBM), with Hsuan-Chi (Austin) Kuo (UIUC and IBM), Ricardo Koller (IBM), and Sibin Mohan (UIUC). Roadmap: context (containers and isolation; unikernels; Nabla containers), Lupine Linux (a Linux in unikernel clothing), and concluding thoughts. Containers are great: they have changed how applications are packaged, deployed, and developed; they are normal processes, but "contained" via namespaces, cgroups, and chroot (see the sketch after this entry); they are lightweight, start quickly, run close to "bare metal", offer easy image management (layered filesystems), and have a rich tooling and orchestration ecosystem. But their high level of abstraction (system calls) exposes a large attack surface to the host, which limits adoption of container-first architectures. Fortunately, we know how to reduce attack surface by deprivileging and unsharing kernel functionality: virtual machines give each app a guest kernel (e.g. Linux) and a monitor process (e.g. QEMU) above a thin, low-level interface (virtual hardware) to the host kernel/hypervisor (e.g. Linux/KVM), optionally with a userspace kernel, though with attendant performance issues. ...
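As a small, hedged illustration of the "contained normal process" point (Linux-specific; requires root or CAP_SYS_ADMIN), the sketch below moves the calling process into its own UTS namespace, one of the namespace types that containers combine with cgroups and chroot:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/utsname.h>
    #include <unistd.h>

    int main(void) {
        /* Give this process a private UTS namespace (hostname/NIS domain). */
        if (unshare(CLONE_NEWUTS) != 0) { perror("unshare"); return 1; }

        /* Changes here are invisible to every process outside the namespace. */
        if (sethostname("contained", 9) != 0) { perror("sethostname"); return 1; }

        struct utsname uts;
        uname(&uts);
        printf("hostname inside the namespace: %s\n", uts.nodename);
        return 0;
    }

A full container runtime does the same unsharing for PID, mount, network, IPC, and user namespaces and adds cgroups for resource limits; the attack-surface concern in the slides is that all of this still runs against the shared host kernel's full system-call interface.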
- UNICORE D2.3 Platform Requirements - Final
H2020-ICT-2018-2-825377, UNICORE: A Common Code Base and Toolkit for Deployment of Applications to Secure and Reliable Virtual Execution Environments (Horizon 2020 Research and Innovation Framework Programme). Deliverable D2.3, "Platform Requirements - Final". Due date of deliverable: 30 September 2019; actual submission date: 30 September 2019. Start date of project: 1 January 2019; duration: 36 months. Lead contractor for this deliverable: Accelleran NV. Version 1.0, confidentiality status "Public".
Abstract: This is the final version of the UNICORE "Platform Requirements - Final" (D2.3) document. The original version (D2.1 Requirements) was published in April 2019; the differences between the two versions are detailed in the executive summary. The goal of the EU-funded UNICORE project is to develop a common code base and toolchain that will enable software developers to rapidly create secure, portable, scalable, high-performance solutions starting from existing applications. The key to this is to compile an application into a very lightweight virtual machine, known as a unikernel, where there is no traditional operating system, only the specific bits of operating-system functionality that the application needs. The resulting unikernels can then be deployed and run on standard high-volume servers or cloud computing infrastructure. The technology developed by the project will be evaluated in a number of trials spanning several application domains. This document describes the current state of the art in those application domains from the perspective of the project partners whose businesses encompass those domains; it then describes the specific target scenarios that will be used to evaluate the technology within each application domain, and how the success of each trial will be judged. ...
- DC DMV Communication Related to Reinstating Suspended Driver Licenses and Driving Privileges (as of December 10, 2018)
In accordance with District Law L22-0175, the Traffic and Parking Ticket Penalty Amendment Act of 2017, the DC Department of Motor Vehicles (DC DMV) has reinstated driver licenses and driving privileges for residents and non-residents whose credential was suspended for one of the following reasons: failure to pay a moving violation; failure to pay a moving violation after being found liable at a hearing; or failure to appear for a hearing on a moving violation. DC DMV is mailing notification letters to residents and non-residents affected by the law. District residents who have their driver license or learner permit (including a commercial driver license, CDL) reinstated and who have two or more outstanding tickets are boot eligible. If a District resident has an unpaid moving violation in a different jurisdiction, his or her driving privileges may still be suspended in that jurisdiction until the moving violation is paid. If the resident's driver license or CDL is not REAL ID compliant (i.e., there is no black star in the upper right-hand corner) and has expired, then to renew the credential the resident will need to provide DC DMV with one proof of identity, one proof of Social Security number, and two proofs of DC residency. If the resident has had a name change, additional documentation such as a marriage license, divorce order, or name-change court order is required. DC DMV only accepts the documents listed on its website at www.dmv.dc.gov.