Resources Utilization and Allocation – Final Report

Total Pages: 16

File Type: pdf, Size: 1020 KB

Resources Utilization and Allocation – Final Report
Ref. Ares(2019)1458230 - 04/03/2019
Co-funded by the Horizon 2020 Framework Programme of the European Union

DELIVERABLE NUMBER: D5.9
DELIVERABLE TITLE: Resources Utilization and Allocation – Final Report
RESPONSIBLE AUTHOR: Scionti A. (ISMB)
GRANT AGREEMENT N.: 688386
PROJECT REF. NO: H2020-688386
PROJECT ACRONYM: OPERA
PROJECT FULL NAME: LOw Power Heterogeneous Architecture for Next Generation of SmaRt Infrastructure and Platform in Industrial and Societal Applications
STARTING DATE (DUR.): 01/12/2015
ENDING DATE: 30/11/2018
PROJECT WEBSITE: www.operaproject.eu
WORKPACKAGE N. | TITLE: WP5 | Optimized Workload Management on Heterogeneous Architecture
WORKPACKAGE LEADER: IBM
DELIVERABLE N. | TITLE: D5.9 | Resources Utilization and Allocation – Final Report
DATE OF DELIVERY (CONTRACTUAL): 30/09/2018 (M34)
DATE OF DELIVERY (SUBMITTED): 15/02/2019 (Resubmission)
VERSION | STATUS: V2.0 | Update
NATURE: R (Report)
DISSEMINATION LEVEL: PU (Public)
AUTHORS (PARTNER): Scionti A. (ISMB); Lubrano F. (ISMB); Nider J. (IBM); Rapoport M. (IBM); Gallig R. (HPE)

VERSION | MODIFICATION(S) | DATE | AUTHOR(S)
0.1 | Initial structure and TOC | 28/08/2018 | Scionti A. (ISMB), Lubrano F. (ISMB)
0.2 | First draft | 29/08/2018 | Scionti A. (ISMB), Lubrano F. (ISMB), Nider J. (IBM), Rapoport M. (IBM)
0.3 | Second draft | 17/09/2018 | Scionti A. (ISMB), Lubrano F. (ISMB), Nider J. (IBM), Rapoport M. (IBM)
0.4 | Received comments and feedback from HPE | 19/09/2018 | Scionti A. (ISMB), Lubrano F. (ISMB), Nider J. (IBM), Rapoport M. (IBM), Gallig R. (HPE)
0.5 | Addressed comments from HPE | 24/09/2018 | Scionti A. (ISMB), Lubrano F. (ISMB)
0.6 | Addressed comments from IBM | 26/09/2018 | Scionti A. (ISMB), Nider J. (IBM)
1.0 | Final version of the document | 26/09/2018 | Scionti A. (ISMB)
1.1 | Additional material covering reviewers' comments | 30/01/2019 | Scionti A. (ISMB)
1.2 | Adding contribution from IBM | 31/01/2019 | Nider J. (IBM), Scionti A. (ISMB), Lubrano F. (ISMB)
2.0 | Final review | 15/02/2019 | Giulio Urlini (ST)

PARTICIPANTS CONTACT
STMICROELECTRONICS SRL | Giulio Urlini, Email: [email protected]
IBM ISRAEL SCIENCE AND TECHNOLOGY LTD | Joel Nider, Email: [email protected]
HEWLETT PACKARD CENTRE DE COMPETENCES (FRANCE) | Renaud Gallig, Email: [email protected]
NALLATECH LTD | Craig Petrie, Email: [email protected]
ISTITUTO SUPERIORE MARIO BOELLA | Olivier Terzo, Email: [email protected]
TECHNION ISRAEL INSTITUTE OF TECHNOLOGY | Dan Tsafrir, Email: [email protected]
CSI PIEMONTE | Vittorio Vallero, Email: [email protected]
NEAVIA TECHNOLOGIES | Stéphane Gervais, Email: [email protected]
CERIOS GREEN BV | Frank Verhagen, Email: [email protected]
TESEO SPA | Stefano Serra, Email: [email protected]
DEPARTEMENT DE L'ISERE | Olivier Latouille, Email: [email protected]

ACRONYMS LIST
DC - Data Center
LOA - Local Optimization Algorithm
GOA - Global Optimization Algorithm
ECRAE - Efficient Cloud Resources Allocation Engine
DVFS - Dynamic Voltage Frequency Scaling
VM - Virtual Machine
LC - Linux Container
MILP - Mixed Integer Linear Programming
PSO - Particle Swarm Optimization
ES - Evolutionary Strategy
GA - Genetic Algorithm
ISA - Instruction Set Architecture
Table 1 - Acronym list.

LIST OF FIGURES
Figure 1 - Functional view of the DC orchestrator.
Figure 2 - OPERA orchestrator (ECRAE): logical overview of the system (see D7.7, D7.8 for OpenStack testbed installation).
Figure 3 - FPGA (Intel Arria10) linearized power model.
Figure 4 - General structure of population and the effect of discretization.
Figure 5 - Fitness trend (normalized to that of the First-Fit heuristic) for the PSO-based heuristic solving a small instance of the optimisation problem (lower is better).
Figure 6 - Flow chart representing the main steps of the proposed ES.
Figure 7 - Data structure for the proposed ES heuristic.
Figure 8 - Graphical representation of the TSWP operator.
Figure 9 - Graphical representation of the TFFC operator.
Figure 10 - Heuristic parallelization using the island model.
Figure 11 - Memory consumption trace for the small problem instance.
Figure 12 - Memory consumption trace for the large problem instance.
Figure 13 - Execution of the ES for the large problem instance.
Figure 14 - Overall power reduction of the DC (1000 hosts).

LIST OF TABLES
Table 1 - Acronym list.
Table 2 - List of variables and constants used in the workload optimization problem formulation.

EXECUTIVE SUMMARY
Work Package 5 (WP5) aims at providing a mechanism to efficiently deploy applications on data centre resources. The "efficiency" expected from the proposed mechanism concerns both the energy consumed by the hardware resources and the relative performance of the various tasks composing the applications running on the data center machines. Providing such a mechanism requires considering several elements that influence the runtime execution of such tasks. For instance, to mention a few, a model to evaluate the best mapping of tasks to specific host architectures, a mechanism to select the best host (among several with the same architecture) on which to run the tasks, and a way of optimizing the whole data center workload must all be integrated. Among the objectives, the aim of task T5.4 is to design a software component capable of deploying application tasks on the most energy-efficient hosts using an energy-aware policy, as well as to provide a way to interact with the other components of a Cloud orchestrator (e.g., OpenStack modules). Also, this software component periodically (dynamically) optimizes the whole data center workload, i.e., the optimization process consolidates the workload in such a way as to reduce energy (power) consumption. In addition, interaction with the other tasks carried out in this WP is necessary to ensure the effectiveness of the whole resource allocation mechanism.

Position of the deliverable in the whole project context
The activities carried out within task T5.4, whose main results are described in this document, are framed in WP5 – Optimized Workload Management on Heterogeneous Architecture. Specifically, this deliverable provides an overview of the results of the research activity and investigation done by the OPERA partners, aimed at developing a software system (ECRAE) to be integrated into the OpenStack orchestration toolchain, which is responsible for mapping cloud application components (Virtual Machines – VMs, and Linux Containers – LCs) onto heterogeneous data center resources. The deliverable is connected with the work done in WP5, specifically with the activities reported in D5.8, D5.7 and D5.6, as well as the earlier deliverables D5.4, D5.3 and D5.1. Regarding the connection with D5.8: this deliverable specifies the algorithms employed by the software system, while D5.8 provides an insight into the integration with the OpenStack framework, with a focus on the mechanism used to describe the application structure and all its components. D5.6 provides the analysis of the TOSCA application descriptor format used by ECRAE to allocate each application component on the most suitable hardware, with a focus on three applications that have been successfully ported to the OPERA platform (ECRAE + OpenStack) and which are described using TOSCA descriptors.
Deliverable D5.7 contains a detailed description of the simulation results which validate the ECRAE algorithms. This document is also connected with other deliverables.
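As an illustration of the kind of energy-aware placement policy sketched above (explicitly not the actual ECRAE algorithm, which is specified in the report itself), the following minimal C snippet picks, among the hosts with enough spare capacity, the one whose estimated power draw grows least when a task is added. The host names, the linear power model and all power figures are hypothetical.

```c
/* Illustrative energy-aware placement: choose the feasible host whose
 * power draw increases least for a new task. Hosts, figures and the
 * linear power model are hypothetical simplifications, not OPERA's. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    const char *name;
    double idle_w;    /* idle power draw (W)                */
    double peak_w;    /* power draw at full utilization (W) */
    double used_cpu;  /* allocated CPU fraction, 0.0 .. 1.0 */
} host_t;

/* Power increase if 'delta' CPU fraction is added (linear model). */
static double power_delta(const host_t *h, double delta) {
    return (h->peak_w - h->idle_w) * delta;
}

static host_t *pick_host(host_t *hosts, size_t n, double task_cpu) {
    host_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (hosts[i].used_cpu + task_cpu > 1.0)
            continue;                            /* not enough capacity */
        if (!best || power_delta(&hosts[i], task_cpu) <
                     power_delta(best, task_cpu))
            best = &hosts[i];
    }
    return best;
}

int main(void) {
    host_t dc[] = {
        { "x86-node",  120.0, 350.0, 0.50 },
        { "arm-node",   40.0, 110.0, 0.20 },
        { "fpga-node",  25.0,  60.0, 0.00 },
    };
    host_t *h = pick_host(dc, 3, 0.30);
    if (h) {
        printf("deploy on %s (+%.1f W)\n", h->name, power_delta(h, 0.30));
        h->used_cpu += 0.30;    /* commit the allocation */
    }
    return 0;
}
```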
Recommended publications
  • Adding Generic Process Containers to the Linux Kernel
    Adding Generic Process Containers to the Linux Kernel. Paul B. Menage, Google, Inc. [email protected]

    Abstract: While Linux provides copious monitoring and control options for individual processes, it has less support for applying the same operations efficiently to related groups of processes. This has led to multiple proposals for subtly different mechanisms for process aggregation for resource control and isolation. Even though some of these efforts could conceptually operate well together, merging each of them in their current states would lead to duplication in core kernel data structures/routines. The Containers framework, based on the existing cpusets mechanism, provides the generic process grouping features required by the various different resource controllers and other process-affecting subsystems. The result is to reduce the code (and kernel impact) required for such subsystems, and provide a common interface with greater scope for co-operation. This paper looks at the challenges in meeting the needs of all the stakeholders, which include low overhead, ...

    Technically, resource control and isolation are related; both prevent a process from having unrestricted access to even the standard abstraction of resources provided by the Unix kernel. For the purposes of this paper, we use the following definitions. Resource control is any mechanism which can do either or both of: tracking how much of a resource is being consumed by a set of processes; imposing quantitative limits on that consumption, either absolutely, or just in times of contention. Typically resource control is visible to the processes being controlled. Namespace isolation is a mechanism which adds an additional indirection or translation layer to the naming/visibility of some unix resource space (such as process ids, or network interfaces) for a specific set of processes.
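    The "tracking" and "limiting" halves of this definition map directly onto files exposed by the cgroup filesystem that grew out of this framework. Below is a minimal C sketch against the modern cgroup v2 interface (a later descendant of the framework the paper proposes, not the 2007 code); it assumes a v2 hierarchy mounted at /sys/fs/cgroup, root privileges, the memory controller enabled, and the group name "demo" is made up for the example.

    ```c
    /* Sketch: imposing a quantitative memory limit on a group of processes
     * via the cgroup v2 filesystem. Assumptions: v2 hierarchy mounted at
     * /sys/fs/cgroup, memory controller enabled, run as root. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/stat.h>

    static void write_file(const char *path, const char *val) {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(1); }
        fputs(val, f);
        fclose(f);
    }

    int main(void) {
        char buf[32];

        mkdir("/sys/fs/cgroup/demo", 0755);                  /* create group */
        write_file("/sys/fs/cgroup/demo/memory.max",
                   "104857600\n");                           /* 100 MiB cap  */
        snprintf(buf, sizeof buf, "%d\n", getpid());
        write_file("/sys/fs/cgroup/demo/cgroup.procs", buf); /* join group   */

        /* This process and its children are now tracked via memory.current
         * and limited by memory.max of the "demo" group. */
        return 0;
    }
    ```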
  • Ubuntu Server Guide Basic Installation Preparing to Install
    Ubuntu Server Guide. Welcome to the Ubuntu Server Guide! This site includes information on using Ubuntu Server for the latest LTS release, Ubuntu 20.04 LTS (Focal Fossa). For an offline version as well as versions for previous releases, see below. Improving the Documentation: If you find any errors or have suggestions for improvements to pages, please use the link at the bottom of each topic titled "Help improve this document in the forum." This link will take you to the Server Discourse forum for the specific page you are viewing. There you can share your comments or let us know about bugs with any page. PDFs and Previous Releases: Below are links to the previous Ubuntu Server release server guides as well as an offline copy of the current version of this site: Ubuntu 20.04 LTS (Focal Fossa): PDF; Ubuntu 18.04 LTS (Bionic Beaver): Web and PDF; Ubuntu 16.04 LTS (Xenial Xerus): Web and PDF. Support: There are a couple of different ways that the Ubuntu Server edition is supported: commercial support and community support. The main commercial support (and development funding) is available from Canonical, Ltd. They supply reasonably-priced support contracts on a per-desktop or per-server basis. For more information see the Ubuntu Advantage page. Community support is also provided by dedicated individuals and companies that wish to make Ubuntu the best distribution possible. Support is provided through multiple mailing lists, IRC channels, forums, blogs, wikis, etc. The large amount of information available can be overwhelming, but a good search engine query can usually provide an answer to your questions.
  • Performance Comparison of Linux Containers (LXC) and OpenVZ During Live Migration
    Thesis no: MSCS-2016-14. Performance comparison of Linux containers (LXC) and OpenVZ during live migration: an experiment. Pavan Sutha Varma Indukuri, Faculty of Computing, Blekinge Institute of Technology, SE-371 79 Karlskrona, Sweden. University advisor: Sogand Shirinbab, Department of Computer Science and Engineering. This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. The thesis is equivalent to 20 weeks of full-time studies.

    ABSTRACT. Context: Cloud computing is one of the most widely used technologies all over the world that provides numerous products and IT services. Virtualization is one of the innovative technologies in cloud computing which has the advantages of improved resource utilization and management. Live migration is an innovative feature of virtualization that allows a virtual machine or container to be transferred from one physical server to another. Live migration is a complex process which can have a significant impact on cloud computing when used by cloud-based software. Objectives: In this study, live migration of LXC and OpenVZ containers has been performed. The performance of LXC and OpenVZ was then compared in terms of total migration time and downtime. Furthermore, CPU utilization, disk utilization and the average load of the servers were also evaluated during the live migration process.
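    The two headline metrics can be approximated from outside the migrating container with a simple probe loop: total migration time is reported by the migration tool itself, while downtime shows up as the window in which the service stops answering. The following C sketch is a hedged illustration, not the thesis' measurement tooling; the target address 10.0.0.42 and port 22 are placeholders.

    ```c
    /* Sketch: estimating service downtime during live migration by probing
     * a TCP port and timing the unreachable window. Target is hypothetical. */
    #include <stdio.h>
    #include <unistd.h>
    #include <time.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    static int probe(const char *ip, int port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { .sin_family = AF_INET,
                                 .sin_port   = htons(port) };
        struct timeval tv = { 0, 200000 };          /* 200 ms timeout */
        inet_pton(AF_INET, ip, &a.sin_addr);
        setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv);
        int up = connect(fd, (struct sockaddr *)&a, sizeof a) == 0;
        close(fd);
        return up;
    }

    int main(void) {
        struct timespec down_start = {0}, now;
        int was_up = 1;
        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &now);
            int up = probe("10.0.0.42", 22);        /* placeholder target */
            if (!up && was_up)
                down_start = now;                   /* outage begins      */
            else if (up && !was_up)
                printf("downtime: %.3f s\n",
                       (now.tv_sec  - down_start.tv_sec) +
                       (now.tv_nsec - down_start.tv_nsec) / 1e9);
            was_up = up;
            usleep(50000);                          /* ~20 probes/second  */
        }
    }
    ```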
  • Ubuntu Core 16 - Security
    Ubuntu Core 16 - Security Whitepaper. Canonical Ubuntu Engineering and Services. Version: 2.0.0~rc9.

    Contents: Abstract; Problem; Description; Background: classic Ubuntu (trust model: distro; archive integrity; upgrades; software in classic Ubuntu: trusted by the OS; challenges for the distro model; staying up to date; boot security; system, application and network security; logging; clock synchronization; data encryption; Trusted Platform Module); Snappy for Ubuntu Core (trust model: snaps and the store; store policies; staying up to date; system update process; app, gadget and vendor kernel update process; boot security; system security; system services; users; app services and user commands; application security; app snaps; gadget snaps; security policy ID; application launch; Snappy FHS (filesystem hierarchy standard); Snappy security technologies overview: traditional permissions, AppArmor, Seccomp, Namespaces, Control Groups (cgroups), devpts newinstance, Ubuntu hardening; snap security policy; default policy; devmode; interfaces; Ubuntu Core interfaces; Ubuntu Classic interfaces; snap interfaces; interfaces in practice; snap packaging example; access to hardware devices; network security; network interfaces; logging; clock synchronization; data encryption; Trusted Platform Module); Solution (developer velocity and control; safe, reliable updates for all devices and images; security provides assurances; continued flexibility); Conclusion.

    Abstract: Ubuntu Core is an important revolutionary step for Ubuntu. While it builds upon Linux traditions, Snappy for Ubuntu Core sharply focuses on predictability, reliability and security while at the same time enabling developer freedom and control.

    Problem: The Linux distribution model is an established and well understood model for computing systems: start with a Linux kernel, add some low-level software to install, manage and use applications, and then add some default applications, with additional applications installable by other means, usually over the internet via a distribution software archive.
  • Practical LXC and LXD
    Practical LXC and LXD: Linux Containers for Virtualization and Orchestration. Senthil Kumaran S., Chennai, Tamil Nadu, India. ISBN-13 (pbk): 978-1-4842-3023-7; ISBN-13 (electronic): 978-1-4842-3024-4; DOI 10.1007/978-1-4842-3024-4. Library of Congress Control Number: 2017953000. Copyright © 2017 by Senthil Kumaran S. Any source code or other supplementary material referenced by the author in this book is available to readers on GitHub via the book's product page, located at www.apress.com/978-1-4842-3023-7. For more detailed information, please visit http://www.apress.com/source-code.

    Contents at a Glance: Chapter 1: Introduction to Linux Containers; Chapter 2: Installation; Chapter 3: Getting Started with LXC and LXD; Chapter 4: LXC and LXD Resources; Chapter 5: Common Virtualization and Orchestration Tools; Chapter 6: Use Cases; Chapter 7: Containers and Security; Index.
  • Container Technologies on IBM Z and LinuxONE and Their Orchestration
    Container technologies on IBM Z and LinuxONE and their Orchestration. Wilhelm Mild, IBM Executive IT Architect for Mobile, IBM Z and Linux, IBM R&D Lab, Germany.

    Agenda: container technologies and ecosystem; container orchestration with Kubernetes.

    IBM Z virtualization options and containers. Server virtualization: there are typically dozens or hundreds of Linux servers in an LPAR, virtualized using z/VM or KVM. Application isolation: there are typically thousands of containers in Linux on IBM Z. [Slide diagram of the IBM Z virtualization stack: LPAR1-LPAR4 managed by PR/SM or DPM, running z/OS, z/TPF, z/VSE or Linux, with z/VM, KVM, zCX and the Secure Service Container (SSC); P1-P8 are Central Processor Units (CPU -> core) or Integrated Facility for Linux processors (IFL -> core), drawn from one shared pool of cores per system.]

    Linux Containers are based on control groups and namespaces for isolation. The goal was to offer a Linux distro- and vendor-neutral environment for the development of Linux container technologies. To simplify: "cgroups" allocate and control the resources in your container (CPU, memory, disk I/O throughput), while "namespaces" isolate process IDs, hostnames, user IDs, network access, interprocess communication and filesystems.
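    The "namespaces isolate hostnames" point can be demonstrated in a few lines. A sketch using unshare(2), assuming root (CAP_SYS_ADMIN); the name "container-1" is arbitrary.

    ```c
    /* Sketch: UTS namespace isolation. After unshare(CLONE_NEWUTS), the
     * hostname change is private to this process; the host keeps its own.
     * Requires CAP_SYS_ADMIN (run as root). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        char name[64];

        if (unshare(CLONE_NEWUTS) != 0) {   /* private UTS namespace */
            perror("unshare");
            return 1;
        }
        sethostname("container-1", strlen("container-1"));
        gethostname(name, sizeof name);
        printf("inside namespace: %s\n", name);
        return 0;
    }
    ```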
  • OS-Level Virtualization
    OS-Level Virtualization. CS-423 Project, Spring 2016. Team members: Vikas Jain (vjain11), Vibha Goyal (vgoyal5), Nitin Kundapur Bhat (nbhat4).

    What are containers? Server virtualization: multiple isolated user-space instances (instead of one) on a single kernel of the operating system, separating one user instance from another. Also called software containers, or jails.

    How are they different from VMs? [Slide diagrams: VM1-VM3 on a hypervisor above the hardware, versus containers C1-C3 sharing one operating system.] Virtual machines abstract the hardware; containers abstract the operating system. VMs support different guest operating systems and are heterogeneous (high versatility), while containers share a single operating system and are homogeneous. VMs have low density and carry overhead because they are not lightweight; containers have high density and less overhead because they are lightweight.

    Advantages over whole-system virtualization. As all containers use the same OS underneath, this leads to the following benefits: they use less CPU and memory for running the same workloads as virtual machines; the time to initiate a container is smaller than for VMs; and on one machine we can have many more user containers (~100) than user VMs (~10).

    Limitations. Less flexibility: a container cannot host a guest OS different from the host, or a different guest kernel. Security: as containers run on top of the same OS, security issues can arise from adjacent containers.

    Use cases. Virtual hosting environments, where OS-level virtualization is useful for securely allocating finite hardware resources amongst a large number of mutually-distrusting users. Implementations capable of live migration can also be used for dynamic load balancing of containers between nodes in a cluster.
  • Linux Containers Basic Concepts
    Linux Containers: Basic Concepts. Lucian Carata, FRESCO Talklet, 3 Oct 2014.

    Underlying kernel mechanisms: cgroups manage resources for groups of processes; namespaces provide per-process resource isolation; seccomp limits the available system calls; capabilities limit the available privileges; CRIU implements checkpoint/restore (with kernel support).

    cgroups, user-space view: a low-level filesystem interface similar to sysfs (/sys) and procfs (/proc); a new filesystem type "cgroup", with default location in /sys/fs/cgroup. cgroup hierarchies are built from subsystems (controllers) - cpu, cpuacct, cpuset, memory, hugetbl, devices, blkio, net_cls, net_prio, freezer, perf - and each subsystem can be used at most once. Each mounted hierarchy exposes common control files (tasks, cgroup.procs, release_agent, notify_on_release, cgroup.clone_children, cgroup.sane_behavior) alongside per-controller files such as cpu.stat, cpu.shares, cpu.cfs_period_us, cpu.cfs_quota_us, cpu.rt_period_us, cpu.rt_runtime_us, cpuacct.stat, cpuacct.usage and cpuacct.usage_percpu.

    cgroups, kernel-space view: [slides walk through include/linux/cgroup.h and include/linux/cgroup_subsys.h, the kernel code for attaching/detaching a task from a css_set, and the list of all tasks using the same cgroups.]
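    Of the mechanisms listed above, seccomp is the easiest to show in isolation. The sketch below uses the original strict mode, which allows only read, write, _exit and sigreturn; container runtimes use the more flexible BPF filter mode instead.

    ```c
    /* Sketch: seccomp strict mode - "limit available system calls".
     * After the prctl() call, any syscall outside {read, write, _exit,
     * sigreturn} kills the process with SIGKILL. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <linux/seccomp.h>

    int main(void) {
        printf("before seccomp\n");
        fflush(stdout);                    /* flush while still unrestricted */

        prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);

        write(1, "write still allowed\n", 20);   /* permitted            */
        getpid();                                /* any other syscall -> */
        write(1, "never reached\n", 14);         /* SIGKILL              */
        return 0;
    }
    ```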
  • Linux Containers and the Future Cloud
    Linux Containers and the Future Cloud. Rami Rosen, [email protected], http://ramirose.wix.com/ramirosen

    About me: Rami Rosen, a Linux kernel expert, author of a recently published book, "Linux Kernel Networking - Implementation and Theory", 648 pages, Apress, 2014.

    Table of contents: Namespaces (namespaces implementation, UTS namespace, network namespaces, PID namespaces); Cgroups (cgroups and kernel namespaces, cgroups VFS, devices cgroup controller); Linux containers (LXC management); Docker (Dockerfile); CRIU.

    General: lightweight process virtualization is not new - Solaris Zones, BSD jails, AIX WPARs (Workload Partitions), Linux-based container projects. Why now? Primarily because kernel support is now available (namespaces and cgroups), as of kernel 3.8 (released in February 2013).

    Scope and agenda: this lecture is a sequel to a lecture given in May 2013 in Haifux, "Resource Management in Linux" (http://www.haifux.org/lectures/299/). Agenda: namespaces and cgroups, the Linux container building blocks; Linux-based containers (focusing on the LXC open-source project, its implementation and some hands-on examples); Docker, an engine for creating and deploying containers; CRIU (checkpoint/restore in userspace). Scope: we will not deal in depth with the security aspects of containers, nor with containers in Android.

    Namespaces: there are currently 6 namespaces: mnt (mount points, filesystems), pid (processes), net (network stack), ipc (System V IPC), uts (hostname) and user (UIDs). Eric Biederman talked about 10 namespaces at OLS 2006; a security namespace and a security-keys namespace will probably not be developed (replaced by user namespaces); also mentioned: a time namespace.
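    Membership in those six namespaces is visible per process under /proc/<pid>/ns; two processes share a namespace exactly when the symlinks there resolve to the same inode. A small C sketch that enumerates them:

    ```c
    /* Sketch: print this process' namespace identities, e.g.
     * "net -> net:[4026531956]". */
    #include <stdio.h>
    #include <unistd.h>
    #include <limits.h>

    int main(void) {
        const char *ns[] = { "mnt", "pid", "net", "ipc", "uts", "user" };
        char path[64], target[PATH_MAX];

        for (int i = 0; i < 6; i++) {
            snprintf(path, sizeof path, "/proc/self/ns/%s", ns[i]);
            ssize_t n = readlink(path, target, sizeof target - 1);
            if (n < 0)
                continue;        /* namespace not supported on this kernel */
            target[n] = '\0';
            printf("%-4s -> %s\n", ns[i], target);
        }
        return 0;
    }
    ```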
  • LXCF: Tools for Dividing Host OS Into Container with Libvirt-LXC
    LXCF: Tools for Dividing the Host OS into Containers with libvirt-LXC. May 20, 2014. Yasunori Goto / Hideyuki Niwa, Fujitsu Limited.

    Who are we? Yasunori Goto (speaker) leads the KVM and LXC development team at Fujitsu; he developed Linux memory hotplug with the community until 2008, then joined the customer support team for several years to analyze users' troubles with the Linux kernel, and returned to the Linux development team last year. Hideyuki Niwa is the author of LXCF; he developed a UNIX OS for a supercomputer with a vector processor, then joined the Linux division, where he still works.

    Agenda: basics of LXC; why we are developing LXCF; design of LXCF; how to use it.

    Basics of LXC. Linux Container is an OS-level virtualization method. It provides multiple isolated environments on one host, so a user can execute many services on that host. There is no need to emulate hardware as a virtual machine does, so performance is better than with a virtual machine. [Slide diagram: applications on middleware and a guest OS inside virtual machines above a hypervisor, versus applications in containers directly on the host OS.]

    User-land tools of LXC. To use Linux Container, a user-land tool is necessary: the kernel provides only the system calls to manage namespaces (clone(), unshare(), and setns()) and the interfaces for resource control (cgroup). There are many user-land tool sets: LXC (the userland toolset simply called "LXC"), a well-developed tool for using the Linux Container feature; libvirt-lxc, which provides the core Linux Container feature via libvirt; Docker, "the hottest", combining Linux Containers with git-style image management, e.g. for immutable systems; and others such as lmctfy and systemd-nspawn. Note: in this presentation the name "LXC" has 2 meanings: 1.
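    Of the three syscalls the slides name, setns(2) is the one user-land tools build on to "enter" an existing container. A sketch under stated assumptions: run as root, and PID 4242 is a hypothetical container init process.

    ```c
    /* Sketch: joining an existing network namespace with setns(2), then
     * exec'ing "ip link" to list the devices visible inside it.
     * PID 4242 is hypothetical; requires root. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/proc/4242/ns/net", O_RDONLY); /* target namespace */
        if (fd < 0) { perror("open"); return 1; }

        if (setns(fd, CLONE_NEWNET) != 0) {           /* join it          */
            perror("setns");
            return 1;
        }
        close(fd);

        /* Now sees the container's network devices, not the host's. */
        execlp("ip", "ip", "link", "show", (char *)NULL);
        perror("execlp");
        return 1;
    }
    ```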
  • An Updated Performance Comparison of Virtual Machines and Linux Containers
    RC25482 (AUS1407-001), July 21, 2014. Computer Science. IBM Research Report: An Updated Performance Comparison of Virtual Machines and Linux Containers. Wes Felter, Alexandre Ferreira, Ram Rajamony, Juan Rubio. IBM Research Division, Austin Research Laboratory, 11501 Burnet Road, Austin, TX 78758 USA. wmf, apferrei, rajamony, [email protected]

    LIMITED DISTRIBUTION NOTICE: This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties). Many reports are available at http://domino.watson.ibm.com/library/CyberDig.nsf/home.

    Abstract: Cloud computing makes extensive use of virtual machines (VMs) because they permit workloads to be isolated from one another and for the resource usage to be somewhat controlled. However, the extra levels of abstraction involved in virtualization reduce workload performance, which is passed on to customers as worse price/performance.

    Within the last two years, Docker [45] has emerged as a standard runtime, image format, and build system for Linux containers. This paper looks at two different ways of achieving re-
  • MultiLanes: Providing Virtualized Storage for OS-Level Virtualization on Many Cores
    MultiLanes: Providing Virtualized Storage for OS-level Virtualization on Many Cores. Junbin Kang, Benlong Zhang, Tianyu Wo, Chunming Hu, and Jinpeng Huai, Beihang University, Beijing, China. {kangjb, woty, hucm}@act.buaa.edu.cn, [email protected], [email protected]. https://www.usenix.org/conference/fast14/technical-sessions/presentation/kang

    This paper is included in the Proceedings of the 12th USENIX Conference on File and Storage Technologies (FAST '14), February 17-20, 2014, Santa Clara, CA, USA. ISBN 978-1-931971-08-9.

    Abstract: OS-level virtualization is an efficient method for server consolidation. However, the sharing of kernel services among the co-located virtualized environments (VEs) incurs performance interference between each other. Especially, interference effects within the shared I/O stack would lead to severe performance degradations on many-core platforms incorporating fast storage technologies (e.g., non-volatile memories). This paper presents MultiLanes, a virtualized storage ...

    ... high efficiency [32]. Previous work on OS-level virtualization mainly focuses on how to efficiently space-partition or time-multiplex the hardware resources (e.g., CPU, memory and disk). However, the advent of non-volatile memory technologies (e.g., NAND flash, phase change memories and memristors) creates challenges for system software. Specially, emerging fast storage devices built with non-volatile memories deliver low access latency and enormous data bandwidth, thus enabling a high degree of application-level parallelism [16, 31].