Linux Containers and the Future Cloud
PDF, 16 pages, 1020 KB
Recommended publications
- Improving Route Scalability: Nexthops as Separate Objects
September 2019. David Ahern, Cumulus Networks. Slide deck. Agenda: executive summary ("if you remember nothing else about this talk …"), the driving use case, a review of the legacy route API, a dive into the nexthop API, and the benefits of the new API. With the legacy route API, every route carries its own copy of prefix/length, device, and gateway, so common next-hop data is duplicated across routes. Splitting next hops from routes lets each route reference a separate nexthop object by id, and nexthop groups let one route fan out to N nexthops. This dramatically improves route scalability, with the potential for constant insert times. The deck closes with how a network operating system uses Linux APIs: a routing daemon or utility such as FRR manages entries in the kernel FIBs via the rtnetlink API, which also enables other control-plane software to use Linux networking APIs for data-path connections, statistics, and troubleshooting. Management of hardware offload is separate and keeps the switch ASIC in sync with the kernel through FIB notifications, either via a userspace driver built on a vendor SDK that leverages kernel notifications, or via an in-kernel switchdev driver that leverages …
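The nexthop objects described in the talk are exposed through iproute2. A minimal sketch of the pattern, assuming iproute2 and kernel 5.3 or later and root privileges (the addresses and prefixes are illustrative):

```python
import subprocess

def ip(args: str) -> None:
    # Thin wrapper over the iproute2 CLI; raises on failure.
    subprocess.run(["ip"] + args.split(), check=True)

# Create one shared nexthop object instead of per-route gateway/device data.
ip("nexthop add id 1 via 192.0.2.1 dev eth0")

# Many routes reference the same nexthop by id. Replacing nexthop 1 later
# (e.g. after a link failure) updates one object, not every route.
for i in range(4):
    ip(f"route add 198.51.100.{i * 64}/26 nhid 1")
```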
- Checkpoint and Restoration of Micro-service in Docker Containers
3rd International Conference on Mechatronics and Industrial Informatics (ICMII 2015). Chen Yang, School of Information Security Engineering, Shanghai Jiao Tong University, China 200240. [email protected]
Keywords: lightweight virtualization, checkpoint/restore, Docker.
Abstract. In these days of rapid micro-service adoption, it is imperative to build systems that support and ensure high performance and high availability for micro-services. Lightweight virtualization, also called containerization, can run multiple isolated sets of processes under a single kernel instance. Because containers can achieve low overhead comparable to the near-native performance of a bare server, container technologies such as OpenVZ, LXC, and Docker are widely used for micro-services [1]. In this paper, we address the high availability of micro-services in containers. We investigate the capabilities provided by containers (Docker, OpenVZ) to model and build the micro-service infrastructure, and compare different checkpoint and restore technologies for high availability. Finally, we present preliminary performance results of the infrastructure tuned to micro-services.
Introduction. Lightweight virtualization, also known as operating-system-level virtualization, partitions a physical machine's resources, creating multiple isolated user-space instances. Each container acts like a stand-alone server: it can be rebooted independently and has root access, users, an IP address, memory, processes, files, and so on. Unlike traditional virtualization with a hypervisor layer, containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary for containerization, including Linux with OpenVZ, VServer and, more recently, LXC; Solaris with Zones; and FreeBSD with Jails [2].
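As a rough sketch of the checkpoint/restore cycle such a system relies on, using Docker's experimental CRIU-backed checkpoint commands (the container name my-service is hypothetical, and the Docker daemon must have experimental features enabled):

```python
import subprocess

def docker(*args: str) -> None:
    subprocess.run(["docker", *args], check=True)

# Freeze the running container's full process state into a named checkpoint.
docker("checkpoint", "create", "my-service", "cp1")

# Later, or on another host after copying the checkpoint data, resume the
# container from exactly where it stopped instead of cold-starting it.
docker("start", "--checkpoint", "cp1", "my-service")
```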
- GNU/Linux Systems Administration, Module 4: Local Administration
Local Administration. Josep Jorba Esteve. PID_00238577. Permission is granted to copy, distribute and modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation, with no invariant sections and no front-cover or back-cover texts. The license terms can be consulted at http://www.gnu.org/licenses/fdl-1.3.html.
Contents: Introduction. 1. Basic tools for the administrator: 1.1 graphical tools and command lines; 1.2 standards documents; 1.3 online system documentation; 1.4 package-management tools (TGZ packages; Fedora/Red Hat RPM packages; Debian DEB packages; new packaging formats: Snap and Flatpak); 1.5 generic administration tools; 1.6 other tools …
- Storage Administration Guide, SUSE Linux Enterprise Server 12 SP4
SUSE Linux Enterprise Server 12 SP4: Storage Administration Guide. Provides information about how to manage storage devices on a SUSE Linux Enterprise Server. Publication date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com
Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the invariant section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™, etc.) denote trademarks of SUSE and its affiliates; asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.
Contents: About This Guide (available documentation; giving feedback; documentation conventions; product life cycle and support, including the support statement for SUSE Linux Enterprise Server and technology previews). Part I: File Systems and Mounting. 1. Overview …
- Trusted Docker Containers and Trusted VMs in OpenStack
Raghu Yeluri, Abhishek Gupta. Outline: context (Docker security: top customer asks); Intel's focus (trusted Docker containers); who verifies trust?; reference architecture with OpenStack; demo; availability; call to action.
Docker overview in a slide: a lightweight, open-source engine for creating and deploying containers. It provides a workflow for running, building, and containerizing apps, and separates apps from where they run, enabling micro-services that scale by composition. Underlying building blocks: the Linux kernel's namespaces (isolation) plus cgroups (resource control). Components of Docker: the Docker Engine, a runtime for running and building containers; Docker repositories (Hub), a SaaS service for sharing and managing images; and Docker images (layers), shareable snapshots of software that hold apps, a container being a running instance of an image. Orchestration: OpenStack, Docker Swarm, Kubernetes, Mesos, Fleet, Project Atomic, Lattice, and others.
Docker security: five key customer asks. (1) How do you know the Docker host has integrity? Do you trust the Docker daemon, and that the host booted with integrity? (2) How do you verify Docker container integrity? Who wrote the image, do you trust it, and did the right image get launched? (3) Runtime protection of the Docker Engine and enhanced isolation: how can Intel help with runtime integrity? (4) Enterprise security features: compliance, manageability, identity authentication, and so on. (5) OpenStack as a single control plane for trusted VMs and trusted Docker containers.
Intel's focus: enable hardware-based integrity assurance for Docker containers (trusted Docker containers) in three areas: launch integrity of the Docker host, runtime integrity of the Docker host, and integrity of Docker images. Today's focus: integrity of the Docker host, and how to use it in OpenStack.
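Intel's hardware-rooted attestation chain cannot be reproduced in a few lines, but the "do you trust the image?" ask maps onto mechanisms such as Docker Content Trust, which rejects unsigned or tampered images. A minimal sketch (the registry and image name are hypothetical):

```python
import os
import subprocess

# With content trust enabled, `docker pull` verifies the tag's signature
# and fails for unsigned or tampered images.
env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
subprocess.run(
    ["docker", "pull", "registry.example.com/payments/app:1.0"],
    env=env,
    check=True,
)
```

This checks image provenance only; the launch-time and runtime host-integrity asks require the hardware-based attestation the deck describes.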
- Systematic Evaluation of Sandboxed Software Deployment for Real-Time Software on the Example of a Self-Driving Heavy Vehicle
Philip Masek, Magnus Thulin, Hugo Andrade, Christian Berger, and Ola Benderius.
Abstract: Companies developing and maintaining software-only products like web shops aim for establishing persistent links to their software running in the field. Monitoring data from real usage scenarios allows for a number of improvements in the software life cycle, such as quick identification and solution of issues, and elicitation of requirements from previously unexpected usage. While the processes of continuous integration, continuous deployment, and continuous experimentation using sandboxing technologies are becoming well established in said software-only products, adopting similar practices for the automotive domain is more complex, mainly due to real-time and safety constraints. In this paper, we systematically evaluate sandboxed software deployment in the context of a self-driving heavy vehicle that participated in the 2016 Grand Cooperative Driving Challenge (GCDC) in The Netherlands. We measured the system's scheduling precision after deploying applications in four different execution environments. Our results indicate that there is no significant difference in performance and overhead when sandboxed environments are used compared to natively deployed software. Thus, recent trends in software architecting, packaging, and maintenance using microservices encapsulated in sandboxes will help to realize similar software and system engineering for cyber-physical systems.
[Fig. 1: Volvo FH16 truck from the Chalmers Resource for Vehicle Research (Revere) laboratory that participated in the 2016 Grand Cooperative Driving Challenge in Helmond, NL.]
I. Introduction. Companies that produce and maintain software-only products, i.e. …
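A minimal sketch of the kind of scheduling-precision measurement the paper reports: request a fixed-period wakeup, record the deviation from each deadline, and compare the distributions between native and sandboxed runs. The 10 ms period is an assumed value, not taken from the paper:

```python
import time

PERIOD_NS = 10_000_000  # assumed 10 ms control-loop period

def measure_jitter(iterations: int = 1000) -> list[int]:
    # Deviation in nanoseconds between intended and actual wakeup times.
    deadline = time.monotonic_ns()
    samples = []
    for _ in range(iterations):
        deadline += PERIOD_NS
        remaining = deadline - time.monotonic_ns()
        if remaining > 0:
            time.sleep(remaining / 1e9)
        samples.append(time.monotonic_ns() - deadline)
    return samples

jitter = measure_jitter()
print(f"mean {sum(jitter) / len(jitter):.0f} ns, worst {max(jitter)} ns")
```

Running the same probe natively and inside a container yields directly comparable distributions.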
- On Exploiting Page Sharing in a Virtualized Environment: An Empirical Study of Virtualization Versus Lightweight Containers
Ashish Sonone, Anand Soni, Senthil Nathan, Umesh Bellur. Department of Computer Science and Engineering, IIT Bombay, Mumbai, India. ashishsonone, anandsoni, cendhu, [email protected]
Abstract: While virtualized solutions are firmly entrenched in cloud data centers to provide isolated execution environments, the chief overhead they suffer from is memory consumption. Even pages that are common to multiple virtual machines (VMs) on the same physical machine (PM) are not shared, and multiple copies exist, thereby draining valuable memory resources and capping the number of VMs that can be instantiated on a PM. Lightweight containers (LWCs), on the other hand, do not suffer from this situation, since virtually everything is shared via copy-on-write semantics. As a result, the capacity of a PM to host LWCs is far higher than for hosting equivalent VMs. In this paper, we evaluate a solution using uKSM to exploit memory-page redundancy among multiple …
[From the introduction:] … a faulty application can result in the failure of all applications executing on that machine. On the other hand, each virtual machine runs its own OS, so a faulty application cannot crash the host OS. Further, each application can run on any customized and optimized guest OS different from the host OS; for example, each guest OS can run its own I/O scheduler, optimized for the hosted application. In addition, the hypervisor provides many other features, such as live migration of virtual machines, high availability using periodic checkpointing, and storage migration.
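uKSM is a variant of the mainline kernel's KSM (kernel samepage merging). As a minimal sketch of the class of mechanism being evaluated, stock KSM can be enabled and observed through sysfs (requires root; KSM only merges regions that applications have marked with madvise(MADV_MERGEABLE), and uKSM's own knobs differ):

```python
def write_sysfs(path: str, value: str) -> None:
    with open(path, "w") as f:
        f.write(value)

# Start the KSM scanner and raise its scan rate above the default.
write_sysfs("/sys/kernel/mm/ksm/run", "1")
write_sysfs("/sys/kernel/mm/ksm/pages_to_scan", "1000")

# pages_sharing reports how many pages are currently deduplicated into
# shared copy-on-write copies, i.e. the memory the mechanism is saving.
with open("/sys/kernel/mm/ksm/pages_sharing") as f:
    print("pages sharing:", f.read().strip())
```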
- MesaLock Linux: Towards a Memory-Safe Linux Distribution
MesaLock Linux: towards a memory-safe Linux distribution. Mingshen Sun, MesaLock Linux maintainer, Baidu X-Lab, USA. Shanghai Jiao Tong University, 2018.
whoami: senior security researcher at Baidu X-Lab, Baidu USA; PhD, The Chinese University of Hong Kong; works on system security, mobile security, IoT security, and car hacking; projects include MesaLock Linux, TaintART, Pass for iOS, etc. mssun @ GitHub | https://mssun.me
The talk covers why, what, and how. Why: memory corruption occurs in a computer program when the contents of a memory location are unintentionally modified; this is termed violating memory safety. Memory safety is the state of being protected from various software bugs and security vulnerabilities when dealing with memory access, such as buffer overflows and dangling pointers. The deck includes a stack-buffer-overflow demo (https://youtu.be/T03idxny9jE). Types of memory errors: access errors (buffer overflow, race condition, use after free, uninitialized variables), memory leaks, and double frees. A user-space example is CVE-2017-13089, a stack-based buffer overflow in wget's handling of chunked, encoded HTTP responses: by tricking an unsuspecting user into connecting to a malicious HTTP server, an attacker could exploit this flaw to potentially execute arbitrary code (https://bugzilla.redhat.com/show_bug.cgi?id=1505444; PoC: https://github.com/r1b/CVE-2017-13089).
What: a Linux distribution with a memory-safe user space. A Linux distribution (often abbreviated as distro) is an operating system made from a software collection, based upon the Linux kernel and, often, a package management system. Examples: server distros (CentOS, Fedora, Red Hat, Debian); desktop (Ubuntu); mobile (Android); embedded (OpenWrt, Yocto); hard-core (Arch Linux, Gentoo); miscellaneous (ChromeOS, Alpine Linux). On security and safety: Gentoo Hardened enables several risk-mitigating options in the toolchain and supports PaX, grsecurity, SELinux, TPE, and more.
- USB Composite Gadget Using CONFIG-FS on DRA7xx Devices
Application Report SPRACB5, September 2017. Author: RaviB.
Abstract: This application note explains how to create a USB composite gadget, network control model (NCM) plus abstract control model (ACM), from user space using Linux CONFIG-FS on the DRA7xx platform.
Contents: 1. Introduction; 2. USB Composite Gadget Using CONFIG-FS; 3. Creating the Composite Gadget From User Space; 4. References.
List of figures: block diagram of the USB composite gadget; selection of CONFIGFS through menuconfig; selection of USB configuration through menuconfig; composite-gadget configuration items as files and directories; VID, PID, and manufacturer-string configuration; kernel logs showing enumeration of the USB composite gadget by the host; ping …
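The user-space flow the note documents consists of plain file operations under the gadget configfs mount. A minimal sketch of composing an NCM-plus-ACM gadget; the paths follow the standard Linux gadget configfs layout, while the VID/PID shown are placeholders rather than TI's values (requires root, a mounted configfs, and an available UDC):

```python
import os

G = "/sys/kernel/config/usb_gadget/g1"

def write(path: str, value: str) -> None:
    with open(path, "w") as f:
        f.write(value)

# Gadget skeleton: one configuration and two functions.
os.makedirs(G + "/configs/c.1", exist_ok=True)
os.makedirs(G + "/functions/ncm.usb0", exist_ok=True)  # network (NCM)
os.makedirs(G + "/functions/acm.usb0", exist_ok=True)  # serial (ACM)
write(G + "/idVendor", "0x1d6b")   # placeholder vendor id
write(G + "/idProduct", "0x0104")  # placeholder product id

# Linking both functions into the configuration makes it a composite gadget.
os.symlink(G + "/functions/ncm.usb0", G + "/configs/c.1/ncm.usb0")
os.symlink(G + "/functions/acm.usb0", G + "/configs/c.1/acm.usb0")

# Binding the gadget to a UDC starts enumeration on the host side.
write(G + "/UDC", os.listdir("/sys/class/udc")[0])
```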
- Adding Generic Process Containers to the Linux Kernel
Paul B. Menage, Google, Inc. [email protected]
Abstract: While Linux provides copious monitoring and control options for individual processes, it has less support for applying the same operations efficiently to related groups of processes. This has led to multiple proposals for subtly different mechanisms for process aggregation for resource control and isolation. Even though some of these efforts could conceptually operate well together, merging each of them in their current states would lead to duplication in core kernel data structures and routines. The Containers framework, based on the existing cpusets mechanism, provides the generic process-grouping features required by the various resource controllers and other process-affecting subsystems. The result is to reduce the code (and kernel impact) required for such subsystems, and to provide a common interface with greater scope for cooperation. This paper looks at the challenges in meeting the needs of all the stakeholders, which include low overhead, …
[From the paper's definitions:] Technically, resource control and isolation are related; both prevent a process from having unrestricted access to even the standard abstraction of resources provided by the Unix kernel. For the purposes of this paper, we use the following definitions. Resource control is any mechanism which can do either or both of: tracking how much of a resource is being consumed by a set of processes; and imposing quantitative limits on that consumption, either absolutely, or just in times of contention. Typically, resource control is visible to the processes being controlled. Namespace isolation is a mechanism which adds an additional indirection or translation layer to the naming/visibility of some Unix resource space (such as process IDs or network interfaces) for a specific set of processes.
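The framework proposed here is the ancestor of today's control groups. As a minimal sketch of the grouping idea using the modern cgroup v2 interface (which postdates the paper; requires root and the memory controller enabled in the parent's cgroup.subtree_control):

```python
import os

CG = "/sys/fs/cgroup/demo"

def write(path: str, value: str) -> None:
    with open(path, "w") as f:
        f.write(value)

os.makedirs(CG, exist_ok=True)

# Resource control: a quantitative limit on the group's memory consumption.
write(CG + "/memory.max", str(256 * 1024 * 1024))  # 256 MiB

# Grouping: a process written to cgroup.procs (with its future children)
# is accounted against, and constrained by, this group's limits.
write(CG + "/cgroup.procs", str(os.getpid()))
```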
- Large-Scale Cluster Management at Google with Borg
Abhishek Verma, Luis Pedrosa, Madhukar Korupolu, David Oppenheimer, Eric Tune, John Wilkes. Google Inc.
Abstract: Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines. It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job-specification language, name-service integration, real-time job monitoring, and tools to analyze and simulate system behavior. We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it, concluding with a set of qualitative observations made from operating Borg in production for more than a decade.
[Figure 1: the high-level architecture of Borg. Per cell, a replicated BorgMaster (UI shards, a scheduler, a persistent Paxos store, and link shards) is driven by config files, command-line tools such as borgcfg, and web browsers, and manages Borglet agents on worker nodes; only a tiny fraction of the thousands of worker nodes are shown.]
1. Introduction. The cluster management system we internally call Borg …
- Evaluating and Improving LXC Container Migration between Cloudlets Using Multipath TCP
By Yuqing Qiu. A thesis submitted to the Faculty of Graduate and Postdoctoral Affairs in partial fulfillment of the requirements for the degree of Master of Applied Science in Electrical and Computer Engineering. Carleton University, Ottawa, Ontario. © 2016, Yuqing Qiu.
Abstract: The advent of the cloudlet concept, a "small data center" close to users at the edge, aims to improve the quality of experience (QoE) of end users by providing resources within a one-hop distance. Many researchers have proposed using virtual machines (VMs) as such service-provisioning servers. However, seeing the potential of containers, this thesis adopts Linux Containers (LXC) as the cloudlet platform. To facilitate container migration between cloudlets, Checkpoint and Restore in Userspace (CRIU) has been chosen as the migration tool. Since the migration process goes through the wide area network (WAN), which may experience network failures, the Multipath TCP (MPTCP) protocol is adopted to address the challenge: the multiple subflows established within an MPTCP connection can improve the resilience of the migration process and reduce migration time. Experimental results show that LXC containers are suitable candidates for the problem and that the MPTCP protocol is effective in enhancing the migration process.
Acknowledgement: I would like to express my sincerest gratitude to my principal supervisor Dr. Chung-Horng Lung, who has provided me with valuable guidance throughout the entire research experience. His professionalism, patience, understanding and encouragement have always been my beacons of light whenever I go through difficulties. My gratitude also goes to my co-supervisor Dr. …
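The migration step the thesis builds on is LXC's CRIU integration. A minimal sketch of a dump-transfer-restore cycle; the container name web and the checkpoint directory are hypothetical, and shipping the image files between cloudlets (the part MPTCP hardens and accelerates) is elided:

```python
import subprocess

def lxc_checkpoint(*args: str) -> None:
    subprocess.run(["lxc-checkpoint", *args], check=True)

# Source cloudlet: dump the container's state via CRIU and stop it (-s).
lxc_checkpoint("-n", "web", "-D", "/tmp/ckpt", "-s")

# ... transfer /tmp/ckpt to the destination cloudlet, e.g. over MPTCP ...

# Destination cloudlet: restore (-r) the container from the image files.
lxc_checkpoint("-n", "web", "-D", "/tmp/ckpt", "-r")
```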