Towards a Secure IoT Computing Platform Using Linux-Based Containers


Towards a Secure IoT Computing Platform Using Linux-Based Containers

Marcus Hufvudsson
Information Security, master's level (120 credits), 2017
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering

Abstract

The Internet of Things (IoT) consists of small, sensing, network-enabled computing devices that can extend smart behaviour into resource-constrained domains. This thesis focuses on evaluating the viability of Linux containers in relation to IoT devices. Three research questions are posed to investigate various aspects of this. (1) Can any guidelines and best practices be derived from creating a Linux-container-based, security-enhanced IoT platform? (2) Can the LiCShield project be extended to build dynamic, default deny seccomp configurations? (3) Are Linux containers viable on IoT platforms with regard to operational performance impact? To answer these questions, a literature review was conducted, research gaps were identified and a research methodology was selected. A Linux-based container platform was then created in which applications could be run. Experimentation was conducted on the platform and operational measurements were collected. A number of interesting results were produced during the project. In relation to the first research question, it was discovered that the LXC templating code created during the project could probably benefit other Linux container projects as well as the LXC project itself. Secondly, it was found that a robust, layered, containerized security architecture could be created by utilizing basic container configurations and by drawing on best practices from LXC and Docker. In relation to the second research question, a proof-of-concept system was created to profile applications and build dynamic, default deny seccomp configurations. Analysis of the system shows that the developed method is viable. In relation to the final research question, container overhead with regard to CPU, memory, network I/O and storage was measured. In this project there was no CPU overhead and only a slight performance decrease of 0.1 % on memory operations. With regard to network I/O, a speed decrease of 0.2 % was observed when a container received data and utilized NAT. On the other hand, while the container was sending data, a speed increase of 1.4 % was observed in bridge mode and an increase of 0.9 % while utilizing NAT. Regarding storage overhead, a total of 508 KB of base overhead was added to each container on creation. Given these findings, the overhead that containers introduce is considered negligible, and containers are thus deemed viable on IoT devices.
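The second research question turns on the idea of a "default deny" seccomp configuration: every system call is refused unless it appears on an allow list built by profiling the application. The sketch below is only a minimal illustration of that idea using the libseccomp C API; it is not the thesis's LXC-based profiling system, and the allow list here is an assumed stand-in for whatever a profiling run would actually record.

```c
#include <errno.h>
#include <seccomp.h>
#include <unistd.h>

int main(void)
{
    /* Default action: deny every system call with EPERM. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(EPERM));
    if (ctx == NULL)
        return 1;

    /* Allow only the syscalls the (hypothetical) profiling run observed. */
    int allowed[] = { SCMP_SYS(read), SCMP_SYS(write), SCMP_SYS(brk),
                      SCMP_SYS(munmap), SCMP_SYS(exit), SCMP_SYS(exit_group) };
    int count = sizeof(allowed) / sizeof(allowed[0]);

    for (int i = 0; i < count; i++) {
        if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, allowed[i], 0) < 0) {
            seccomp_release(ctx);
            return 1;
        }
    }

    /* Load the filter into the kernel; from this point on, any syscall
     * not on the allow list fails with EPERM. */
    if (seccomp_load(ctx) < 0) {
        seccomp_release(ctx);
        return 1;
    }
    seccomp_release(ctx);

    const char msg[] = "write is still on the allow list\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    return 0;
}
```

Compiled with gcc and linked against libseccomp (-lseccomp), the program can still write to standard output, while any syscall outside the allow list is rejected with EPERM rather than killing the process.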
Contents

1 Introduction
  1.1 IoT Classification
  1.2 Powerful IoT Use-Cases
  1.3 Introduction to Containers
  1.4 Problem Statement
  1.5 Research Questions
  1.6 Research Objectives
  1.7 Expected Contributions
  1.8 Delimitations
  1.9 Thesis Outline
2 Background
  2.1 Linux-Based Containers
  2.2 Linux Kernel Features
    2.2.1 Namespaces
    2.2.2 Root File System
    2.2.3 Cgroups
    2.2.4 Capabilities
    2.2.5 Secure Computing Mode
    2.2.6 Linux Security Modules
3 Related Work
  3.1 Container Security Studies
  3.2 Container Performance Studies
  3.3 Comparative Study
  3.4 Research Gap
4 Research Methodology
  4.1 Methodology Implementation
    4.1.1 Problem Identification and Motivation
    4.1.2 Definition of the Objectives for a Solution
    4.1.3 Design and Development
    4.1.4 Demonstration
    4.1.5 Evaluation
    4.1.6 Communication
5 Design
  5.1 Phase One - IoT Container Platform
  5.2 Phase Two - Dynamic Seccomp Profiling
  5.3 Phase Three - Container Performance Measurements
  5.4 Project Overview and Planning
6 Implementation
  6.1 Phase One - IoT Container Platform
    6.1.1 Base Operating System
    6.1.2 LXC Container Platform
    6.1.3 UTS Namespace
    6.1.4 Networking Namespace
    6.1.5 Mount Namespace
    6.1.6 Root File System
    6.1.7 Cgroups
    6.1.8 Capabilities
    6.1.9 Secure Computing Mode
  6.2 Phase Two - Dynamic Seccomp Profiling
    6.2.1 First Iteration
    6.2.2 Second Iteration
    6.2.3 Third Iteration
  6.3 Phase Three - Container Performance Measurements
7 Results
  7.1 Phase One - IoT Container Platform
    7.1.1 Base Operating System
    7.1.2 LXC Container Platform
    7.1.3 UTS Namespace
    7.1.4 Networking Namespace
    7.1.5 Mount Namespace
    7.1.6 Root File System
    7.1.7 Cgroups
    7.1.8 Capabilities
    7.1.9 Secure Computing Mode
  7.2 Phase Two - Dynamic Seccomp Profiling
  7.3 Phase Three - Container Performance Measurements
    7.3.1 CPU & Memory Operation Measurements
    7.3.2 Network Measurements
8 Discussion
9 Conclusions
  9.1 Future Work

Chapter 1

Introduction

The term Internet of Things was, according to Sundmaeker et al. [1], first mentioned by the founders of the MIT Auto-ID Center. The term was then picked up by various news organizations. In 2005, the International Telecommunication Union (ITU) published a report on the IoT concept and its meaning. Sundmaeker et al. [1] summarize the ITU's report on the meaning of IoT:

"The ITU report adopts a comprehensive and holistic approach by suggesting that the Internet of Things will connect the world's objects in both a sensory and intelligent manner through combining technological developments in item identification ("tagging things"), sensors and wireless sensor networks ("feeling things"), embedded systems ("thinking things") and nanotechnology ("shrinking things")."

In essence, the definition boils down to relatively small devices (compared to the more common traditional computers) which have the ability to communicate with other devices. The smart devices that constitute the IoT have steadily been making progress into society and will most likely continue to do so. Aerospace, automotive, telecommunications, housing, medical, agriculture, retail, processing industries and transportation are just some examples of industries that have seen adoption of IoT [1].

An aspect in which the IoT concept diverges from traditional computing is its inherently heterogeneous nature. IoT devices can range from small identification tags (RFID), simple sensors and actuators to smart, more powerful, distributed intelligence devices (or even a mixture of all of them). These different types of devices must often interact with each other to provide a meaningful function. One example of how these various heterogeneous devices could work together to form a system is given by Atzori et al. [2]. In their health care example, tracking, identification/authentication, data collection and sensing are used in conjunction to provide the services needed in the domain. Tracking with the help of RFID systems could, for example, be useful to identify the location of a patient.
Similarly, identification could be used to associate medication with the patient, and data collection could be performed by an intermediary device to which sensor nodes are attached, keeping track of the patient's status.

1.1 IoT Classification

Bormann et al. [3] have proposed a set of terms to be used in the IoT domain. This thesis makes use of some of these terms to establish a common ground for the definitions in the work presented. In their paper, Bormann et al. define a "constrained node" by comparing it to a node operating on the Internet. In this context, an "Internet node" could for example be a server or a laptop. In contrast to an Internet node, a constrained node is one that costs less and/or exhibits "physical constraints on characteristics such as size, weight, and available power and energy" [3]. These constraints lead to lower expectations of the constrained device. Bormann et al. recognize that this definition is not very rigorous. It does, however, offer a relativistic definition that will always point toward devices significantly less powerful than the state-of-the-art technology used in Internet nodes. Bormann et al. [3] further exemplify the definition of a constrained node by providing a list of typical facets exhibited:

• constraints on the maximum code complexity (ROM/Flash)
• constraints on the size of state and buffers (RAM)
• constraints on the amount of computation feasible in a period of time ("processing power")
• constraints on the available power
• constraints on user interface and accessibility in deployment (ability to set keys, update software, etc.)

The first two items in this list, "code complexity (ROM/Flash)" and "size of state and buffers (RAM)", are used to define three specific classes of constrained nodes in order to further extend and clarify the definition. Bormann et al. provide a table of the different classes.

Table 1: Classes of Constrained Devices [3]

Name      Data size (e.g., RAM)   Code size (e.g., Flash)
Class 0   << 10 KiB               << 100 KiB
Class 1   ∼ 10 KiB                ∼ 100 KiB
Class 2   ∼ 50 KiB                ∼ 250 KiB

As can be seen in Table 1, a constrained node differs significantly from an Internet node in terms of memory capacity. The table is constructed based on "commercially available chips and design cores for constrained devices" [3]. Bormann et al. note that the boundaries of Table 1 will change over time as technology progresses, and perhaps they already have, since this thesis was written three years after [3] was published. The advent of single-board computers such as the Raspberry Pi [4] could very well have shifted the table, although this is speculation.
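To make the classification concrete, the hypothetical helper below maps a device's memory budget onto the classes of Table 1. The numeric boundaries are an illustrative reading of the table's approximate values, not normative limits, and the example devices are assumed for the sake of the comparison made in the text above.

```c
#include <stdio.h>

/* Map a device's RAM and code memory (in KiB) to the constrained-node
 * classes of Table 1 (Bormann et al. [3]); boundaries are approximate. */
static const char *constrained_class(unsigned long ram_kib, unsigned long flash_kib)
{
    if (ram_kib < 10 && flash_kib < 100)
        return "Class 0";
    if (ram_kib <= 10 && flash_kib <= 100)
        return "Class 1";
    if (ram_kib <= 50 && flash_kib <= 250)
        return "Class 2";
    return "outside the constrained classes";
}

int main(void)
{
    /* A typical Class 1 sensor node versus a Raspberry Pi-style
     * single-board computer (roughly 1 GiB RAM), which falls far
     * outside Table 1, as noted in the text above. */
    printf("sensor node (10 KiB RAM / 100 KiB flash): %s\n",
           constrained_class(10, 100));
    printf("single-board computer (1 GiB RAM): %s\n",
           constrained_class(1024UL * 1024UL, 4UL * 1024UL * 1024UL));
    return 0;
}
```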