Supporting Docker in Emulab-Based Network Testbeds

David Johnson, Elijah Grubb, Eric Eide
University of Utah
{johnsond, grubb, eeide}@cs.utah.edu

Abstract

Researchers conduct experiments in a variety of computing environments, including dedicated testbeds and commercial clouds, and they need convenient mechanisms for deploying their software within these disparate platforms. To address this need, we have extended Emulab so that it can instantiate and configure container-based virtual devices using Docker images. Docker is a de facto standard for packaging and deploying software in cloud environments; now, researchers can use Docker to package and deploy software within Emulab-based testbeds as well. We describe how Emulab incorporates Docker and how it extends Docker images to support the interactivity that researchers expect within a testbed. We show that Emulab can use many popular Docker images to create testbed experiments. We expect that Emulab's support for Docker will make it easier for researchers to move their activities freely, both into the testbed and out into production.

1 Introduction

Reliably initializing and configuring devices is essential for conducting repeatable experiments in a testbed. Since its inception, the Emulab testbed-management system [17] has used a custom disk-imaging subsystem for installing operating systems and other software on devices. That subsystem, called Frisbee [11], was built and optimized for distributing disk images within a controlled cluster of machines—and only within a cluster. Today, however, many Emulab users rely on multiple platforms to conduct experiments over the course of a full study. In addition to using multiple clusters (e.g., Utah Emulab, CloudLab, and Chameleon)¹ to run tests atop myriad hardware, a cybersecurity researcher might debug an experiment by scaling it down to run in VMs on a laptop, and perform "production testing" by scaling it up for a commercial cloud. An experimenter who moves among multiple platforms needs to deploy software and configure devices in each of these environments—a task that Frisbee was never designed to tackle.

Happily for testbed users, the DevOps community has already met the challenge. Docker [6] is a popular tool that allows a person to package a collection of software into an image and then instantiate virtual devices based on that image. Docker images are portable across multiple infrastructures because they are based on containers, which run atop an infrastructure-managed OS kernel and hardware device. The Docker ecosystem includes public repositories of popular "virtual appliances" and tools that allow people to create and publish their own images.

We have extended the Emulab testbed-management software to support the allocation and configuration of container-based devices using Docker images. The main benefit is to make it easier for experimenters to move activities into Emulab-based testbeds—including the Utah Emulab site, CloudLab, PhantomNet, and POWDER²—and out to production or other environments. An additional and important benefit is scale. Because containers are a lightweight virtualization technology, testbed users can run experiments that involve large numbers of virtual nodes while using only modest physical resources.

Obtaining these benefits was not straightforward. Our goal was to add Docker support to Emulab in a way that preserves both the Emulab features that testbed users expect and the Docker features that users of that ecosystem expect. Smooth integration requires that existing Docker images "just work" as the bases of devices in Emulab. This paper describes how we met this challenge.

We make two contributions. First, we present the design and implementation of Docker support within Emulab. Emulab automatically adapts Docker images for use in a testbed: this allows container-based virtual devices to (1) be "first-class citizens" within experiments and (2) support features that researchers expect of devices within Emulab, such as interactive login. Second, we show that Emulab's Docker support works with existing, third-party Docker images. Emulab can automatically adapt 52 of the 60 most popular images from Docker Hub into the Emulab environment and instantiate containers from them (§5.1). The process also works for images from research projects that we selected as case studies (§5.2). The time to adapt a Docker image for Emulab is suitable for on-demand conversion, and Emulab can quickly create large experiments using Docker images (§5.3).

¹ https://www.emulab.net/, https://cloudlab.us/, https://www.chameleoncloud.org/
² https://phantomnet.us/, https://powderwireless.net/

2 Background: Emulab

Emulab is testbed-management software that supports controlled and repeatable experiments in systems, networking, cybersecurity, and other areas of computing. The Utah Emulab testbed manages hundreds of bare-metal servers that have multiple network interfaces to a variety of layer 1 and 2 switches. It provisions subsets of those resources into isolated, emulated networks of physical and virtual nodes for users on demand. Emulab resembles the control software of today's clouds, which provision virtual machines and networks atop physical substrate, although Emulab in general offers more kinds of resources, lower-level resource control, and stronger isolation.

To allocate resources in Emulab, a user writes a profile, which is a description of everything needed to build an emulated, networked research environment: physical and/or virtual hardware (servers, storage, switches), network configuration (links and LANs, IP addresses, traffic shaping), and software (operating systems, packages, scripts). The user then instantiates the profile to create an experiment. Emulab provisions the experiment with the hardware and software described in the profile. The profile may request specific hardware and configuration, such as requesting specific physical machines, or asserting that some virtual machines should run on specific physical host nodes. Alternatively, the user can leave some or all of the node and network configuration details to Emulab, which will find a best-fit mapping onto its available physical resources.

Once an experiment is allocated, its creator has full control over its resources. Users interact with nodes over a remotely accessible control network. In addition to simply logging into nodes and running installed software, a user can change software and configuration, capture disk images, change traffic-shaping parameters, take down links, reboot nodes, and so on.
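To make the profile workflow concrete, the sketch below shows a minimal profile in the geni-lib Python style used by Emulab-family testbeds. It describes two Docker-based nodes joined by a link. The DockerContainer node type and the docker_extimage attribute follow Emulab's documented Docker profile extensions as we recall them; treat the exact names as assumptions rather than a definitive reference.

    # A minimal Emulab profile sketch (geni-lib style). Assumes the
    # portal context and the Docker node extensions provided by Emulab;
    # exact class/attribute names may differ from Emulab's current API.
    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    # Two container-based nodes built from a stock Docker Hub image.
    n1 = request.DockerContainer("node1")
    n1.docker_extimage = "ubuntu:18.04"
    n2 = request.DockerContainer("node2")
    n2.docker_extimage = "ubuntu:18.04"

    # An emulated link between the two nodes; Emulab maps this onto
    # its switches and applies any requested traffic shaping.
    link = request.Link(members=[n1, n2])

    # Emit the RSpec that Emulab instantiates as an experiment.
    pc.printRequestRSpec(request)

Instantiating such a profile yields an experiment whose nodes the user can log into and reconfigure, exactly as with physical or VM-based nodes.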
3 Leveraging Docker in a Network Testbed

The core abstraction that a network testbed provides to its users is a collection of physical or virtual nodes, connected into networks of links and LANs. A user provides the testbed with a description of the environment he or she requires: the hardware, network, software, and automation needed to run the user's experiment(s). The testbed maps this description to a subset of its resources, configures them, and makes them available to the user.

Docker enables its users to create, deploy, and manage containers, which are process-level "virtual machines" facilitated by the underlying operating system abstractions. A container is an execution environment that sits atop a host's operating system kernel and hardware resources; a collection of processes runs within the container, and only selected resources are visible to the processes running inside it. These may be very different from the resources that are available to processes that run on the host outside of the container; indeed, through the careful selection of resources, containers are generally configured to isolate the processes that run within them. Docker deploys containers along with filesystem images: the user-space software running within the container is therefore independent of the software provided by the container's host.

Prior to Docker, a variety of other systems (e.g., lxc, OpenVZ, Linux vservers, and FreeBSD jails) provided similar container experiences. The overwhelming success of Docker can be attributed to its overall accessibility to users, especially its image-management toolchain and central, public image repository. Any system that seeks to build on top of Docker, therefore, must be careful to preserve the user experience that makes Docker attractive.

The primary use case envisioned by Docker's developers was not that a Docker container would serve as a kind of node in a testbed. Still, we assert that Docker can be successfully integrated into testbeds, and that it can provide useful provenance and reproducibility features to experimenters. Below, we examine the possibility of utilizing Docker as a type of network testbed node in Emulab and discuss the challenges (C1–C4) we encountered while designing our integration. Our goals were not only to provide a testbed-oriented container orchestration service, but also to allow users to seamlessly use the Docker toolchain in concert with Emulab's services and features.
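The isolation described above is easy to observe from the Docker SDK for Python (docker-py); the snippet below is a small illustration under the assumption that a local Docker daemon and the stock alpine image are available. A process listing run inside a container reports only the container's own processes, not the host's.

    # Illustration of container isolation via the Docker SDK for Python
    # (docker-py). Assumes a local Docker daemon is reachable.
    import docker

    client = docker.from_env()

    # 'ps aux' executes inside a fresh container: its PID namespace is
    # isolated, so the listing shows only the container's processes.
    output = client.containers.run("alpine", "ps aux", remove=True)
    print(output.decode())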
3.1 Broadening the Process Model (C1)

The primary use case that Docker supports involves a large collection of containers, potentially spanning many machines, in which each container runs a single application (one or more processes launched by a single parent) that provides a specific service (i.e., a microservice architecture). For instance, there are popular Docker images that run memcached and redis³ as services. In this model, container images are tailored to support only their single service, unencumbered by unnecessary software or configuration. Many Docker images do not include an init daemon to launch and monitor processes, or even a basic set of user-space tools and libraries.

The Docker community discourages whole-OS-like containers that run an init and the traditional litany of supporting services, e.g., syslogd, sshd, and crond [9]. The rationale is that these services are not needed in every container: in a large-scale deployment, if a single service container crashes, orchestration software can simply spawn another container to fill the void. Docker 1.13 (API version 1.25) added the ability to run a containerized init.
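As a sketch of the two process models just described (single-service versus init-supervised), the docker-py calls below start a stock redis image as a bare single-application container and again with Docker's optional injected init; the init=True flag corresponds to the Docker 1.13 feature referenced above, though the container names used here are arbitrary.

    # Sketch of the two process models using the Docker SDK for Python.
    # Assumes a local Docker daemon; container names are arbitrary.
    import docker

    client = docker.from_env()

    # Microservice style: redis-server is the container's only
    # application, running as PID 1 with no init or support services.
    bare = client.containers.run("redis", detach=True, name="svc-bare")

    # Docker 1.13+ (API 1.25) can inject a minimal init as PID 1 to
    # reap zombies and forward signals to the service process.
    managed = client.containers.run("redis", detach=True, init=True,
                                    name="svc-init")

    for c in (bare, managed):
        c.stop()
        c.remove()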