Multiple Instances of the Global Linux Namespaces

Eric W. Biederman, Linux Networx
[email protected]

Abstract

Currently Linux has the filesystem namespace for mounts, which is beginning to prove useful. By adding additional namespaces for process ids, SYS V IPC, the network stack, user ids, and probably others we can, at a trivial cost, extend the UNIX concept and make novel uses of Linux possible. Multiple instances of a namespace simply means that you can have two things with the same name.

For servers, the power of computers is growing, and it has become possible for a single server to easily fulfill the tasks that previously required multiple servers. Hypervisor solutions like Xen are nice, but they impose a performance penalty and they do not easily allow resources to be shared between multiple servers.

For clusters, application migration and preemption are interesting cases, but they are almost impossibly hard because you cannot restart an application once you have moved it to a new machine: usually there are resource name conflicts.

For users, certain desktop applications interface with the outside world and are large and hard to secure. It would be nice if those applications could be run in their own little world to limit what a security breach could compromise.

Several implementations of this basic idea have been done successfully. Now the work is to create a clean implementation that can be merged into the Linux kernel. The discussion has begun on the linux-kernel list and things are slowly progressing.

1 Introduction

1.1 High Performance Computing

I have been working with high performance clusters for several years and the situation is painful. Each Linux box in the cluster is referred to as a node, and applications running or queued to run on a cluster are jobs.

Jobs are run by a batch scheduler and, once launched, each job runs to completion, typically consuming 99% of the resources on the nodes it is running on.

In practice a job cannot be suspended, swapped, or even moved to a different set of nodes once it is launched. This is the oldest and most primitive way of running a computer. Given the long runs and high computation overhead of HPC jobs it isn’t a bad fit for HPC environments, but it isn’t a really good fit either. Linux has much more modern facilities. What prevents us from just using them?

HPC jobs currently can be suspended, but that just takes them off the cpu. If there is sufficient swap there is a chance the jobs can even be pushed to swap, but frequently these applications lock pages in memory so they can be ensured of low-latency communication.

The key problem is simply having multiple machines and multiple kernels. In general, how to take an application running under one Linux kernel and move it completely to another kernel is an unsolved problem.

The problem is unsolved not because it is fundamentally hard, but simply because it has not been a problem in the UNIX environment. Most applications don’t need multiple machines or even big machines to run on (especially with Moore’s law exponentially increasing the power of small machines). For many of the rest, large multiprocessor systems have been large enough.

What has changed is the economic observation that a cluster of small commodity machines is much cheaper than, and equally as fast as, a large supercomputer.

The other reason this problem has not been solved (besides the fact that most people working on it are researchers) is that it is not immediately obvious what a general solution is. Nothing quite like it has been done before, so you cannot look into a textbook or into the archives of history and find a solution, which in the broad strokes of operating system theory is a rarity.

The hard part of the problem also does not lie in the obvious place people first look, namely how to save all of the state of an application. Instead, the problem is how to restore a saved application so that it runs successfully on another machine.

The problem with restoring a saved application is all of the global resources an application uses: process ids, SYS V IPC identifiers, filenames, and the like. When you restore an application on another machine there is no guarantee that it can reuse the same global identifiers, as another process on that machine may already be using them.
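As a concrete illustration of this conflict (not from the paper; the key value 0x1234 and the segment size are purely illustrative), the short C sketch below tries to recreate a checkpointed job’s SYS V IPC shared memory segment under its original key. The first “restore” succeeds; a second restore on the same machine fails because the identifier is already taken, which is exactly the collision that being able to repeat identifiers would avoid.

    /* identifier_conflict.c - hedged sketch of the restore-time collision
     * described above.  The key value is illustrative only. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        key_t key = 0x1234;            /* key recorded at checkpoint time */

        /* First restore succeeds and takes ownership of the key. */
        int id = shmget(key, 4096, IPC_CREAT | IPC_EXCL | 0600);
        if (id < 0) {
            perror("first shmget");
            return 1;
        }
        printf("restored segment, id %d\n", id);

        /* A second restore of the same job on the same machine collides,
         * because SYS V IPC keys are global to the whole system. */
        if (shmget(key, 4096, IPC_CREAT | IPC_EXCL | 0600) < 0)
            printf("second restore fails: %s\n", strerror(errno));

        shmctl(id, IPC_RMID, NULL);    /* clean up the segment */
        return 0;
    }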
There are two general approaches to solving this problem: modify things so these machine-global identifiers are unique across all machines in a cluster, or modify things so these machine-global identifiers can be repeated on a single machine.

Many attempts have been made to scale global identifiers cluster-wide (Mosix, OpenSSI, and bproc, to name a few) and all of them have had to work hard to scale. So I chose to go with an implementation that will easily scale to the largest of clusters, with no communication needed to do so.

This has the added advantage that in a cluster it does not change the programming model presented to the user; some machines will simply now appear as multiple machines. As the rise of the internet has shown, building applications that use multiple machines is not foreign to the rest of the computing world either.

To make this happen I need to solve the challenging problem of how to refactor the UNIX/Linux API so that we can have multiple instances of the global Linux namespaces. The Plan 9 inspired mount/filesystem namespace has already shown how this can be done and is slowly proving useful.

1.2 Jails

Outside the realm of high performance computing, people have been restricting their server applications to chroot jails for years. The problems with chroot jails have become well understood and people have begun fixing them. First BSD jails, and then Solaris containers, are some of the better known examples.

Under Linux the open source community has not been idle. There is the linux-jail project, VServer, OpenVZ, and related projects like SELinux, UML, and Xen.

Jails are a powerful general purpose tool useful for a great variety of things. In resource utilization jails are cheap; dynamically loading glibc is likely to consume more memory than the additional kernel state needed to track a jail. Jails allow applications to be run in simple, stripped down environments, increasing security and decreasing maintenance costs, while leaving system administrators with the familiar UNIX environment.

The only problem with the current general purpose implementations of jails under Linux is that nothing has been merged into the mainline kernel, and the code from the various projects is not really mergeable as it stands. The closest I have seen is the code from the linux-jail project, and that is simply because it is less general purpose and is implemented completely as a Linux security module.

Allowing multiple instances of global names, which is needed to restore a migrated application, is a more constrained problem than simply implementing a jail. But a jail that ensures you can have multiple instances of all of the global names is a powerful general purpose jail that you can run just about anything in. So the two problems can share a common kernel solution.
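For reference, the classic chroot jail this section starts from is nothing more than a change of filesystem root. Below is a rough sketch, assuming a pre-populated root at /srv/jail (a hypothetical path) and root privileges; note that process ids, SYS V IPC, the network stack, and user ids all remain shared with the rest of the machine, which is precisely the gap the namespace work is meant to close.

    /* chroot_jail.c - minimal sketch of a traditional chroot jail.
     * /srv/jail is an assumed, already-populated root directory. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Confine the filesystem view; requires root (CAP_SYS_CHROOT). */
        if (chroot("/srv/jail") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }

        /* Run the jailed service; /bin/sh must exist inside the jail.
         * Everything outside the filesystem view is still shared. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }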
curity, and decreasing maintenance costs, while It should be possible to have a jail inside a jail leaving system administrators with the familiar inside a jail forever, or at least until there don’t UNIX environment. exist the resources to support it. The only problem with the current general pur- pose implementation of jails under Linux is nothing has been merged into the mainline ker- 2 Namespaces nel, and the code from the various projects is not really mergable as it stands. The closest I have seen is the code from the linux-jail project, How much work is this? Looking at the ex- and that is simply because it is less general pur- isting patches it appears that 10,000 to 20,000 pose and implemented completely as a Linux lines of code will ultimately need to be touched. security module. The core of Linux is about 130,000 lines of code, so we will need to touch between 7% and Allowing multiple instances of global names 15% of the core kernel code. Which clearly which is needed to restore a migrated applica- indicates that one giant patch to do everything tion is a more constrained problem than that of even if it was perfect would be rejected simply simply implementing a jail. But a jail that en- because it is too large to be reviewed. sures you can have multiple instances of all of the global names is a powerful general purpose In Linux there are multiple classes of global jail that you can run just about anything in. So identifiers (i.e. process id, SYS V IPC keys, the two problems can share a common kernel user ids). Each class of identifier can be thought solution. of living in its own namespace. 1.3 Future Directions This gives us a natural decomposition to the problem, allowing each namespace to be mod- ified separately so we can support multiple in- A cheap and always available jail mecha- stances of that namespace. Unfortunately this nism is also potentially quite useful outside also increases the difficulty of the problem, as the realm of high performance computing we need to modify the kernel’s reporting and and server applications. A general purupose configuration interfaces to support multiple in- checkpoint/restart mechanism can allow desk- stances of a namespace instead of having them top users to preserve all of their running appli- tightly coupled.