A Comparison of Virtualization Technologies for HPC∗

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Comparison of Virtualization Technologies for HPC∗

22nd International Conference on Advanced Information Networking and Applications

J. P. Walters, Vipin Chaudhary, and Minsuk Cha
Computer Science and Engineering
University at Buffalo, SUNY
{waltersj, vipin, mcha}@buffalo.edu

Salvatore Guercio Jr. and Steve Gallo
The Center for Computational Research
University at Buffalo, SUNY
{sguercio, smgallo}@ccr.buffalo.edu

Abstract

Virtualization is a common strategy for improving the utilization of existing computing resources, particularly within data centers. However, its use for high performance computing (HPC) applications is currently limited despite its potential for both improving resource utilization as well as providing resource guarantees to its users. This paper systematically evaluates various VMs for computationally intensive HPC applications using various standard benchmarks. Using VMWare Server, Xen, and OpenVZ we examine the suitability of full virtualization, paravirtualization, and operating system-level virtualization in terms of network utilization, SMP performance, file system performance, and MPI scalability. We show that the operating system-level virtualization provided by OpenVZ provides the best overall performance, particularly for MPI scalability.

1 Introduction

The use of virtualization in computing is a well-established idea dating back more than 30 years [1]. Traditionally, its use has meant accepting a sizable performance reduction in exchange for the convenience of the virtual machine. Now, however, the performance penalties have been reduced. Faster processors as well as more efficient virtualization solutions now allow even modest desktop computers to host virtual machines.

Soon large computational clusters will be leveraging the benefits of virtualization in order to enhance the utility of the cluster as well as to ease the burden of administering such large numbers of machines. Indeed, virtual machines allow administrators to more accurately control their resources while simultaneously protecting the host node from malfunctioning user software. This allows administrators to provide "sandbox-like" environments with little performance reduction from the user's perspective.

In light of the benefits of virtualization, we believe that virtual machine/virtual server-based computational clusters will soon gain widespread adoption in the high performance computing (HPC) community. However, to date, a comprehensive examination of the various virtualization strategies and implementations has not been conducted, particularly with an eye towards its use in HPC environments. We fill this literature gap by examining multiple aspects of cluster performance with standard HPC benchmarks. In so doing we make the following two contributions:

1. Single Server Evaluation: In Section 3 we evaluate several virtualization solutions for single node performance and scalability. Rather than repeat existing research, we focus our tests on industry-standard scientific benchmarks, including SMP (symmetric multiprocessor) tests through the use of OpenMP (open multiprocessing) implementations of the NAS Parallel Benchmarks (NPB) [2]. We examine file system and network performance (using IOZone [3] and Netperf [4]) in the absence of MPI (message passing interface) benchmarks in order to gain insight into the potential performance bottlenecks that may impact distributed computations.

2. Cluster Evaluation: We extend our evaluation to the cluster level and benchmark the virtualization solutions using the MPI implementation of NPB. Drawing on the results from our single node tests, we consider the effectiveness of each virtualization strategy by examining the overall performance demonstrated through industry-standard scientific benchmarks.

2 Existing Virtualization Technologies

To accurately characterize the performance of different virtualization technologies we begin with an overview of the major virtualization strategies that are in common use for production computing environments. In general, most virtualization strategies fall into one of four major categories:

1. Full Virtualization: Also sometimes called hardware emulation. In this case an unmodified operating system is run using a hypervisor to trap and safely translate/execute privileged instructions on the fly. Because trapping the privileged instructions can lead to significant performance penalties, novel strategies are used to aggregate multiple instructions and translate them together. Other enhancements, such as binary translation, can further improve performance by reducing the need to translate these instructions in the future [5, 6].

2. Paravirtualization: Like full virtualization, paravirtualization also uses a hypervisor, and also uses the term virtual machine to refer to its virtualized operating systems. However, unlike full virtualization, paravirtualization requires changes to the virtualized operating system. This allows the VM to coordinate with the hypervisor, reducing the use of the privileged instructions that are typically responsible for the major performance penalties in full virtualization. The advantage is that paravirtualized virtual machines typically outperform fully virtualized virtual machines. The disadvantage, however, is the need to modify the paravirtualized virtual machine/operating system to be hypervisor-aware. This has implications for operating systems without available source code.

3. Operating System-level Virtualization: The most intrusive form of virtualization is operating system-level virtualization. Unlike both paravirtualization and full virtualization, operating system-level virtualization does not rely on a hypervisor. Instead, the operating system is modified to securely isolate multiple instances of an operating system within a single host machine. The guest operating system instances are often referred to as virtual private servers (VPS). The advantage of operating system-level virtualization lies mainly in performance: no hypervisor/instruction trapping is necessary, which typically results in system performance of near-native speeds. The primary disadvantage is that all VPS instances share a single kernel. Thus, if the kernel crashes or is compromised, all VPS instances are compromised. However, the advantage of having a single kernel instance is that fewer resources are consumed due to the operating system overhead of multiple kernels.

4. Native Virtualization: Native virtualization leverages hardware support for virtualization within a processor itself to aid in the virtualization effort. It allows multiple unmodified operating systems to run alongside one another, provided that all operating systems are capable of running on the host processor directly. That is, native virtualization does not emulate a processor. This is unlike the full virtualization technique, where it is possible to run an operating system on a fictional processor, though typically with poor performance. In x86_64 series processors, both Intel and AMD support virtualization through the Intel-VT and AMD-V virtualization extensions. x86_64 processors with virtualization support are relatively recent, but are fast becoming widespread.

For the remainder of this paper we use the word "guest" to refer to the virtualized operating system utilized within any of the above virtualization strategies. Therefore a guest can refer to a VPS (OS-level virtualization) or a VM (full virtualization, paravirtualization).

In order to evaluate the viability of the different virtualization technologies, we compare VMWare Server version 1.0.2¹, Xen version 3.0.4.1, and OpenVZ based on kernel version 2.6.16. These choices allow us to compare full virtualization, paravirtualization, and OS-level virtualization for their use in HPC scenarios. We do not include a comparison of native virtualization in our evaluation, as the existing literature has already shown native virtualization to be comparable to VMWare's freely available VMWare Player in software mode [7].

2.1 Overview of Test Virtualization Implementations

Before our evaluation we first provide a brief overview of the three virtualization technologies that we will be testing: VMWare Server [8], Xen [9], and OpenVZ [10].

VMWare is currently the market leader in virtualization technology. We chose to evaluate the free VMWare Server product, which includes support for both full virtualization and native virtualization, as well as limited (2 CPU) virtual SMP support. Unlike VMWare ESX Server, VMWare Server (formerly GSX Server) operates on top of either the Linux or Windows operating systems. The advantage of this approach is a user's ability to use additional hardware that is supported by either Linux or Windows but is not supported by the bare-metal ESX Server operating system (SATA hard disk support is notably missing from ESX Server). The disadvantage is the greater overhead from the base operating system, and consequently the potential for less efficient resource utilization.

1550-445X/08 $25.00 © 2008 IEEE. DOI 10.1109/AINA.2008.45

∗This research was supported in part by NSF IGERT grant 9987598, the Institute for Scientific Computing at Wayne State University, MEDC/Michigan Life Science Corridor, and NYSTAR.

¹We had hoped to test VMWare ESX Server, but hardware incompatibilities prevented us from
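The single-node SMP evaluation described in contribution 1 amounts to running the same OpenMP binary natively and inside each guest, then comparing how it scales with the thread count. A minimal sketch of such a kernel (illustrative only, not NPB code; compile with gcc -fopenmp):

```c
#include <omp.h>
#include <stdio.h>

#define N (1 << 22)

static double a[N], b[N];

/* Toy OpenMP kernel in the spirit of the NPB-OMP tests: run the same
 * binary natively and inside each guest, and compare timings as
 * OMP_NUM_THREADS grows. */
int main(void) {
    double dot = 0.0;

    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        b[i] = i * 0.25;
    }

    double t0 = omp_get_wtime();
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < N; i++)
        dot += a[i] * b[i];
    double t1 = omp_get_wtime();

    printf("threads=%d dot=%e time=%.3fs\n",
           omp_get_max_threads(), dot, t1 - t0);
    return 0;
}
```

Comparing the reported time across native, OpenVZ, Xen, and VMWare runs at a fixed thread count gives a first-order view of the SMP overhead each layer adds.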
Recommended publications
  • Understanding Full Virtualization, Paravirtualization, and Hardware Assist
VMware, Understanding Full Virtualization, Paravirtualization, and Hardware Assist. Contents: Introduction; Overview of x86 Virtualization; CPU Virtualization; The Challenges of x86 Hardware Virtualization; Technique 1 - Full Virtualization using Binary Translation; Technique 2 - OS Assisted Virtualization or Paravirtualization; Technique 3 - Hardware Assisted Virtualization; Memory Virtualization; Device and I/O Virtualization; Summarizing the Current State of x86 Virtualization Techniques; Full Virtualization with Binary Translation is the Most Established Technology Today; Hardware Assist is the Future of Virtualization, but the Real Gains Have
    [Show full text]
  • The Server Virtualization Landscape, Circa 2007
Gordon R. Haff, Illuminata, Inc. Research Note, 27 July 2007: The Server Virtualization Bazaar, Circa 2007. Inspired by both industry hype and legitimate customer excitement, many companies seem to have taken to using the "virtualization" moniker more as the hip phrase of the moment than as something that's supposed to convey actual meaning. Think of it as "eCommerce" or "Internet-enabled" for the Noughts. The din is loud. It doesn't help matters that virtualization, in the broad sense of "remapping physical resources to more useful logical ones," spans a huge swath of technologies, including some that are so baked-in that most people don't even think of them as virtualization any longer. However, one particular group of approaches is capturing an outsized share of the limelight today. That would, of course, be what's commonly referred to as "server virtualization." Although server virtualization is in the minds of many inextricably tied to the name of one company, VMware, there are many companies in this space. Their offerings include not only products that let multiple virtual machines (VMs) coexist on a single physical server, but also related approaches such as operating system (OS) virtualization or containers. In the pages that follow, I offer a guide to today's server virtualization bazaar, which at first glance can perhaps seem just a dreadfully confusing jumble.
    [Show full text]
  • A Virtual Software Appliance for LHC Applications
CernVM - a virtual software appliance for LHC applications. C. Aguado-Sanchez, P. Buncic, L. Franco, S. Klemer, P. Mato (CERN, Geneva, Switzerland). Presented by Predrag Buncic (CERN/PH-SFT) at ACAT 2008, Erice, 6/11/2008.

Talk outline: introduction; the CernVM project (building blocks, scalability and performance, user interface and API, release status); future plans and directions; conclusions.

Introduction: computing has moved through the "frequency scaling era," from mainframes through workstation and PC clusters to clusters of clusters (GRID), and now to single-, multi-, and many-core processors (IBM VM/360; CERNVM, 1988).

Recent trends (hardware): multi- and many-core CPUs, where CPU cores with their own L1 caches share a bus interface and L2 caches. Software benefits from multicore architectures where code can be executed in parallel; under most common operating systems this requires code to execute in separate threads or processes. Unfortunately, HEP/LHC applications were developed during a period when it looked like any performance issue could be solved by simply waiting two more years. With support for hardware-assisted virtualization, a VMM can now efficiently virtualize the entire x86 instruction set (Intel VT and AMD-V implementations; VMware, Xen 3.x including derivatives like Virtual Iron, Linux KVM, and Microsoft Hyper-V). A running virtual machine will benefit from the adoption of multiple-core architectures, since each virtual machine runs independently of others and can be executed in
    [Show full text]
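The excerpt's point that multicore gains require work split across threads or processes is easy to make concrete. A minimal sketch (my illustration, not from the talk) that splits a sum across POSIX threads; build with gcc -pthread:

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 100000000L

/* Each thread sums a disjoint stride of the series, so the slices can
 * run on separate cores; a single-threaded loop could not. */
static double partial[NTHREADS];

static void *worker(void *arg) {
    long id = (long)arg;
    double sum = 0.0;
    for (long i = id; i < N; i += NTHREADS)
        sum += 1.0 / (double)(i + 1);
    partial[id] = sum;
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("harmonic sum of %ld terms: %f\n", N, total);
    return 0;
}
```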
  • Linux Virtualization Update
Linux Virtualization Update. Chris Wright <[email protected]>. Japan Linux Symposium, November 2007. Topics: virtualization mini-summit, paravirtualization, full virtualization, hardware changes, libvirt, Xen.

Virtualization mini-summit, June 25-27, 2007, just before OLS in Ottawa; 18 attendees (Xen, VMware, KVM, lguest, UML, LinuxOnLinux; x86, ia64, PPC and S390). Focused primarily on Linux as guest and areas of cooperation (paravirt_ops and virtio). Common interfaces: not the best group to design or discuss management interfaces; defer to libvirt, CIM, etc.; CPUID 0x4000_00xx for hypervisor feature detection; can we get to a common ABI for paravirt hybrid guests?

paravirt_ops: make use of existing abstractions wherever possible (clocksource, clockevents or irqchip); could use a common lib/x86_emulate.c; open question: performance benefit of shadow vs. direct paging?

Distro issues: lack of feature parity between bare metal and Xen is difficult for distros; single binary kernel image; merge upstream.

Performance: NUMA awareness lacking in Xen, which is difficult for Altix; static NUMA representation doesn't map well to a dynamic virt environment; cooperative memory management via guest memory hints.

Hardware: x86 and ia64 hardware virtualization roadmap; ppc virtualization is gaining in the embedded market, with realtime requirements; S390 "has an instruction for that".

Virtio: separate driver from transport; makes the driver small, it looks like a Linux driver and is reusable; hypervisor specific
    [Show full text]
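The "CPUID 0x4000_00xx" item above refers to the conventional hypervisor leaf on x86. A minimal detection sketch for GCC/Clang on an x86 target (my illustration; hypervisors are not obliged to set the bit or implement the leaf, so treat the result as a hint):

```c
#include <stdio.h>
#include <string.h>
#include <cpuid.h> /* GCC/Clang x86 CPUID helpers */

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: ECX bit 31 is the "hypervisor present" bit. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;
    if (!(ecx & (1u << 31))) {
        puts("no hypervisor detected (or it hides the bit)");
        return 0;
    }

    /* Leaf 0x40000000: 12-byte vendor signature in EBX, ECX, EDX. */
    char sig[13] = {0};
    __cpuid(0x40000000, eax, ebx, ecx, edx);
    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    printf("hypervisor signature: %s\n", sig); /* e.g. "KVMKVMKVM" */
    return 0;
}
```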
  • Ubuntu Server Guide Basic Installation Preparing to Install
Ubuntu Server Guide. Welcome to the Ubuntu Server Guide! This site includes information on using Ubuntu Server for the latest LTS release, Ubuntu 20.04 LTS (Focal Fossa). For an offline version as well as versions for previous releases, see below.

Improving the Documentation: If you find any errors or have suggestions for improvements to pages, please use the link at the bottom of each topic titled "Help improve this document in the forum." This link will take you to the Server Discourse forum for the specific page you are viewing. There you can share your comments or let us know about bugs with any page.

PDFs and Previous Releases: Below are links to the previous Ubuntu Server release server guides as well as an offline copy of the current version of this site: Ubuntu 20.04 LTS (Focal Fossa): PDF. Ubuntu 18.04 LTS (Bionic Beaver): Web and PDF. Ubuntu 16.04 LTS (Xenial Xerus): Web and PDF.

Support: There are a couple of different ways that the Ubuntu Server edition is supported: commercial support and community support. The main commercial support (and development funding) is available from Canonical, Ltd. They supply reasonably priced support contracts on a per-desktop or per-server basis. For more information see the Ubuntu Advantage page. Community support is also provided by dedicated individuals and companies that wish to make Ubuntu the best distribution possible. Support is provided through multiple mailing lists, IRC channels, forums, blogs, wikis, etc. The large amount of information available can be overwhelming, but a good search engine query can usually provide an answer to your questions.
    [Show full text]
  • Virtualization Technologies Overview Course: CS 490 by Mendel
Virtualization technologies overview. Course: CS 490, by Mendel Rosenblum.

[Flattened comparison table. Columns: Name; Can boot an OS on another disk partition as guest; USB; GUI; Live memory hot-allocation; 3D acceleration; Snapshots of a running system; Live migration. Systems compared: Bochs, Containers, Cooperative Linux, Denali, DOSBox, DOSEMU, FreeVPS, GXemul, Hercules, Hyper-V, iCore Virtual Accounts, Imperas OVP Tools (Eclipse), Integrity Virtual Machines (HP-UX guests only; Linux and Windows 2K3 in the near future), Jail, KVM, Linux-VServer, LynxSecure, Mac-on-Linux, Mac-on-Mac, OpenVZ, Oracle VM (managed by Oracle VM Manager), OVPsim (Eclipse), Padded Cell for x86 and for PowerPC (Green Hills Software), Parallels Desktop for Mac, Parallels Workstation, PearPC, POWER Hypervisor (PHYP), QEMU, and QEMU with the kqemu module. Recoverable cell notes: Cooperative Linux's GUI is supported through X11 over networking; DOSBox has partial USB support (the host OS can provide DOSBox with USB devices); KVM and QEMU support 3D acceleration with VMGL [6]; OpenVZ provides a GUI using Xvnc and/or XDMCP; Parallels Desktop for Mac can boot another partition if Boot Camp is installed and accelerates DirectX 9 and OpenGL 2.0; POWER Hypervisor live migration on POWER6-based systems requires PowerVM Enterprise licensing; the remaining per-cell values are not legible in this excerpt.]
    [Show full text]
  • Unprivileged GPU Containers on a LXD Cluster
Unprivileged GPU containers on a LXD cluster: GPU-enabled system containers at scale. Stéphane Graber (LXD project leader, @stgraber, https://stgraber.org, [email protected]) and Christian Brauner (LXD maintainer, @brau_ner, https://brauner.io, [email protected]).

What are system containers? They are the oldest type of containers: BSD jails, Linux vServer, Solaris Zones, OpenVZ, LXC and LXD. They behave like standalone systems, with no need for specialized software or custom images, and they have no virtualization overhead: they are containers after all.

[Architecture diagram: clients such as nova-lxd, the command line tool, or your own client/script talk to the LXD REST API; on each host (Host A, Host B, Host C, ...) LXD drives LXC on top of the Linux kernel.]

What LXD is: Simple, with a clean command line interface, a simple REST API and clear terminology. Fast: image based, with no virtualization and direct hardware access. Secure: safe by default, combining all available kernel security features. Scalable: from a single container on a laptop to tens of thousands of containers in a cluster.

What LXD isn't: Another virtualization technology: LXD offers an experience very similar to a virtual machine, but it's still containers, with no virtualization overhead and real hardware. A fork of LXC: LXD uses LXC's API to manage the containers behind the scenes. Another application container manager: LXD only cares about full system containers, and you can run whatever you want inside a LXD container, including Docker.

LXD main components: certificates; cluster; containers (snapshots, backups); events; images (aliases); networks; operations; projects; storage pools and storage volumes (snapshots).

LXD clustering: built-in clustering support with no external dependencies; all LXD 3.0 or higher installations can be instantly turned into a cluster.
    [Show full text]
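As a concrete taste of the REST API mentioned above, a client can query a local LXD daemon over its unix socket. A minimal libcurl sketch (my illustration, not from the talk; the socket path varies by install, e.g. snap-based installs use /var/snap/lxd/common/lxd/unix.socket); build with gcc -lcurl:

```c
#include <curl/curl.h>
#include <stdio.h>

/* Query a local LXD daemon's REST API over its unix socket.
 * The /1.0 endpoint returns server status; /1.0/containers lists
 * containers. Adjust the socket path to your installation. */
int main(void) {
    CURL *h = curl_easy_init();
    if (!h)
        return 1;

    curl_easy_setopt(h, CURLOPT_UNIX_SOCKET_PATH,
                     "/var/lib/lxd/unix.socket");
    /* The host part of the URL is ignored when a unix socket is
     * set; only the path selects the API endpoint. */
    curl_easy_setopt(h, CURLOPT_URL, "http://lxd/1.0");

    CURLcode rc = curl_easy_perform(h); /* JSON reply goes to stdout */
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(h);
    return 0;
}
```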
  • Separating Protection and Management in Cloud Infrastructures
SEPARATING PROTECTION AND MANAGEMENT IN CLOUD INFRASTRUCTURES. A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, by Zhiming Shen, December 2017. © 2017 Zhiming Shen. All rights reserved.

Cloud computing infrastructures serving mutually untrusted users provide security isolation to protect user computation and resources. Additionally, clouds should also support flexibility and efficiency, so that users can customize resource management policies and optimize performance and resource utilization. However, flexibility and efficiency are typically limited due to security requirements. This dissertation investigates the question of how to offer flexibility and efficiency as well as strong security in cloud infrastructures. Specifically, this dissertation addresses two important platforms in cloud infrastructures: the containers and the Infrastructure as a Service (IaaS) platforms. The containers platform supports efficient container provisioning and executing, but does not provide sufficient security and flexibility. Different containers share an operating system kernel which has a large attack surface, and kernel customization is generally not allowed. The IaaS platform supports secure sharing of cloud resources among mutually untrusted users, but does not provide sufficient flexibility and efficiency. Many powerful management primitives enabled by the underlying virtualization platform are hidden from users, such as live virtual machine migration and consolidation. The main contribution of this dissertation is the proposal of an approach inspired by the exokernel architecture that can be generalized to any multi-tenant system to improve security, flexibility, and efficiency.
    [Show full text]
  • Virtualization and Cloud Computing Virtualization Introduction Day 01, Session 1.2 Virtualization
Virtualization and Cloud Computing: Virtualization Introduction. Day 01, Session 1.2.

Virtualization is technology that lets you create useful IT services using resources that are traditionally bound to hardware. It allows you to use a physical machine's full capacity by distributing its capabilities among many users or environments. [Figures: a non-virtualized (legacy) environment versus a virtualized environment.]

History of virtualization development: 1965, IBM M44/44X paging system; 1965, IBM System/360-67 virtual memory hardware; 1967, IBM CP-40 (January) and CP-67 (April) time-sharing; 1972, IBM VM/370 (run VM under VM); 1997, Connectix first version of Virtual PC; 1998, VMWare U.S. Patent 6,397,242; 1999, VMware Virtual Platform for the Intel IA-32 architecture; 2000, IBM z/VM; 2001, Connectix Virtual PC for Windows; 2003, Microsoft acquired Connectix; 2003, EMC acquired VMware; 2003, VERITAS acquired Ejascent; 2005, HP Integrity Virtual Machines; 2005, Intel VT; 2005, XEN; 2006, AMD VT; 2006, VMWare Server; 2006, Virtual PC 2006; 2006, HP IVM Version 2.0; 2006, Virtual Iron 3.1; 2007, InnoTek VirtualBox; 2007, KVM in the Linux kernel; 2007, XEN in the Linux kernel.

Types of virtualization: Data virtualization: data that's spread all over can be consolidated into a single source, letting companies treat data as a dynamic supply. Desktop virtualization: allows a central administrator (or automated administration tool) to deploy simulated desktop environments to hundreds of physical machines at once. Server virtualization: virtualizing a server lets it do more of those specific functions and involves partitioning it so that its components can be used to serve multiple functions. Operating system virtualization: happens at the kernel, the central task manager of operating systems.
    [Show full text]
  • Openvz Forum Each Container
Subject: I am thinking of moving to LXD. Posted by votsalo on Wed, 17 Aug 2016 19:13:50 GMT.

I am thinking of moving my dedicated server to LXD (a variant of LXC). One reason is that I can run LXD on several different environments where I can run Ubuntu:
* On a dedicated server.
* On an Amazon EC2 instance.
* On my home machine, running Ubuntu desktop.
* On a cheap Ubuntu 16.04 OpenStack KVM VPS.
I find the VPS option very useful, however tricky because of the tight disk space. I can have a few containers on my VPS. I can move containers between the VPS and another LXD server. I can run a container OS that I need on my VPS that is not offered as a choice by my VPS provider, but is available as an LXD template. And perhaps I can replace the dedicated server with a few cheaper VPSs. I have tried doing all these with OpenVZ in the past. I couldn't get it to run anywhere, except on the dedicated server or on a dedicated home machine (or dual boot). Now it seems that I cannot run OpenVZ 7.0.0 on any of the above infrastructure. I have installed OpenVZ 7.0.0 on my home machine, as a dual boot, but I'm not using it, since I use my desktop OS and I cannot use both at the same time. In the past I tried installing OpenVZ on several desktop distributions, but I couldn't find one that worked.
    [Show full text]
  • Security of OS-Level Virtualization Technologies: Technical Report
Security of OS-level virtualization technologies. Elena Reshetova (Intel OTC, Finland), Janne Karhunen (Ericsson, Finland), Thomas Nyman (University of Helsinki, Finland), N. Asokan (Aalto University and University of Helsinki, Finland).

Abstract: The need for flexible, low-overhead virtualization is evident on many fronts ranging from high-density cloud servers to mobile devices. During the past decade OS-level virtualization has emerged as a new, efficient approach for virtualization, with implementations in multiple different Unix-based systems. Despite its popularity, there has been no systematic study of OS-level virtualization from the point of view of security. In this report, we conduct a comparative study of several OS-level virtualization systems, discuss their security and identify some gaps in current solutions.

1 Introduction: During the past couple of decades the use of different virtualization technologies has been on a steady rise. Since IBM CP-40 [19], the first virtual machine prototype in 1966, many different types of virtualization and their uses have been actively explored both by the research community and by the industry. A relatively recent approach, which is becoming increasingly popular due to its light-weight nature, is Operating System-Level Virtualization, where a number of distinct user space instances, often referred to as containers, are run on top of a shared operating system kernel. A fundamental difference between OS-level virtualization and more established competitors, such as the Xen hypervisor [24], VMWare [48] and the Linux Kernel Virtual Machine (KVM) [29], is that in OS-level virtualization the virtualized artifacts are global kernel resources, as opposed to hardware.
    [Show full text]
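On Linux, the "distinct user space instances on a shared kernel" described above are assembled from kernel namespaces, the mechanism underlying the systems the report compares. A minimal sketch (my illustration, not from the report; requires root or CAP_SYS_ADMIN):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

/* Runs in new UTS and PID namespaces: its hostname change is
 * invisible to the host, and it sees itself as PID 1. */
static int child(void *arg) {
    (void)arg;
    sethostname("container", strlen("container"));
    char buf[64];
    gethostname(buf, sizeof buf);
    printf("child: hostname=%s pid=%d\n", buf, (int)getpid());
    return 0;
}

int main(void) {
    pid_t pid = clone(child, child_stack + sizeof child_stack,
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);

    char buf[64];
    gethostname(buf, sizeof buf);
    printf("host:  hostname=%s (unchanged)\n", buf);
    return 0;
}
```

Real container runtimes combine more namespace types (mount, network, IPC, user) with cgroups and security modules; the sketch shows only the isolation primitive itself.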
  • Container Technologies
Zagreb, NKOSL, FER. Container technologies. Marko Golec, Juraj Vijtiuk, Jakov Petrina. April 11, 2020.

About us: embedded Linux development and integration; delivering solutions based on Linux, OpenWrt and Yocto, focused on software in the network edge and CPEs; continuous participation in open source projects; www.sartura.hr.

Introduction to GNU/Linux: Linux = operating system kernel. GNU/Linux distribution = kernel + userspace (Ubuntu, Arch Linux, Gentoo, Debian, OpenWrt, Mint, ...). Userspace = set of libraries + system software.

Linux kernel: operating systems have two spaces of operation: kernel space, a protected memory space with full access to the device's hardware, and userspace, in which all other applications run. Userspace has limited access to hardware resources, reaches them via the kernel, and invokes kernel services with system calls.

[Table 1, Layers within Linux. User mode: user applications (e.g. bash, LibreOffice, GIMP, Blender, Mozilla Firefox); low-level system components: system daemons (systemd, runit, logind, networkd, PulseAudio), windowing systems (X11, Wayland, SurfaceFlinger on Android), other libraries (GTK+, Qt, EFL, SDL, SFML, FLTK, GNUstep), graphics (Mesa, AMD Catalyst); the C standard library (glibc, musl, uClibc, bionic), up to 2000 subroutines depending on the C library (open(), exec(), sbrk(), socket(), fopen(), calloc(), ...), sitting on about 380 system calls (stat, splice, dup, read, open, ioctl, write, mmap, close, exit, etc.). Kernel mode: process scheduling, memory management, IPC, virtual files and network subsystems; other components: ALSA, DRI, evdev, LVM, device mapper, Linux Network Scheduler, Netfilter; Linux Security Modules: SELinux, TOMOYO, AppArmor, Smack. Hardware: CPU, main memory, data storage devices, etc.]

Virtualization concepts: two virtualization concepts: hardware virtualization (full/para-virtualization), the emulation of complete hardware (virtual machines, VMs), as in VirtualBox, QEMU, etc.
    [Show full text]
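The excerpt's point that userspace reaches kernel services only through system calls can be shown in a few lines of C (a minimal sketch; the libc wrappers and the raw syscall(2) interface enter the same kernel paths):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* The same kernel services reached two ways: through the usual
 * C library wrappers, and through a raw system call number. */
int main(void) {
    /* libc wrappers: open()/read()/close() wrap the syscalls of the
     * same names, and fopen()/fread() sit on top of them in turn. */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd >= 0) {
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("hostname: %s", buf); }
        close(fd);
    }

    /* Raw syscall: bypasses the wrapper entirely. */
    long pid = syscall(SYS_getpid);
    printf("pid via raw syscall: %ld\n", pid);
    return 0;
}
```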