Linux* Containers Streamline, Complement Hypervisor


White Paper
Communications and Storage Infrastructure
Container-Based Virtualization

Linux* Containers Streamline Virtualization and Complement Hypervisor-Based Virtual Machines

As strategic approaches such as software-defined networking (SDN) and network function virtualization (NFV) become more mainstream, combinations of OS-level virtualization and hypervisor-based virtual machines (VMs) often provide dramatic cost, agility, and performance benefits throughout the data center.

As virtualization using Linux* containers emerges as a viable option for mainstream computing environments, Intel is investing in enablement through hardware and software technologies that include the following:

• The Intel® Data Plane Development Kit (Intel® DPDK)
• Intel® Solid-State Drive Data Center Family for PCI Express*
• Intel® Virtualization Technology

1 Executive Summary

By decoupling physical computing resources from the software that executes on them, virtualization has dramatically improved resource utilization, enabled server consolidation, and allowed workloads to migrate among physical hosts for usages such as load balancing and failover. The following key techniques for resource sharing and partitioning have been under continual development and refinement since work began on them in the 1960s:

• Hypervisor-based virtualization has been the main focus within the Intel® architecture ecosystem. Increased server headroom and hardware assists from Intel® Virtualization Technology (Intel® VT)¹ have helped broaden its adoption and the scope of workloads for which it is effective on open-standards hardware.

• OS-level virtualization has been developed largely by makers of proprietary operating systems and system architectures. Linux* containers, introduced in 2007, extend the capabilities for highly elastic or latency-sensitive workloads on open-standards hardware.

Both approaches have distinct advantages. For example, whereas each hypervisor-based virtual machine (VM) runs an entire copy of the OS, multiple containers share one kernel instance, significantly reducing overhead. And while hypervisor-based VMs can be provisioned far more quickly than physical servers (tens of seconds as opposed to several weeks), even that lag is unacceptable for workloads that range from real-time data analytics to control systems for autonomous vehicles; containers, which launch without booting a guest OS, help close that gap. On the other hand, hypervisor-based VMs allow for multi-OS environments and superior workload isolation (even in public clouds). Enterprises are also finding that new virtualization usage models that draw on both containers and hypervisors help them get more value out of strategic approaches such as public cloud, software-defined networking (SDN), and network function virtualization (NFV).

NOTE: In the context of this paper, the term “container” is a generalized reference for any virtual partition other than a hypervisor-based VM (e.g., a chroot jail, FreeBSD jail, Solaris* container/zone, or Linux container).
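To make the kernel-sharing point concrete, the following minimal C sketch (an illustration added to this excerpt, not part of the original paper) uses the Linux clone() system call to place a process in new PID, UTS, and mount namespaces, the kernel primitives on which Linux containers are built. No guest OS boots, which is why container overhead and startup cost stay so low. The function name child_main and the hostname container0 are arbitrary choices for the example.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Entry point for the "contained" process: it runs with its own PID,
 * hostname (UTS), and mount namespaces, yet shares the host's one
 * running kernel -- no guest OS is booted. */
static int child_main(void *arg)
{
    (void)arg;
    /* Visible only inside this UTS namespace; the host's name is untouched. */
    sethostname("container0", strlen("container0"));
    printf("inside: pid=%ld (PID 1 of the new namespace)\n", (long)getpid());
    return 0;
}

int main(void)
{
    static char stack[1024 * 1024];   /* child stack; clone() takes its top
                                         on downward-growing architectures */
    /* New PID + UTS + mount namespaces are the essence of OS-level
     * virtualization; this requires root (CAP_SYS_ADMIN). A real container
     * runtime adds cgroups, a private root filesystem, and a network
     * namespace on top of exactly these primitives. */
    pid_t pid = clone(child_main, stack + sizeof(stack),
                      CLONE_NEWPID | CLONE_NEWUTS | CLONE_NEWNS | SIGCHLD,
                      NULL);
    if (pid < 0) {
        perror("clone");
        return EXIT_FAILURE;
    }
    waitpid(pid, NULL, 0);
    return EXIT_SUCCESS;
}
```

Run as root, the child sees itself as PID 1 with its own hostname while executing on the host’s single kernel; by contrast, a hypervisor-based VM must boot and maintain an entire guest OS to reach the same point.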
This paper introduces technical executives, architects, and engineers to the potential value of Linux containers, particularly in conjunction with hypervisor-based virtualization; it includes the following sections:

• The Historical Context for Containers and Hypervisors traces the development of these technologies from their common origin as an effort to share and partition system resources among workloads.

• Virtualization Using Linux Containers introduces the support for container-based virtualization in Linux, illustrated by representative usage models.

• Enabling Technologies from Intel for Container-Based Virtualization discusses Intel’s hardware and software technologies that help improve performance and data protection when using Linux containers.

• Containers in Real-World Implementations examines a few examples of early adopters that are using containers as part of their virtualization infrastructures today.

Table of Contents

1 Executive Summary
2 The Historical Context for Containers and Hypervisors
   2.1 Resource-Partitioning Techniques Not Based on Hypervisors
   2.2 Emergence of Software-Only, Hypervisor-Based Virtualization for Intel® Architecture
   2.3 Hardware-Assisted, Hypervisor-Based Virtualization with Intel® Virtualization Technology
3 Virtualization Using Linux Containers
   3.1 Usage Model: Platform as a Service on Software-Defined Infrastructure
   3.2 Usage Model: Combined Hypervisor-Based VMs and Containers in the Public Cloud
4 Enabling Technologies from Intel for Container-Based Virtualization
   4.1 Intel® Data Plane Development Kit
   4.2 Intel® Solid-State Drive Data Center Family for PCI Express*
   4.3 Enhanced Workload Isolation with Intel® Virtualization Technology
5 Containers in Real-World Implementations
   5.1 Google: Combinations and Layers of Complementary Containers and Hypervisors
   5.2 Heroku and Salesforce.com: Container-Based CRM, Worldwide
6 Conclusion

2 The Historical Context for Containers and Hypervisors

Hypervisors and containers have a joint ancestry, represented in Figure 1, driven by efforts to decrease the prescriptive coupling between software workloads and the hardware they run on. From the beginning, these advances enhanced the flexibility of computing environments, enabling a single system to handle a broader range of tasks. As a result, two related aspects of the interactions between hardware and software have evolved:

• Resource sharing among workloads allows greater efficiency compared to the use of dedicated, single-purpose equipment.

• Resource partitioning ensures that the system requirements of each workload are met and prevents unwanted interactions among workloads (isolating sensitive data, for example).

[Figure 1. Early development of approaches to resource sharing and partitioning. The figure depicts four stages: a monolithic compute environment (example: GE-635/Multics, circa 1959); a supervisor (GE-635/Multics, circa 1962); a time-sharing supervisor serving multiple user environments (IBM 360, circa 1964); and a control program, or hypervisor, hosting multiple supervisors (IBM 370, circa 1972).]

The first stage depicted in Figure 1 is that of the monolithic compute environment, where the hardware and software are logically conjoined in a single-purpose architecture. The ability to make significant changes to either hardware or software, short of replacing the entire system, is severely limited or even nonexistent. The second stage includes the supervisor (progenitor of the kernel), an intermediary between user space and system resources that allows hardware and software to be changed independently of each other. That de-coupling is advanced further in the third stage, where the supervisor mediates resources among multiple users, allowing time-sharing among multiple, simultaneous jobs.

The fourth stage adds a control program (CP, also known as the “hypervisor”) that presents an independent view of the complete environment to each application; applications execute in isolation, unaware of each other. Importantly, the various hypervisor-enabled virtual environments can run different operating systems simultaneously on the same hardware, which would be impossible under most approaches to virtualization that are not based on hypervisors.

IBM introduced its first hypervisor for production (but unsupported) use in 1967, to run on System/360* mainframes, followed by a fully supported version for System/370* equipment in 1972. The company is generally recognized as having held the industry’s most prominent role in the development of hypervisor-based virtualization from that point through the rest of the twentieth century.

2.1 Resource-Partitioning Techniques Not Based on Hypervisors

Although hypervisors enrich isolation among virtual environments running on a single physical host, they also add complexity and consume resources. Alternate approaches to partitioning resources were developed as interest in Unix* and Unix-like operating systems grew in the late 1970s and early 1980s, driven by organizations that included Bell Laboratories, AT&T, Digital Equipment Corporation, Sun Microsystems, and academic institutions. These technologies (and others) include the following:

• Chroot jails change the apparent root directory of a process so that it cannot name files outside a designated subtree. Because chroot was never designed as a security boundary, it is vulnerable to intentional efforts by users or programs to “break out” of their jails and gain unauthorized access to resources (see the code sketch following this list).

• FreeBSD* jails are similar in concept to chroot jails, but with a greater emphasis on security. This mechanism was introduced as a feature of FreeBSD 4.0 in 2000. FreeBSD jail definitions can explicitly restrict access outside the sandboxed environment by entities such as files, processes, and user accounts (including accounts created by the jail definition specifically for that purpose). While this approach significantly enhances control over resources compared to chroot, it is likewise incapable of supporting full virtualization, because FreeBSD jails must share a single OS kernel.

• Solaris containers (Solaris zones) build on the virtualization capabilities of chroot and FreeBSD jails. Sun Microsystems introduced this feature under the name “Solaris containers” as part of Solaris 10 in 2005 […]
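To make the chroot “break out” discussion concrete, here is a minimal C sketch (an illustration added to this excerpt, not part of the original paper) that sets up a classic chroot jail and highlights the two steps whose omission enables well-known escapes. The jail path /var/jail and the unprivileged uid/gid 65534 are assumptions for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Confine this process to a prepared subtree at /var/jail (hypothetical
 * path; the tree must contain its own /bin/sh for the exec below). */
int main(void)
{
    if (chroot("/var/jail") != 0) {
        perror("chroot");               /* requires root privileges */
        return EXIT_FAILURE;
    }
    /* Classic escape #1: skipping this chdir() leaves the working
     * directory outside the new root, so "../.." walks straight out. */
    if (chdir("/") != 0) {
        perror("chdir");
        return EXIT_FAILURE;
    }
    /* Classic escape #2: a process that stays root can chroot() again or
     * create device nodes, so drop to an unprivileged uid/gid (65534 is
     * "nobody" on many systems -- an assumption in this sketch). */
    if (setgid(65534) != 0 || setuid(65534) != 0) {
        perror("drop privileges");
        return EXIT_FAILURE;
    }
    execl("/bin/sh", "sh", (char *)NULL);   /* path resolves inside the jail */
    perror("execl");
    return EXIT_FAILURE;
}
```

Mechanisms such as FreeBSD’s jail(2) were created largely to enforce these restrictions, and more, in the kernel itself rather than leaving them to application discipline.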
Recommended publications
  • iocage Documentation Release 1.2
  • Effective Virtual CPU Configuration with QEMU and libvirt
  • Oracle VM VirtualBox Container Domains for SPARC or x86
  • FreeBSD's jail(2) Facility
  • Sandboxing 2: Change Root with chroot()
  • A Virtual Machine Environment for Real Time Systems Laboratories
  • Systematic Evaluation of Sandboxed Software Deployment for Real-Time Software on the Example of a Self-Driving Heavy Vehicle
  • Understanding Full Virtualization, Paravirtualization, and Hardware Assist
  • An Empirical Study of Virtualization Versus Lightweight Containers
  • Introduction to Virtualization
  • KVM Based Virtualization and Remote Management (Srinath Reddy Pasunuru, St. Cloud State University)
  • Evaluation of Performance of Solaris Trusted Extensions Using Containers Technology (Perspectives of Innovations, Economics & Business, Vol. 8(2), pp. 63-69)