Virtual Containers: Asset Management Best Practices and Licensing Considerations


Virtual containers have seen tremendous adoption and growth within all industries. However, in terms of IT asset management, containers are not being managed and are an unknown area of risk for many of our clients. Because it is a newer technology, there is very little information about managing containers and how to address the emerging SAM & ITAM challenges they bring. Due to this lack of public information, Anglepoint has published this whitepaper on navigating the world of containers, with an emphasis on asset management and licensing. We will cover everything from the history of containers, to what containers are, the benefits of containers, asset management best practices, and some publisher-specific licensing considerations.

A BRIEF HISTORY OF VIRTUAL CONTAINERS

The first proper containers came from the Linux world as LXC (LinuX Containers) in 2008. However, it wasn't until 2013 that containers entered the IT public consciousness, when Docker came onto the scene with enterprise usage in mind. Even then, though, it was more of an enthusiast's technology. In 2015, Google released and open-sourced Kubernetes, which manages and 'orchestrates' containers. However, it wasn't until 2017 that Docker and Kubernetes had matured enough to be considered for production use within corporate environments. 2017 also saw VMware, Microsoft, and Amazon beginning to support and offer solutions for Kubernetes and Docker on their top-tier cloud infrastructure.

WHAT IS A CONTAINER?

Often, people conflate the term 'container' with multiple technologies that make up the container ecosystem. Let's look at what a modern container is at the most fundamental level.

Diagram 1

On the left side of diagram 1 is an operating system which has several different processes (applications) that are installed and running. These processes are all installed in the same environment, or namespace if you are talking about Linux, and can interact with each other. A container is simply the isolation of a single process, wrapping it up in – just as it sounds – a container. This container is isolated from the host operating system and can only "see" and interact with what is explicitly allowed. See the example below to illustrate our point.

Example: Let's start with a traditional model in which we are installing applications on the OS. In this example, we've installed NGINX Web Server (a process), but there are also several dependencies installed that support the main application, NGINX Web Server. Let's say that we also want to install NodeJS, which requires some of the same dependencies as NGINX Web Server, but perhaps the version of NodeJS requires a different version of those dependencies. Using the traditional model, this would require a complicated configuration to ensure that each of our applications is pointing to the correct versions of the dependencies. It would also be important to ensure that once an application or dependency was updated, the configuration changes were maintained.

Now if we were to use containers in this scenario, it would become easier to manage. The process (NGINX Web Server in this example) would be bundled in a container with the dependencies that it relies on. When we want to add another process (NodeJS), it resides in its own container along with its dependencies. This way, we don't have to worry about version conflicts, as everything is isolated.

Using containers is especially useful when developing applications. Someone might be developing on a laptop, testing on a server, and then deploying to the cloud or a co-worker's desktop. All these environments are likely different, with different versions of a dependency installed, or perhaps the hardware configuration is slightly different, which would create additional troubleshooting efforts. Containers, however, obfuscate the hardware layer. They are platform agnostic. You could run the container on a laptop, a server, or the cloud, and it's going to run the same. Using the traditional model, migrating an application from on-premises to the cloud or across cloud platforms is an onerous process. With containers, this process is streamlined and overall greatly simplified.

So we've gone over containers themselves, but there are other terms and technologies in the container ecosystem that we need to be familiar with. Let's take a look at those.

CONTAINER IMAGE

Container images are what most people are referring to when they talk about a container. A container image is the actual static container file that contains the process and its dependencies. A container image becomes a container when running.

Container images themselves are immutable; all changes made to a container image become new 'layers' of the image, because changes are made through a git-like push/pull mechanism. One benefit of image 'layers' is that they create a natural audit trail when used in conjunction with a container registry (defined below). All changes are visible over time, and we can see the details of each change, including by whom each change was made. These 'layers' are also hierarchical in nature, and container images can have parent/child relationships. E.g.: in our previous example a container was running NGINX, but let's say that we also needed a container running NGINX and PHP. A child container could be created that references and builds off our main NGINX container.

Example: Let's imagine that we discovered a vulnerability in one of the dependencies we had deployed. In the traditional virtual machine (VM) world, we would have to patch each of our VMs that had this vulnerability. Hopefully we would have an automated way of doing this, but even still, verifying that the patches were successful and the applications unaffected would be extremely time-consuming. With containers, we would only need to update the container image, and all containers running from that image would be updated. Additionally, any child container images referencing the now-updated parent image would be updated as well.

CONTAINER MANIFEST

Part of the container image is the manifest, better known as a 'Dockerfile' if using Docker's terminology. The container manifest is a structured text file that contains the configuration settings and instructions needed to build the container image.

CONTAINER REGISTRY

The container registry is a repository of container images. Public registries exist, such as Docker Hub, as do private registries, which organizations can run to host their own internally developed images or clone public images.

NODES & CLUSTERS

A node is the hardware supporting the container environment. This could be a server, a VM, or a cloud instance. In some cases, a group of nodes will work together to support a container environment – this is referred to as a cluster.

PODS & ORCHESTRATORS

A pod is one or more containers which are grouped and managed by an orchestrator. An orchestrator is where rules and operations for scaling, failover, and running container workloads are created. So, while Docker offers tools and solutions for container creation and deployment, Kubernetes is an example of an orchestrator.

VIRTUAL CONTAINERS VS. VIRTUAL MACHINES

Another way to understand containers is to compare them with virtual machines, as people are more familiar with VMs as a technology.

Diagram 2

Referring to diagram 2, we see that both VMs and containers start with infrastructure, which could be a physical host or a cloud platform like AWS or Azure. The host operating system comes next; this would be something like Windows Server or ESX. After the host OS comes the hypervisor technology for VMs and the container runtime (e.g. Docker) for containers.

Now, on the VM side, we see that each individual VM has a full OS installed – the applications and dependencies are also installed on the VMs. Additionally, the hypervisor is virtualizing the hardware the VMs are running on, which requires compute resources.

Conversely, with containers, we don't need to install an entire OS. All that runs in the container is the process and its dependencies; this means that, from a storage standpoint, the container is only a fraction of the size of the VM. The container is also much less resource-intensive to run from a computational power standpoint.

Another significant difference between virtual machines and containers is that a VM typically runs on a start-and-stop schedule, whereas the lifecycle of a container traditionally mirrors the lifecycle of the process it's running. In other words, when the process starts, the container starts; when the process ends, the container stops running. Let's illustrate this: Google is one of the largest contributors to the container platform. When we go to Google Search or YouTube, these are processes running in containers. Starting YouTube, for example, creates a new container, and when we exit YouTube, that kills the container. In fact, Google starts and stops over 2 billion containers each week, and it is able to manage demand dynamically using container orchestration.

The cloud is touted for its scalability and elasticity; however, dynamically scaling traditional Infrastructure as a Service (IaaS) workloads (be it right-sizing the instances or deploying them based on need) is much easier said than done. Containers, on the other hand, were built with this functionality in mind, and orchestration makes scaling and meeting demand simple. Often, when talking about containers in the cloud, we hear the term CaaS (Containers as a Service). CaaS could be regarded as a sub-category of IaaS, except we don't need to manage the OS itself; we are just managing the containers and the container runtime.

[email protected] | 1.855.512.6453 | Anglepoint.com
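To make the container manifest concrete, here is a minimal, hypothetical Dockerfile of the kind described above. The base-image tag, file paths, and site content are illustrative assumptions, not taken from any particular environment:

```dockerfile
# Build on the official NGINX parent image (illustrative tag)
FROM nginx:1.25

# Copy our site content into the image; this becomes a new image layer
COPY ./site /usr/share/nginx/html

# Document the port the NGINX process listens on
EXPOSE 80

# The single process this container isolates
CMD ["nginx", "-g", "daemon off;"]
```

Each instruction adds a layer to the image, and a child image (say, one adding PHP) would simply begin with a FROM line referencing this image – mirroring the parent/child relationship and the audit trail of layers described above.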
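Similarly, the pod concept can be sketched with a minimal, hypothetical Kubernetes Pod manifest. The pod name, container names, and image tags here are assumptions for illustration only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical pod name
spec:
  containers:
    - name: nginx          # one container in the pod...
      image: nginx:1.25
    - name: php-fpm        # ...grouped with a second container
      image: php:8-fpm
```

An orchestrator such as Kubernetes schedules pods like this onto the nodes of a cluster and applies the scaling and failover rules it has been given.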
Recommended publications
  • Oracle VM VirtualBox Container Domains for SPARC or x86
    Virtualization with Oracle VirtualBox and Oracle Solaris Containers (original German title: "Virtualisierung mit Oracle VirtualBox und Oracle Solaris Containern"). Detlef Drewanz, Principal Sales Consultant. SAFE HARBOR STATEMENT: The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle. In addition, the following is intended to provide information for Oracle and Sun as we continue to combine the operations worldwide. Each country will complete its integration in accordance with local laws and requirements. In the EU and other non-EU countries with similar requirements, the combinations of local Oracle and Sun entities as well as other relevant changes during the transition phase will be conducted in accordance with and subject to the information and consultation requirements of applicable local laws, EU Directives and their implementation in the individual member states. Sun customers and partners should continue to engage with their Sun contacts for assistance for Sun products and their Oracle contacts for Oracle products. So... is server virtualization just reducing the number of boxes? Physical systems; virtual machines; virtualization platform. Virtualization use: workloads and deployment platforms.
  • ISSN: 1804-0527 (Online) 1804-0519 (Print) Vol.8 (2), PP. 63-69 Introduction During the Latest Years, a Lot of Projects Have Be
    Perspectives of Innovations, Economics & Business, Volume 8, Issue 2, 2011. EVALUATION OF PERFORMANCE OF SOLARIS TRUSTED EXTENSIONS USING CONTAINERS TECHNOLOGY. Genti Daci, Faculty of Information Technology, Polytechnic University of Tirana, Albania. UDC: 004.45. Key words: Solaris Containers. Abstract: Server and system administrators have been concerned about the techniques on how to better utilize their computing resources. Today, many technologies have been developed for this purpose, which consist of running multiple applications and also multiple operating systems on the same hardware, like VMware, Linux-VServer, VirtualBox, Xen, etc. These systems try to solve the problem of resource allocation from two main aspects: running multiple operating system instances and virtualizing the operating system environment. Our study presents an evaluation of scalability and performance of an operating system virtualization technology known as Solaris Containers, with the main objective of measuring the influence of a security technology known as Solaris Trusted Extensions. We will study its advantages and disadvantages and also the overhead that it introduces to the scalability of the system. ISSN: 1804-0527 (online), 1804-0519 (print), Vol. 8 (2), pp. 63-69. Introduction: During the latest years, a lot of projects have been looking at virtualizing operating system environments, such as FreeBSD Jail, Linux-VServer, Virtuozzo etc. This virtualization technique is based on using only one underlying operating system kernel, which simplifies administration because there are no multiple operating system instances in a system. Using this paradigm, the user has the possibility to run multiple applications in isolation from each other. Operating systems such as Solaris/OpenSolaris perform as the main building blocks of computer systems; they provide the interface between user
  • The Server Virtualization Landscape, Circa 2007
    ghaff@illuminata.com. Copyright © 2007 Illuminata, Inc. Gordon R. Haff, Illuminata, Inc. Research Note, 27 July 2007. The Server Virtualization Bazaar, Circa 2007. Inspired by both industry hype and legitimate customer excitement, many companies seem to have taken to using the "virtualization" moniker more as the hip phrase of the moment than as something that's supposed to convey actual meaning. Think of it as "eCommerce" or "Internet-enabled" for the Noughts. The din is loud. It doesn't help matters that virtualization, in the broad sense of "remapping physical resources to more useful logical ones," spans a huge swath of technologies – including some that are so baked-in that most people don't even think of them as virtualization any longer. However, one particular group of approaches is capturing an outsized share of the limelight today. That would, of course, be what's commonly referred to as "server virtualization." Although server virtualization is in the minds of many inextricably tied to the name of one company – VMware – there are many companies in this space. Their offerings include not only products that let multiple virtual machines (VMs) coexist on a single physical server, but also related approaches such as operating system (OS) virtualization or containers. In the pages that follow, I offer a guide to today's server virtualization bazaar – which at first glance can perhaps seem just a dreadfully confusing jumble.
  • Containerisation Gareth Roy Gridpp 32, Pitlochry 1 Intermodal Containers
    Containerisation. Gareth Roy, GridPP 32, Pitlochry. Intermodal containers: developed by Malcolm P. McLean & Keith W. Tantlinger as a reaction to the slow loading times produced by "break bulk cargo." From "Apparatus for shipping freight" (US 2853968 A, Malcolm P. McLean, 1958): "In 1956, loose cargo cost $5.86 per ton to load. Using an ISO shipping container, the cost was reduced to only 16 cents per ton." Dimensions: length 19' 10.5" (6.058 m), width 8' 0" (2.438 m), height 8' 6" (2.591 m), empty weight 4,850 lb (2,200 kg), max weight 66,139 lb (30,400 kg). Mærsk Mc-Kinney Møller (18,270 TEU). Linux containers: a form of OS-level virtualisation in which the kernel hosts multiple separated user-land instances (virtual environments/engines). Low overheads, elastic, multi-tenant; storage can be copy-on-write or use UnionFS. Examples: chroot (1982), FreeBSD Jails (1988), Virtuozzo (2001), Solaris Containers (2005), OpenVZ (2005), AIX WPARs (2007), LXC (2008). VMs vs containers: a virtual machine stacks application, guest OS, and virtual hardware on a hypervisor/OS above physical hardware, while a Linux container stacks the application and a virtual environment directly on the OS. VM pros: OS independent; secure/isolated; flexible; live migration; mature ecosystem. VM cons: full system image; slow startup/shutdown/build; memory consumption; overhead. Container pros: lightweight/dense; fast instantiation; elastic resource use; low memory consumption; native performance. Container cons: restricted/Linux only; shared kernel; weaker security model; opaque to the system; young ecosystem. Containers in more detail: a running application sits in an instanced namespace (virtual environment) under a resource control group (container cgroup), on the kernel namespace layer (PID, MNT, IPC, NET, UTS, USER) of a Linux kernel > 2.6.23. Namespaces: a namespace wraps a global resource and presents an isolated instance to a running process.
  • Container Technologies
    Zagreb, NKOSL, FER. Container technologies. Marko Golec, Juraj Vijtiuk, Jakov Petrina. April 11, 2020. About us: embedded Linux development and integration; delivering solutions based on Linux, OpenWrt and Yocto; focused on software in the network edge and CPEs; continuous participation in open source projects; www.sartura.hr. Introduction to GNU/Linux: Linux = operating system kernel; GNU/Linux distribution = kernel + userspace (Ubuntu, Arch Linux, Gentoo, Debian, OpenWrt, Mint, …); userspace = set of libraries + system software. Linux kernel: operating systems have two spaces of operation – kernel space, a protected memory space with full access to the device's hardware, and userspace, in which all other applications run with limited access to hardware resources, accessing hardware via the kernel and invoking kernel services with system calls. User mode spans user applications (e.g. bash, LibreOffice, GIMP, Blender, Mozilla Firefox), system daemons (systemd, runit, logind, networkd, PulseAudio), windowing systems (X11, Wayland, SurfaceFlinger), graphics stacks (Mesa, AMD Catalyst), and other libraries (GTK+, Qt, EFL, SDL, SFML, FLTK, GNUstep), all built on a C standard library (glibc, musl, uClibc, bionic) exposing up to 2000 subroutines (open(), exec(), sbrk(), socket(), fopen(), calloc(), …) over about 380 system calls (stat, splice, dup, read, open, ioctl, write, mmap, close, exit, etc.). Kernel mode comprises the process scheduling, memory management, IPC, virtual file, and network subsystems, plus other components (ALSA, DRI, evdev, LVM, device mapper, Linux network scheduler, Netfilter) and Linux Security Modules (SELinux, TOMOYO, AppArmor, Smack), above the hardware (CPU, main memory, data storage devices, etc.). (Table 1: layers within Linux.) Virtualization concepts: two virtualization concepts – hardware virtualization (full/para virtualization), the emulation of complete hardware (virtual machines, VMs: VirtualBox, QEMU, etc.)
  • Performance Isolation of a Misbehaving Virtual Machine with Xen, Vmware and Solaris Containers
    Performance Isolation of a Misbehaving Virtual Machine with Xen, VMware and Solaris Containers. Todd Deshane, Demetrios Dimatos, Gary Hamilton, Madhujith Hapuarachchi, Wenjin Hu, Michael McCabe, Jeanna Neefe Matthews. Clarkson University. {deshantm, dimatosd, hamiltgr, hapuarmg, huwj, mccabemt, jnm}@clarkson.edu. Abstract: In recent years, there have been a number of papers comparing the performance of different virtualization environments for x86 such as Xen, VMware and UML. These comparisons have focused on quantifying the overhead of virtualization for one VM compared to a base OS. In addition, researchers have examined the performance degradation experienced when multiple VMs are running the same workload. This is an especially relevant metric when determining a system's suitability for supporting commercial hosting environments – the target environment for some virtualization systems. In such an environment, a provider may allow multiple customers to administer virtual machines on the same physical host. It is natural for these customers to want a certain guaranteed level of performance regardless of the actions taken by other VMs on the same physical host. In that light, another key aspect of the comparison between virtualization environments has received less attention: how well do different virtualization systems protect VMs from misbehavior or resource hogging on other VMs? In this paper, we present the results of running a variety of different misbehaving applications under three different virtualization environments: VMware, Xen, and Solaris Containers. These are each examples of a larger class of virtualization techniques, namely full virtualization, paravirtualization, and generic operating systems with additional isolation layers.
To test the isolation properties of these systems, we run six different stress tests - a fork bomb, a test that consumes a large amount of memory, a CPU intensive test, a test that runs 10 threads of IOzone and two tests that send and receive a large amount of network I/O.
  • Performance Evaluation of Containers for HPC Cristian Ruiz, Emmanuel Jeanvoine, Lucas Nussbaum
    Performance evaluation of containers for HPC. Cristian Ruiz, Emmanuel Jeanvoine, Lucas Nussbaum. To cite this version: Cristian Ruiz, Emmanuel Jeanvoine, Lucas Nussbaum. Performance evaluation of containers for HPC. VHPC – 10th Workshop on Virtualization in High-Performance Cloud Computing, Aug 2015, Vienna, Austria. pp. 12. HAL Id: hal-01195549, https://hal.inria.fr/hal-01195549, submitted on 8 Sep 2015. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Cristian Ruiz, Emmanuel Jeanvoine and Lucas Nussbaum: Inria, Villers-lès-Nancy, F-54600, France; Université de Lorraine, LORIA, F-54500, France; CNRS, LORIA – UMR 7503, F-54500, France. Abstract: Container-based virtualization technologies such as LXC or Docker have gained a lot of interest recently, especially in the HPC context where they could help to address a number of long-running issues. Even if they have proven to perform better than full-fledged, hypervisor-based virtualization solutions, there are still a lot of questions about the use of container solutions in the HPC context. This paper evaluates the performance of Linux-based container solutions that rely on cgroups and namespaces using the NAS parallel benchmarks, in various configurations.
  • Operating Systems Design 20. Virtualization Paul Krzyzanowski [email protected]
    Operating Systems Design. 20. Virtualization. Paul Krzyzanowski, [email protected]. © 2010 Paul Krzyzanowski, 11/30/2010. Virtualization: memory virtualization – a process feels like it has its own address space, created by the MMU and configured by the OS; storage virtualization – a logical view of disks "connected" to a machine, an external pool of storage; CPU/machine virtualization – each process feels like it has its own CPU, created by OS preemption and the scheduler. Storage virtualization dissociates knowledge of physical disks: software between the computer and the disks manages the view of storage. Examples: make four 500 GB disks appear as one 2 TB disk; make one 500 GB disk appear as two 200 GB disks and one 100 GB disk, with each of the 200 GB virtual disks available to different servers while the 100 GB disk can be shared by all; have all writes mirrored to a backup disk. Virtualization software translates read-block/write-block requests for logical devices into read-block/write-block requests for physical devices. The logical view of disks is separated from physical storage as an external pool: hosts connect through a fibre-channel switch to a virtualization appliance providing replication, snapshots, pooling, and partitioning. Virtual CPUs (sort of): each process feels like it has its own CPU but cannot execute privileged instructions (e.g., modify the MMU or the interval timer,
  • Oracle Solaris Operating System: Optimized for Sun X86 Systems in the Enterprise
    An Oracle White Paper, August 2011. Oracle Solaris Operating System: Optimized for Sun x86 Systems in the Enterprise. Contents: Executive Overview; Introduction; The Oracle Solaris Ecosystem; Intel Xeon Processor E7 Family; Oracle Integration; Intelligent Performance; Memory Placement Optimization (MPO); Intel Turbo Boost; Automated Energy Efficiency; Oracle Solaris Power Aware Dispatcher; PowerTOP; Power Budgets and Power Capping; Always Running APIC Timer; Reliability; Oracle Solaris FMA for Intel Xeon Processors; Security
  • Containers and Virtual Machines at Scale: a Comparative Study
    Containers and Virtual Machines at Scale: A Comparative Study. Prateek Sharma¹, Lucas Chaufournier¹, Prashant Shenoy¹, Y.C. Tay². {prateeks, lucasch, shenoy}@cs.umass.edu, [email protected]. University of Massachusetts Amherst, USA¹; National University of Singapore, Singapore². ABSTRACT: Virtualization is used in data center and cloud environments to decouple applications from the hardware they run on. Hardware virtualization and operating system level virtualization are two prominent technologies that enable this. Containers, which use OS virtualization, have recently surged in interest and deployment. In this paper, we study the differences between the two virtualization technologies. We compare containers and virtual machines in large data center environments along the dimensions of performance, manageability and software development. We evaluate the performance differences caused by the different virtualization technologies in data center environments where multiple applications are running on the same servers (multi-tenancy). The mapping of virtual to physical resources, as well as the amount of resources given to each application, can be varied dynamically to adjust to changing application workloads. Furthermore, virtualization enables multi-tenancy, which allows multiple instances of virtualized applications ("tenants") to share a physical server. Multi-tenancy allows data centers to consolidate and pack applications into a smaller set of servers and reduce operating costs. Virtualization also simplifies replication and scaling of applications. There are two types of server virtualization technologies that are common in data center environments – hardware-level virtualization and operating system level virtualization. Hardware-level virtualization involves running a hypervisor which virtualizes the server's resources across multiple virtual machines.
  • Fast Delivery of Virtual Machines and Containers : Understanding and Optimizing the Boot Operation Thuy Linh Nguyen
    Fast delivery of virtual machines and containers: understanding and optimizing the boot operation. Thuy Linh Nguyen. To cite this version: Thuy Linh Nguyen. Fast delivery of virtual machines and containers: understanding and optimizing the boot operation. Distributed, Parallel, and Cluster Computing [cs.DC]. École nationale supérieure Mines-Télécom Atlantique, 2019. English. NNT: 2019IMTA0147. HAL Id: tel-02418752, https://tel.archives-ouvertes.fr/tel-02418752, submitted on 19 Dec 2019. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Doctoral thesis of the École Nationale Supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire – IMT Atlantique, COMUE Université Bretagne Loire, Doctoral School No. 601 (Mathematics and Information and Communication Sciences and Technologies), speciality: computer science and applications. By Thuy Linh Nguyen. Thesis presented and defended in Nantes, 24 September 2019. Research unit: Inria Rennes Bretagne Atlantique. Thesis No.: 2019IMTA0147. Reviewers before the defence: Maria S. Perez, Professor, Universidad Politécnica de Madrid, Spain; Daniel Hagimont, Professor, INPT/ENSEEIHT, Toulouse, France. Jury composition: President: Mario Südholt, Professor, IMT Atlantique, France. Reviewer: Maria S.
  • Virtual Lab: Oracle VM Server for SPARC (Logical Domains)
    Virtual Lab: High Impact, Low Cost. Vladimír Marek, Principal Software Engineer, Systems Revenue Product Engineering (RPE). Program agenda: who we are; the problem; virtualization technologies overview; the solution; demo; technical details; Q/A. Who we are: Systems Revenue Product Engineering – Oracle Solaris Sustaining, responsible for fixing bugs in released (revenue) versions of Oracle Solaris (Solaris 11.1, Solaris 10, Solaris 9, Solaris 8), dealing with bugs reported by customers and found internally. Organized in technology teams (kernel, drivers, security, networking, file systems, utilities, naming, install, desktop, free and open source, etc.); owns technologies in maintenance mode (UFS, PCFS, ...); develops troubleshooting and debugging technologies (kernel debugger, crash dump analysis, ...). Most common tasks: reproduce the reported problem; find a root cause; design and implement a fix; test the fix; code review; integrate the fix; test the resulting patch; eventually provide interim diagnostic relief (a patch) – design, implement and test. For testing, we need hardware – that is the problem. Classical solution: yes, we do have a lot of hardware, but... large labs with thousands of machines (with automated installation supported) allow very complex setups and are definitely required for hardware-related issues, but they are rather slow to prepare – setup and installation take time – and not very suitable for quick tests and experiments; for a lot of tests, any hardware is sufficient. Engineers started to address the 'but...': there must be a better solution. We develop and support the Solaris operating system; we develop virtualization tools; we develop and produce microprocessors (SPARC); we develop and produce servers.