Virtual Machines & VMware, Part I
December 21, 2001, by Jay Munro


Ever want to try out a new operating system on your PC without trashing your existing one? Got a legacy application that won't run on your current OS? Are you a developer who needs to test your code on a number of platforms? Would you like to test distributed applications on a network without requiring a server farm? Find out what companies like Symantec and Merrill Lynch know: you can do all of these tasks with a Virtual Machine (VM) on a single PC.

A virtual machine is a software environment that encapsulates one or more operating systems and applications, which actually run inside or "under" the VM. The OS can't tell the difference between operating in a VM and in a "real" machine. Within a virtual machine, you can do almost anything you can do with a real PC, in complete safety. Furthermore, the type of virtual machine we'll soon discuss is encapsulated in a file that can be moved from one PC to another without worrying about hardware compatibility. Virtual machines are isolated entities, so the security of the host is not threatened by an errant application. A virtual machine lets you run one operating system under another on the same hardware, such as running a copy of Linux in a VM on your Windows 2000 PC. Virtual machine files can be stored on a server and used by more than one person, so a team of developers can test against a common configuration.

In this article we will look at where virtual machines came from, how they work, and how they differ from software emulators, and finally take a look at the new VMware Workstation 3.0 for Windows. VMware is an industrial-strength virtual machine company with three levels of VM products: VMware Workstation, VMware GSX Server, and VMware ESX Server.
We'll be concentrating on the Workstation for Windows product, but we'll also discuss a few of the differences between the products. The idea of a virtual machine is not new; its roots go back almost to the beginning of computing itself. The concept first took shape in the 1960s on mainframes, as a way to create less complex multi-user time-sharing environments. As described in Melinda Varian's canonical paper "VM and the VM Community: Past, Present, and Future," a time-sharing system was developed by a group of MIT programmers working with equipment donated by IBM. That system, the Compatible Time-Sharing System (CTSS), was initially developed in 1961 and evolved over the years into the standard example of how to do time sharing. CTSS was designed much like current multitasking systems, doling out processing time in scheduled slices. The system provided a supervisory program that controlled resources and scheduled time slices for foreground and background tasks. The key to its operation was the supervisor program's control of trap interrupts: by trapping interrupts, the control program was able to isolate users and processes from each other. As such systems developed, changes were made in the hardware to support relocation of memory, a key facility for virtual memory. Without the ability to relocate (page) memory, entire programs would need to be swapped in and out of the active memory address space; with virtual memory, a big performance boost could be realized. During the 1960s the concept of a "virtualizable machine" was developed, and virtual machine technology became a very popular subject of study and a key focus of user organizations and conferences in the late 1960s and 1970s. For some deeper background: in late 1964, a project called CP-40 (Control Program for the IBM System/360 Model 40 mainframe), run at IBM's Cambridge Scientific Center near MIT, really turned the corner on virtual machines.
The idea was to create an operating system that would let each mainframe user have his or her own IBM System/360 virtual machine (originally called a pseudo machine). The subsequent release of CMS (Cambridge Monitor System), a single-user virtual environment running atop CP-40, was the beginning of a long line of IBM VM operating system products. Later, CMS would be renamed the Conversational Monitor System, and it worked in conjunction with CP on IBM System/370 systems. Though IBM was reluctant to invest in VM technology in the '60s and '70s, the technology became fairly successful in the '80s, and IBM still sells VM systems today. According to Mendel Rosenblum, PhD, VMware's chief scientist and co-founder, standard college textbooks on computer architecture contain no discussion of the virtualization of processors. Virtual machines were also not on Intel's mind when it designed the 64-bit IA-64 chip architecture, as those processors (like Itanium and the upcoming McKinley) are not completely virtualizable. Rosenblum offers a tongue-in-cheek theory that perhaps the engineers had never been exposed to virtual machine technology in college, so they never considered implementing such a capability. During the 1990s, work was being done at Stanford University on designing and building scalable multiprocessor machines, ones that could scale to 1,000 or more processors. Rosenblum and a group of graduate students were tackling the problem from the operating system point of view, and hit on the idea of building a virtual machine monitor (VMM) around an existing operating system rather than designing one from scratch. In talking with various software and operating system development companies, Rosenblum received favorable feedback on the idea, but he had trouble securing development assistance from those companies.
Since Rosenblum's team was going to use Microsoft operating systems, having Microsoft work on the project in conjunction with its operating systems seemed like the best approach. However, rather than asking Microsoft to help integrate a virtual machine directly into the OS, Rosenblum came up with the idea of running a VMM under the Microsoft operating systems, without requiring core OS modifications. As an extension of that concept, Rosenblum and his graduate students started experimenting with a virtual machine monitor that could also run other commodity operating systems within a VM, for use on single- or multiprocessor systems. Since many of the graduate students had friends in entrepreneurial enterprises in nearby Silicon Valley, they started kicking around the idea of starting a company. The initial research and development at Stanford was targeted at building VMMs on a server, but the logistical complexity of partnering with server companies soon convinced the team to start smaller, so they scaled back to a workstation platform to work out the design. VMware Workstation was born, and a company along with it. Before we get too far, we should take a second to define some terms. A virtual machine (VM) is defined by Popek and Goldberg (in their paper "Formal Requirements for Virtualizable Third Generation Architectures," Communications of the ACM, Vol. 17, July 1974) as an "efficient, isolated duplicate of a real machine." A real machine provides a number of systems to an operating system and its applications. Starting at the core, the CPU and motherboard chipset provide a set of instructions and other foundational elements for processing data, memory allocation, and I/O handling. On top of that are hardware devices and resources such as memory, video, audio, disk drives, CD-ROMs, and ports (USB, parallel, serial).
In a "real machine", the BIOS provides a low level interface that an operating system can use to access various motherboard and I/O resources. With a real machine, when an operating system accesses a hardware device, it typically communicates through a low-level device driver that interfaces directly to physical hardware device memory or I/O ports. The effective opposite of a real machine is an emulator. An emulator will reproduce everything from the CPU instructions to the I/O devices in software. An emulator can provide cross-CPU operation, such as running Windows software on a Mac. Unfortunately, an emulator takes a performance hit since it must translate every instruction, function call, or data transfer. In addition, an emulator is quite complex, as it needs to emulate most if not all of the CPU instructions. The functionality and abstraction level of a Virtual Machine lies between a real machine and an emulator. A virtual machine is an environment created by a Virtual Machine Monitor (VMM). The VMM can create one or more VM’s on a single machine. While an emulator provides a complete layer between the operating system or application and the hardware, a VMM manages one or more VMs, with each VM providing facilities for an application or "guest OS" to believe it’s running in a normal environment with access to physical hardware. Instead, when applications or guest OSs execute low-level instructions that inspect or modify hardware state, they appear to the app or OS to be directly executing on the hardware, but are instead virtualized by the VM and passed to the VMM. For traps or interrupts occurring at the application level, they can pass directly to the VMM, which in turn interacts with the hardware. 
In the case of VMware, we'll see that its VMM isn't in direct control of the hardware; it actually runs atop a primary or "host" OS for low-level hardware I/O control, while also planting a low-level driver in the host OS to handle various VM management chores and certain hardware interactions.
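That hosted layering can be pictured as three cooperating pieces. The sketch below is purely illustrative (the class and method names are invented, not VMware's actual interfaces): guest I/O is not performed by the VMM itself but forwarded through a driver installed in the host OS, which performs the real hardware access.

```python
# Minimal sketch of the hosted-VMM layering: guest I/O flows
# VMM -> driver planted in the host OS -> host OS -> hardware.
# All names are hypothetical, for illustration only.

class HostOS:
    """Stands in for the host operating system, which owns the real hardware."""
    def device_io(self, request):
        return f"host handled {request}"   # the host's own drivers do the work

class HostDriver:
    """Stands in for the low-level driver the VMM installs in the host OS."""
    def __init__(self, host):
        self.host = host
    def forward(self, request):
        return self.host.device_io(request)

class HostedVMM:
    """The monitor: guests think they touched hardware; requests are forwarded."""
    def __init__(self, driver):
        self.driver = driver
    def guest_io(self, request):
        return self.driver.forward(request)

monitor = HostedVMM(HostDriver(HostOS()))
print(monitor.guest_io("disk read"))
```

The design trade-off this models is the one the article describes: the hosted VMM gives up direct hardware control in exchange for riding on the host OS's existing device drivers.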