OSv—Optimizing the Operating System for Virtual Machines


Avi Kivity, Dor Laor, Glauber Costa, Pekka Enberg, Nadav Har'El, Don Marti, and Vlad Zolotarov, Cloudius Systems
{avi,dor,glommer,penberg,nyh,dmarti,vladz}@cloudius-systems.com
https://www.usenix.org/conference/atc14/technical-sessions/presentation/kivity

This paper is included in the Proceedings of USENIX ATC '14: 2014 USENIX Annual Technical Conference, June 19–20, 2014, Philadelphia, PA (ISBN 978-1-931971-10-2). Open access to the proceedings is sponsored by USENIX.

Abstract

Virtual machines in the cloud typically run existing general-purpose operating systems such as Linux. We notice that the cloud's hypervisor already provides some features, such as isolation and hardware abstraction, which are duplicated by traditional operating systems, and that this duplication comes at a cost.

We present the design and implementation of OSv, a new guest operating system designed specifically for running a single application on a virtual machine in the cloud. It addresses the duplication issues by using a low-overhead library-OS-like design. It runs existing applications written for Linux, as well as new applications written for OSv. We demonstrate that OSv is able to efficiently run a variety of existing applications. We demonstrate its sub-second boot time, small OS image, and how it makes more memory available to the application. For unmodified network-intensive applications, we demonstrate up to a 25% increase in throughput and a 47% decrease in latency. By using non-POSIX network APIs, we can further improve performance and demonstrate a 290% increase in Memcached throughput.

1 Introduction

Cloud computing (Infrastructure-as-a-Service, or IaaS) was born out of the realization that virtualization makes it easy and safe for different organizations to share one pool of physical machines. At any time, each organization can rent only as many virtual machines as it currently needs to run its application.

Today, virtual machines on the cloud typically run the same traditional operating systems that were used on physical machines, e.g., Linux, Windows, and *BSD. But as the IaaS cloud becomes ubiquitous, this choice is starting to make less sense: The features that made these operating systems desirable on physical machines, such as familiar single-machine administration interfaces and support for a large selection of hardware, are losing their relevance. At the same time, different features are becoming important: The VM's operating system needs to be fast, small, and easy to administer at large scale.

Moreover, fundamental features of traditional operating systems are becoming overhead, as they are now duplicated by other layers of the cloud stack (illustrated in Figure 1).

[Figure 1: Software layers in a typical cloud VM.]

For example, an important role of traditional operating systems is to isolate different processes from one another, and all of them from the kernel. This isolation comes at a cost, in performance of system calls and context switches, and in complexity of the OS. This was necessary when different users and applications ran on the same OS, but on the cloud, the hypervisor provides isolation between different VMs, so mutually-untrusting applications do not need to run on the same VM. Indeed, the scale-out nature of cloud applications has already resulted in a trend of focused single-application VMs.

A second example of duplication is hardware abstraction: An OS normally provides an abstraction layer through which the application accesses the hardware. But on the cloud, this "hardware" is itself a virtualized abstraction created by the hypervisor. Again, this duplication comes at a performance cost.

This paper explores the question of what an operating system would look like if we designed it today with the sole purpose of running on virtual machines on the cloud, and not on physical machines.

We present OSv, a new OS we designed specifically for cloud VMs. The main goals of OSv are as follows:

• Run existing cloud applications (Linux executables).
• Run these applications faster than Linux does.
• Make the image small enough, and the boot quick enough, that starting a new VM becomes a viable alternative to reconfiguring a running one.
• Explore new APIs for new applications written for OSv that provide even better performance.
• Explore using such new APIs in common runtime environments, such as the Java Virtual Machine (JVM). This will boost the performance of unmodified Java applications running on OSv.
• Be a platform for continued research on VM operating systems. OSv is actively developed as open source, it is written in a modern language (C++11), its codebase is relatively small, and our community encourages experimentation and innovation.

OSv supports different hypervisors and processors, with only a minimal amount of architecture-specific code. For 64-bit x86 processors, it currently runs on the KVM, Xen, VMware, and VirtualBox hypervisors, and also on the Amazon EC2 and Google GCE clouds (which use a variant of Xen and KVM, respectively). Preliminary support for 64-bit ARM processors is also available.

In Section 2, we present the design and implementation of OSv. We will show that OSv runs only on a hypervisor and is well-tuned for this (e.g., by avoiding spinlocks). OSv runs a single application, with the kernel and multiple threads all sharing a single address space. This makes system calls as efficient as function calls, and context switches quicker (a sketch of this cost gap follows below). OSv supports SMP VMs, and has a redesigned network stack (network channels) to lower socket API overheads. OSv includes other facilities one expects in an operating system, such as standard libraries, memory management, and a thread scheduler, and we will briefly survey those. OSv's scheduler incorporates several new ideas, including lock-free algorithms and floating-point-based fair accounting of run-time.
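As a rough illustration of the cost gap referenced above, here is an editor's sketch, not part of the paper: a standalone C++ program, runnable on Linux, that compares a trapping system call against a plain function call. This is the overhead OSv avoids by resolving "system calls" to ordinary functions.

    #include <chrono>
    #include <cstdio>
    #include <sys/syscall.h>
    #include <unistd.h>

    // Stand-in for an OSv-style "system call": just an ordinary function.
    __attribute__((noinline)) long plain_call() { return 42; }

    int main() {
        const int N = 1000000;
        volatile long sink = 0;

        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i)
            sink = syscall(SYS_getpid);   // traps into the Linux kernel
        auto t1 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i)
            sink = plain_call();          // stays in user mode
        auto t2 = std::chrono::steady_clock::now();

        std::printf("syscall:       %.1f ns/op\n",
                    std::chrono::duration<double, std::nano>(t1 - t0).count() / N);
        std::printf("function call: %.1f ns/op\n",
                    std::chrono::duration<double, std::nano>(t2 - t1).count() / N);
        (void)sink;
    }

On typical hardware the trapping call costs roughly tens to hundreds of nanoseconds more per operation; OSv eliminates that gap because calls into the kernel are ordinary function calls.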
In Section 3, we begin to explore what kind of new APIs a single-application OS like OSv might have beyond the traditional POSIX APIs to further improve performance. We suggest two techniques to improve JVM memory utilization and garbage-collection performance, which boost the performance of all JVM languages (Java, Scala, JRuby, etc.) on OSv. We then demonstrate that a zero-copy, lock-free API for packet processing can result in a 4x increase of Memcached throughput.

In Section 4, we evaluate our implementation and compare OSv to Linux on several micro- and macro-benchmarks. We show minor speedups over Linux in computation- and memory-intensive workloads such as the SPECjvm2008 benchmark, and up to a 25% increase in throughput and a 47% reduction in latency in network-dominated workloads such as Netperf and Memcached.

2 Design and Implementation of OSv

OSv follows the library OS design, an OS construct pioneered by exokernels in the 1990s [5]. In OSv's case, the hypervisor takes on the role of the exokernel, and VMs the role of the applications: Each VM is a single application with its own copy of the library OS (OSv). Library OS design attempts to address performance and functionality limitations in applications that are caused by traditional OS abstractions. It moves resource management to the application level, exports hardware directly to the application via safe APIs, and reduces abstraction and protection layers.

OSv runs a single application in the VM. If several mutually-untrusting applications are to be run, they can be run in separate VMs. Our assumption of a single application per VM simplifies OSv, but more importantly, eliminates the redundant and costly isolation inside a guest, leaving the hypervisor to do isolation. Consequently, OSv uses a single address space — all threads and the kernel use the same page tables, reducing the cost of context switches between application threads or between an application thread and the kernel.

The OSv kernel includes an ELF dynamic linker which runs the desired application. This linker accepts standard ELF dynamically-linked code compiled for Linux. When this code calls functions from the Linux ABI (i.e., functions provided on Linux by the glibc library), these calls are resolved by the dynamic linker to functions implemented by the OSv kernel. Even functions which are considered "system calls" on Linux, e.g., read(), are in OSv ordinary function calls and do not incur special call overheads, nor do they incur the cost of user-to-kernel parameter copying, which is unnecessary in our single-application OS.
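The idea behind this resolution step can be sketched in a few lines. The following is an editor's illustration only; the names osv_read and kernel_symbols are hypothetical and do not reflect OSv's actual internals. An undefined glibc symbol in the loaded ELF binds directly to a kernel function pointer, so invoking it is a plain call with no trap:

    #include <cstddef>
    #include <string>
    #include <unordered_map>

    // Hypothetical kernel-side implementation of read(); in OSv the kernel
    // shares the application's address space, so it is directly callable.
    static long osv_read(int fd, void* buf, std::size_t count) {
        (void)fd; (void)buf; (void)count;
        return 0;  // a real implementation would dispatch into the VFS
    }

    // Toy stand-in for the linker's table of Linux ABI symbols exported
    // by the kernel.
    static const std::unordered_map<std::string, void*> kernel_symbols = {
        {"read", reinterpret_cast<void*>(&osv_read)},
        // ... hundreds of other glibc-provided functions ...
    };

    // Sketch of relocation: an undefined symbol in the application's ELF
    // resolves to a kernel function pointer; calling it is an ordinary
    // call, with no mode switch and no user-to-kernel copying.
    static void* resolve_symbol(const std::string& name) {
        auto it = kernel_symbols.find(name);
        return it == kernel_symbols.end() ? nullptr : it->second;
    }

    int main() {
        using read_fn = long (*)(int, void*, std::size_t);
        auto app_read = reinterpret_cast<read_fn>(resolve_symbol("read"));
        char buf[16];
        return app_read ? static_cast<int>(app_read(0, buf, sizeof buf)) : 1;
    }

Because the kernel and the application share a single address space, the resolved pointer is directly callable; no syscall instruction is involved.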
Aiming at compatibility with a wide range of existing applications, OSv emulates a large part of the Linux programming interface. Some functions, like fork() and exec(), are not provided, since they have no meaning in the one-application model employed by OSv.

The core of OSv is new code, written in C++11. This includes OSv's loader and dynamic linker, memory management, thread scheduler and synchronization mechanisms such as mutex and RCU, virtual-hardware drivers, and more. We will discuss some of these mechanisms in more detail below.

Operating systems designed for physical machines usually devote much of their code to supporting diverse hardware.
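Relating back to the compatibility discussion above, an editor's sketch of a stub for an unsupported call follows; the error behavior is assumed, since the paper does not specify how OSv reports unsupported calls. Under a one-application model, fork() can only fail:

    #include <cerrno>
    #include <cstdio>
    #include <sys/types.h>

    // A stub for fork(): in a single-application OS there is no second
    // process to create. (Assumed error behavior.)
    extern "C" pid_t fork(void) {
        errno = ENOSYS;  // "function not implemented"
        return -1;
    }

    int main() {
        if (fork() < 0)
            std::perror("fork");  // prints: fork: Function not implemented
    }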