NESL, NDPC (A Home-Grown Language), and Racket

Total Pages: 16

File Type: PDF, Size: 1020 KB

Electrical Engineering and Computer Science Department
Technical Report NU-EECS-16-12
September 8, 2016

Hybrid Runtime Systems

Kyle C. Hale

Abstract

As parallelism continues to increase in ubiquity—from mobile devices and GPUs to datacenters and supercomputers—parallel runtime systems occupy an increasingly important role in the system software stack. The needs of parallel runtimes and the increasingly sophisticated languages and compilers they support do not line up with the services provided by general-purpose OSes. Furthermore, the semantics available to the runtime are lost at the system-call boundary in such OSes. Finally, because a runtime executes at user level in such an environment, it cannot leverage hardware features that require kernel-mode privileges—a large portion of the functionality of the machine is lost to it. These limitations warp the design, implementation, functionality, and performance of parallel runtimes.

I make the case for eliminating these compromises by transforming parallel runtimes into hybrid runtimes (HRTs), runtimes that run as kernels, and that enjoy full hardware access and control over abstractions to the machine. The primary claim of this dissertation is that the hybrid runtime model can provide significant benefits to parallel runtimes and the applications that run on top of them. I demonstrate that it is feasible to create instantiations of the hybrid runtime model by doing so for four different parallel runtimes: Legion, NESL, NDPC (a home-grown language), and Racket. These HRTs are enabled by a kernel framework called Nautilus, which is a primary software contribution of this dissertation. A runtime ported to Nautilus that acts as an HRT enjoys significant performance gains relative to its ROS counterpart. Nautilus enables these gains by providing fast, light-weight mechanisms for runtimes. For example, with Legion running a mini-app of importance to the HPC community, we saw speedups of up to forty percent. We saw further improvements by leveraging hardware (interrupt control) that is not available to a user-space runtime.

In order to bridge Nautilus with a "regular OS" (ROS) environment, I introduce a concept we developed called the hybrid virtual machine (HVM). Such bridged operation allows an HRT to leverage existing functionality within a ROS with low overheads. This simplifies the construction of HRTs. In addition to Nautilus and the HVM, I introduce an event system called Nemo, which allows runtimes to leverage events both with a familiar interface and with mechanisms that are much closer to the hardware. Nemo enables event notification latencies that outperform Linux user-space by several orders of magnitude. Finally, I introduce Multiverse, a system that implements a technique called automatic hybridization. This technique allows runtime developers to more quickly adopt the HRT model by starting with a working HRT system and incrementally moving functionality from a ROS to the HRT.

This work was made possible by support from the United States National Science Foundation through grants CCF-1533560 and CNS-0709168, from the United States Department of Energy through grant number DE-SC0005343, and from Sandia National Laboratories through the Hobbes Project, which is funded by the 2013 Exascale Operating and Runtime Systems Program under the Office of Advanced Scientific Computing Research in the DoE's Office of Science.
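The abstract's point about interrupt control can be made concrete with a small sketch. The following is a hypothetical illustration, not the actual Nautilus interface: it assumes an x86 HRT in which runtime code executes in kernel mode (ring 0), so it may use privileged instructions such as cli/sti that a user-level runtime on a general-purpose OS cannot execute at all. The function names are invented for the example.

```c
/*
 * Hypothetical sketch (not the actual Nautilus API): because an HRT runs its
 * runtime code in kernel mode, it can directly mask and unmask interrupt
 * delivery around a latency-critical region using privileged x86 instructions.
 * A user-level runtime in a ROS has no access to these instructions and must
 * instead fall back on OS-provided abstractions such as signal masking.
 */
static inline void hrt_irq_disable(void)
{
    __asm__ volatile("cli" ::: "memory");   /* privileged: requires kernel mode */
}

static inline void hrt_irq_enable(void)
{
    __asm__ volatile("sti" ::: "memory");
}

/* Run a scheduler or event-handling critical section with interrupts masked. */
static void hrt_run_uninterrupted(void (*fn)(void *), void *arg)
{
    hrt_irq_disable();
    fn(arg);
    hrt_irq_enable();
}
```

Direct control of this kind is one example of the hardware functionality that becomes available only once the runtime runs as a kernel, which is the source of the further gains the abstract reports beyond the base speedups.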
Keywords: Hybrid Runtimes, Parallelism, Operating Systems, High-Performance Computing

NORTHWESTERN UNIVERSITY

Hybrid Runtime Systems

A DISSERTATION SUBMITTED TO THE GRADUATE SCHOOL IN PARTIAL FULFILLMENT OF THE REQUIREMENTS for the degree DOCTOR OF PHILOSOPHY

Field of Computer Science

By Kyle C. Hale

EVANSTON, ILLINOIS
August 2016

© Copyright by Kyle C. Hale 2016. All Rights Reserved.

Thesis Committee:
Peter A. Dinda, Northwestern University (Committee Chair)
Nikos Hardavellas, Northwestern University (Committee Member)
Russ Joseph, Northwestern University (Committee Member)
Fabián E. Bustamante, Northwestern University (Committee Member)
Arthur B. Maccabe, Oak Ridge National Laboratory (External Committee Member)
Acknowledgments

When I first embarked on the road to research, I considered myself an ambitious individual. Little did I know that personal ambitions and aspirations comprise a relatively minor fraction of the conditions required to pursue an academic career. I have found that the incredible amount of support so generously offered by my mentors, my friends, and my family has been far more important than anything else. I cannot hope to thank everyone who has shaped me as an individual and led me to where I am, but I will attempt here to name a few that stand out.

I first thank my advisor, Professor Peter Dinda, for his encouragement and advice during my studies at Northwestern. I find it inadequate to just call him my advisor, as it understates the role he has played in guiding my development, both intellectually and personally. Peter embodies all the qualities of a true mentor, from his seemingly inexhaustible supply of patience to his extraordinary ability to cultivate others' strengths. Before meeting him, I had never encountered a teacher so genuinely dedicated to advocating for his students and their success. I can only hope to uphold the same standards of mentorship.

I would also like to thank my other dissertation committee members. Nikos Hardavellas has always had an open door for brainstorming sessions and advice. Russ Joseph similarly has always been willing to take the time to help, always with an uplifting note of humor and a smile. Fabián Bustamante has throughout been a source of candid conversation and genuine advice. Barney Maccabe has been an indispensable colleague and has always made me feel welcome in the HPC community.

Two very important people who gave me the benefit of the doubt when I started doing research deserve special thanks. Steve Keckler and Boris Grot not only advised me as an undergraduate, they also helped me discover confidence in myself that I never thought I had. I also owe thanks to Calvin Lin and the Turing Scholars Program at The University of Texas for planting the seeds for my interest in research in computer systems. To FranCee Brown McClure and all of the other Ronald E. McNair scholars, I could not have made it to this point without you. I must also thank Kevin Pedretti and Patrick Bridges for helping make my stint in New Mexico an enjoyable one, and for making me feel at home in the Southwest. To Jack Lange I am grateful for being able to enjoy the solidarity of a fellow Texan who also braved the cold Chicago winters to earn a doctorate. Several members of the Prescience Lab, both former and present, deserve thanks for their assistance, including Conor Hetland, Lei Xia, and Chang Bae.

While the friends that have supported me through the years are too numerous to name, some stand out in helping me survive graduate school. I have Leonidas Spinoulas, Luis Pestaña, Armin Kappeler, and Jaime Espinosa to thank for broadening my horizons and bringing me out of my shell. I am ever grateful to John Rula for being a fantastic roommate, a loyal friend, and a willing participant in what others would consider outlandish intellectual discussions.