INSTITUT FÜR INFORMATIK: ARM Virtualization Using VMware ESXi
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
UNICORE OPTIMIZATION William Jalby
UNICORE OPTIMIZATION. William Jalby, LRC ITA@CA (CEA DAM / University of Versailles St-Quentin-en-Yvelines), France.

Outline: the stage; key unicore performance limitations (excluding caches); multimedia extensions; compiler optimizations.

Abstraction layers in modern systems, from top to bottom: application; algorithm/libraries; programming language; compilers/interpreters; operating system/virtual machines; instruction set architecture (ISA); microarchitecture; gates/register-transfer level (RTL); circuits; devices; physics. The ISA was the original domain of the computer architect (the '50s to '80s); recent computer architecture (the '90s) reaches down into the microarchitecture. Key issue: to understand the relationship and interaction between architecture/microarchitecture and applications/algorithms, the intermediate layers have to be taken into account, without forgetting the lowest layers. Key technology: performance measurement and analysis.

Performance measurement and analysis is an overlooked issue. The hardware view gives a mechanism description and a few portions of code where it works well (a positive view); the compiler view gives an aggregate performance number (SPEC) with little correlation to the hardware. The result is a lack of guidelines for writing efficient programs.

Uniprocessor performance (from Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th edition, October 2006): VAX, 25%/year from 1978 to 1986; RISC + x86, 52%/year from 1986 to 2002; RISC + x86, ??%/year from 2002 to present. Trends in unicore performance (source: Intel; ref: Mikko Lipasti, University of Wisconsin). Modern unicore stage: key performance ...
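A minimal sketch of the cycle-granularity measurement this implies on x86, assuming a GCC or Clang toolchain (__rdtsc comes from x86intrin.h; the harmonic-sum loop is an arbitrary stand-in workload, not from the slides):

#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc(): read the CPU's timestamp counter */

int main(void)
{
    volatile double acc = 0.0;   /* volatile keeps the loop from being optimized away */
    unsigned long long start, end;

    start = __rdtsc();
    for (int i = 1; i <= 1000000; i++)
        acc += 1.0 / i;          /* stand-in workload to be measured */
    end = __rdtsc();

    printf("acc = %f, elapsed = %llu cycles\n", acc, end - start);
    return 0;
}

Per-region cycle counts of this kind are what connect a hardware mechanism description to real application behavior, which is exactly the correlation the slides find missing in aggregate SPEC numbers.
-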
A Lightweight Virtualization Layer with Hardware-Assistance for Embedded Systems
PONTIFICAL CATHOLIC UNIVERSITY OF RIO GRANDE DO SUL, FACULTY OF INFORMATICS, COMPUTER SCIENCE GRADUATE PROGRAM. A Lightweight Virtualization Layer with Hardware-Assistance for Embedded Systems. Carlos Roberto Moratelli. Dissertation submitted to the Pontifical Catholic University of Rio Grande do Sul in partial fulfillment of the requirements for the degree of Ph.D. in Computer Science. Advisor: Prof. Fabiano Hessel. Porto Alegre, 2016.

To my family and friends. "I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones." (Linus Torvalds)

ACKNOWLEDGMENTS: I would like to express my sincere gratitude to those who helped me throughout all my Ph.D. years and made this dissertation possible. First of all, I would like to thank my advisor, Prof. Fabiano Passuelo Hessel, who gave me the opportunity to undertake a Ph.D. and provided me with invaluable guidance and support in my Ph.D. and in my academic life in general. Thank you to all the Ph.D. committee members, Prof. Carlos Eduardo Pereira (dissertation proposal), Prof. Rodolfo Jardim de Azevedo, Prof. Rômulo Silva de Oliveira and Prof. Tiago Ferreto, for the time invested and for the valuable feedback provided. Thank you to Dr. Luca Carloni and the other SLD team members at Columbia University in the City of New York for receiving me and giving me the opportunity to work with them during my 'sandwich' research internship. I would like to thank my wife, Ana Claudia, for having been at my side throughout my entire doctoral period; her support and understanding were and continue to be very important to me.
-
Computer Architecture Techniques for Power-Efficiency
COMPUTER ARCHITECTURE TECHNIQUES FOR POWER-EFFICIENCY. Synthesis Lectures on Computer Architecture. Editor: Mark D. Hill, University of Wisconsin, Madison. Synthesis Lectures on Computer Architecture publishes 50- to 150-page publications on topics pertaining to the science and art of designing, analyzing, selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals. In the series: Computer Architecture Techniques for Power-Efficiency, Stefanos Kaxiras and Margaret Martonosi, 2008; Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency, Kunle Olukotun, Lance Hammond, James Laudon, 2007; Transactional Memory, James R. Larus, Ravi Rajwar, 2007; Quantum Computing for Computer Architects, Tzvetan S. Metodi, Frederic T. Chong, 2006.

Copyright © 2008 by Morgan & Claypool. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopy, recording, or any other), except for brief quotations in printed reviews, without the prior permission of the publisher. Computer Architecture Techniques for Power-Efficiency, Stefanos Kaxiras and Margaret Martonosi, www.morganclaypool.com. ISBN 9781598292084 (paper), ISBN 9781598292091 (ebook). DOI: 10.2200/S00119ED1V01Y200805CAC004. A publication in the Morgan & Claypool Publishers ...
-
Installation Guide for UNICORE Server Components
Installation Guide for UNICORE Server Components. UNICORE Team, July 2015, UNICORE version 7.3.0. Contents:
1 Introduction (1.1 Purpose and Target Audience of this Document; 1.2 Overview of the UNICORE Servers and some Terminology; 1.3 Overview of this Document)
2 Installation of Core Services for a Single Site (2.1 Basic Scenarios; 2.2 Preparation; 2.3 Installation; 2.4 Security Settings; 2.5 Installation of the Perl TSI and TSI-related Configuration of the UNICORE/X Server; 2.6 The Connections Between the UNICORE Components)
3 Operation of a UNICORE Installation (3.1 Starting; 3.2 Stopping; 3.3 Monitoring; 3.4 User Management; 3.5 Testing your Installation)
4 Integration of Another Target System (4.1 Configuration of the UNICORE/X Service; 4.2 Configuration of Target System Interface; 4.3 Addition of Users to the XUUDB; 4.4 Additions to the Gateway)
5 Multi-Site Installation Options (5.1 Multiple Registries ...)
-
Intel's High-Performance Computing Technologies
Intel's High-Performance Computing Technologies. 11th ECMWF Workshop on the Use of High Performance Computing in Meteorology, Reading, UK, 26-Oct-2004. Dr. Herbert Cornelius, Advanced Computing Center, Intel EMEA.

HPC continues to change ... Some HPC history: across the 1960s to the 2000s, HPC systems shifted from proprietary components to COTS (commercial off-the-shelf, industry standard) parts:

                1970s        1980s        1990s        2000s
  Processor     proprietary  proprietary  COTS         COTS
  Memory        proprietary  proprietary  COTS         COTS
  Motherboard   proprietary  proprietary  proprietary  COTS
  Interconnect  proprietary  proprietary  proprietary  COTS
  OS, SW tools  proprietary  proprietary  proprietary  mixed

High-performance computing with IA (sources: http://www.top500.org/lists/2004/06/2/ and http://www.top500.org/lists/2004/06/5/): a 4096 (1024x4) Intel® Itanium® 2 processor based system with 22.9 TFLOPS peak performance, and a 2500 (1250x2) Intel® Xeon™ processor based system with 15.3 TFLOPS peak performance; at PNNL, a 1936-processor Intel Itanium 2 cluster at 11.6 / 8.6 TFLOPS Rpeak/Rmax, and at RIKEN, a 2048-processor Intel Xeon cluster at 12.5 / 8.7 TFLOPS Rpeak/Rmax. *Other brands and names are the property of their respective owners.
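For reading the Rpeak/Rmax pairs above: Rpeak is the theoretical peak throughput and Rmax the measured LINPACK throughput, so their ratio gives each cluster's achieved efficiency:

  PNNL Itanium 2 cluster: 8.6 / 11.6 ≈ 0.74, about 74% of peak
  RIKEN Xeon cluster: 8.7 / 12.5 ≈ 0.70, about 70% of peak
-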
Full Virtualization on Low-End Hardware: a Case Study
Full Virtualization on Low-End Hardware: a Case Study. Adriano Carvalho∗, Vitor Silva∗, Francisco Afonso†, Paulo Cardoso∗, Jorge Cabral∗, Mongkol Ekpanyapong‡, Sergio Montenegro§ and Adriano Tavares∗. ∗Embedded Systems Research Group, Universidade do Minho, 4800-058 Guimarães, Portugal; †Instituto Politécnico de Coimbra (ESTGOH), 3400-124 Oliveira do Hospital, Portugal; ‡Asian Institute of Technology, Pathumthani 12120, Thailand; §Universität Würzburg, 97074 Würzburg, Germany.

Abstract: Most hypervisors today rely either on (1) full virtualization on high-end hardware (i.e., hardware with virtualization extensions), (2) paravirtualization, or (3) both. These, however, do not fulfill embedded systems' requirements, or require legacy software to be modified to a hypervisor-specific, often proprietary, interface. Full virtualization on low-end hardware (i.e., hardware without virtualization extensions), on the other hand, has none of those disadvantages. However, it is often claimed that it is not feasible due to an unacceptably high virtualization overhead. We were, nevertheless, unable to find real-world quantitative results supporting those claims. In this paper, we present performance and footprint measurements from a case study on low-end hardware full virtualization for embedded systems. Full virtualization on low-end hardware (i.e., hardware without dedicated support for virtualization) must be accomplished using mechanisms which were not originally designed for it. Therefore, full virtualization on low-end hardware is not always possible, because the hardware may not fulfill the necessary requirements (i.e., "the set of sensitive instructions for that computer is a subset of the set of privileged instructions" [1]).
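The bracketed requirement is the classic Popek and Goldberg condition, and it is what makes trap-and-emulate full virtualization work: every sensitive instruction executed by the deprivileged guest must trap into the hypervisor, which then emulates it against virtual CPU state. A minimal sketch of such a handler, with hypothetical types and opcodes (not code from the paper):

/* Trap-and-emulate dispatch sketch. Real hypervisors decode the
 * faulting instruction from the guest's program counter. */
#include <stdint.h>

typedef struct {
    uint32_t regs[16];   /* guest general-purpose registers */
    uint32_t pc;         /* guest program counter */
    uint32_t cpsr;       /* emulated (virtual) status register */
} vcpu_t;

enum op { OP_READ_STATUS, OP_WRITE_STATUS, OP_DISABLE_IRQ };

/* Called when the guest, running deprivileged, executes a privileged
 * (and therefore trapping) instruction. */
void emulate_sensitive(vcpu_t *vcpu, enum op op, int rd)
{
    switch (op) {
    case OP_READ_STATUS:              /* guest reads its status register */
        vcpu->regs[rd] = vcpu->cpsr;  /* return the *virtual* copy */
        break;
    case OP_WRITE_STATUS:             /* guest writes its status register */
        vcpu->cpsr = vcpu->regs[rd];  /* update virtual state only */
        break;
    case OP_DISABLE_IRQ:              /* guest masks its own interrupts */
        vcpu->cpsr |= 0x80;           /* set the virtual IRQ mask bit */
        break;
    }
    vcpu->pc += 4;                    /* skip past the emulated instruction */
}

On hardware that violates the condition, some sensitive instructions execute silently in user mode instead of trapping, so a handler like this is never reached; full virtualization then needs other mechanisms, which is precisely the setting this paper measures.
-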
Consideration About Enabling Hypervisor in Open-Source firmware
Consideration about enabling hypervisor in open-source firmware. OSFC 2019, Piotr Król (CC BY 4.0). About me: Piotr Król, founder & embedded systems consultant; open-source firmware, platform security, trusted computing; @pietrushnic, [email protected], linkedin.com/in/krolpiotr, facebook.com/piotr.krol.756859.

Agenda: introduction; terminology; hypervisors; Bareflank; hypervisor as coreboot payload; demo; issues and further work. Goal: create firmware that can start multiple applications in isolated virtual environments directly from SPI flash. Motivation: to improve virtualization and hypervisor-fu; to understand hardware capabilities and limitations in the area of virtualization; to satisfy market demand.

Terminology: virtualization is the application of the layering principle through enforced modularity, whereby the exposed virtual resource is identical to the underlying physical resource being virtualized. Layering means a single abstraction with a well-defined namespace; enforced modularity is the guarantee that clients of the layer cannot bypass it and access the abstracted resources. Examples: virtual memory, RAID. A VM is an abstraction of a complete compute environment; a hypervisor is software that manages VMs; a VMM is the portion of a hypervisor that handles CPU and memory virtualization. (Edouard Bugnion, Jason Nieh, Dan Tsafrir, Synthesis Lectures on Computer Architecture: Hardware and Software Support for Virtualization, 2017.)
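Before a payload hypervisor such as Bareflank can take over, the firmware has to confirm that the CPU exposes virtualization extensions at all. A minimal probe on x86, assuming a GCC or Clang toolchain (CPUID leaf 1, ECX bit 5 reports Intel VT-x; the surrounding program is illustrative, not from the talk):

#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1 returns the basic feature flags. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 unsupported");
        return 1;
    }
    if (ecx & (1u << 5))   /* ECX bit 5 = VMX (Intel VT-x) */
        puts("VMX available: a hypervisor payload can enter VMX root mode");
    else
        puts("no VMX: full virtualization needs software-only techniques");
    return 0;
}

The AMD equivalent (SVM) is reported by CPUID leaf 0x80000001, ECX bit 2.
-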
Implementing Production Grids William E
Implementing Production Grids. William E. Johnston, the NASA IPG Engineering Team, and the DOE Science Grid Team. Contents:
1 Introduction: Lessons Learned for Building Large-Scale Grids
2 The Grid Context
3 The Anticipated Grid Usage Model Will Determine What Gets Deployed, and When
3.1 Grid Computing Models (3.1.1 Export Existing Services; 3.1.2 Loosely Coupled Processes; 3.1.3 Workflow Managed Processes; 3.1.4 Distributed-Pipelined / Coupled Processes; 3.1.5 Tightly Coupled Processes)
3.2 Grid Data Models (3.2.1 Occasional Access to Multiple Tertiary Storage Systems; 3.2.2 Distributed Analysis of Massive Datasets Followed by Cataloguing and Archiving ...)
-
UNICORE D2.3 Platform Requirements
H2020-ICT-2018-2-825377 UNICORE. UNICORE: A Common Code Base and Toolkit for Deployment of Applications to Secure and Reliable Virtual Execution Environments. Horizon 2020 - Research and Innovation Framework Programme. D2.3 Platform Requirements - Final. Due date of deliverable: 30 September 2019. Actual submission date: 30 September 2019. Start date of project: 1 January 2019. Duration: 36 months. Lead contractor for this deliverable: Accelleran NV. Version 1.0. Confidentiality status: "Public". © UNICORE Consortium 2020.

Abstract: This is the final version of the UNICORE "Platform Requirements - Final" (D2.3) document. The original version (D2.1 Requirements) was published in April 2019. The differences between the two versions of this document are detailed in the Executive Summary. The goal of the EU-funded UNICORE project is to develop a common code-base and toolchain that will enable software developers to rapidly create secure, portable, scalable, high-performance solutions starting from existing applications. The key to this is to compile an application into very light-weight virtual machines, known as unikernels, where there is no traditional operating system, only the specific bits of operating system functionality that the application needs. The resulting unikernels can then be deployed and run on standard high-volume servers or cloud computing infrastructure. The technology developed by the project will be evaluated in a number of trials, spanning several application domains. This document describes the current state of the art in those application domains from the perspective of the project partners whose businesses encompass those domains. It then goes on to describe the specific target scenarios that will be used to evaluate the technology within each application domain, and how the success of each trial will be judged.
-
Adaptive Virtual Machine Scheduling and Migration for Embedded Real-Time Systems
Adaptive Virtual Machine Scheduling and Migration for Embedded Real-Time Systems. Stefan Groesbrink, M.Sc. A thesis submitted to the Faculty of Computer Science, Electrical Engineering and Mathematics of the University of Paderborn in partial fulfillment of the requirements for the degree of doctor rerum naturalium (Dr. rer. nat.), 2015.

Abstract: Integrated architectures consolidate multiple functions on a shared electronic control unit. They are well suited for embedded real-time systems that have to implement complex functionality under tight resource constraints. Multicore processors have the potential to provide the required computational capacity with reduced size, weight, and power consumption. The major challenges for integrated architectures are robust encapsulation (to prevent the integrated systems from corrupting each other) and resource management (to ensure that each system receives sufficient resources). Hypervisor-based virtualization is a promising integration architecture for complex embedded systems. It refers to the division of the hardware resources into multiple isolated execution environments (virtual machines), each hosting a software system of operating system and application tasks. This thesis addresses the hypervisor's management of the resource computation time, which has to enable multiple real-time systems to share a multicore processor with all of them completing their computations as demanded. State-of-the-art approaches realize the sharing of the processor by assigning exclusive processor cores or fixed processor shares to each virtual machine. For applications with a computation time demand that varies at run-time, such static solutions result in a low utilization, since the pessimistic worst-case demand has to be reserved at all times, but is often not needed.
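To make the low-utilization argument concrete, here is a small hypothetical sketch (mine, not the thesis's) contrasting a static reservation sized for worst-case demand with an adaptive scheme that re-provisions each virtual machine's budget from its measured demand plus a safety margin:

#include <stdio.h>

#define NUM_VMS 3

typedef struct {
    double worst_case;   /* worst-case demand per period (fraction of a core) */
    double measured;     /* demand actually observed in the last period */
} vm_t;

int main(void)
{
    /* hypothetical per-period demands for three consolidated VMs */
    vm_t vm[NUM_VMS] = {
        { .worst_case = 0.50, .measured = 0.20 },
        { .worst_case = 0.40, .measured = 0.15 },
        { .worst_case = 0.30, .measured = 0.25 },
    };
    const double margin = 1.2;   /* 20% headroom over measured demand */
    double static_total = 0.0, adaptive_total = 0.0;

    for (int i = 0; i < NUM_VMS; i++) {
        static_total += vm[i].worst_case;   /* reserved at all times */
        double budget = vm[i].measured * margin;
        if (budget > vm[i].worst_case)      /* never exceed the worst case */
            budget = vm[i].worst_case;
        adaptive_total += budget;
    }
    printf("static reservation: %.2f cores\n", static_total);   /* 1.20 */
    printf("adaptive budgets:   %.2f cores\n", adaptive_total); /* 0.72 */
    return 0;
}

Here the adaptive budgets sum to 0.72 of a core versus 1.20 for the static reservation; a real hypervisor would additionally need an admission test and a fallback path for when measured demand climbs back toward the worst case.
-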
DC DMV Communication Related to Reinstating Suspended Driver Licenses and Driving Privileges (As of December 10, 2018)
In accordance with District Law L22-0175, the Traffic and Parking Ticket Penalty Amendment Act of 2017, the DC Department of Motor Vehicles (DC DMV) has reinstated driver licenses and driving privileges for residents and non-residents whose credential was suspended for one of the following reasons:
- Failure to pay a moving violation;
- Failure to pay a moving violation after being found liable at a hearing; or
- Failure to appear for a hearing on a moving violation.

DC DMV is mailing notification letters to residents and non-residents affected by the law. District residents who have their driver license or learner permit, including commercial driver license (CDL), reinstated and who have two or more outstanding tickets are boot eligible. If a District resident has an unpaid moving violation in a different jurisdiction, then his or her driving privileges may still be suspended in that jurisdiction until the moving violation is paid. If the resident's driver license or CDL is not REAL ID compliant (i.e., there is a black star in the upper right-hand corner) and expired, then to renew the credential the resident will need to provide DC DMV with:
- One proof of identity;
- One proof of Social Security Number; and
- Two proofs of DC residency.

If the resident has a name change, then additional documentation, such as a marriage license, divorce order, or name change court order, is required. DC DMV only accepts the documents listed on its website at www.dmv.dc.gov.
-
Hard Real-Time Scheduling on Virtualized Embedded Multi-Core Systems
Faculty of Applied Engineering, Electronics-ICT. Hard real-time scheduling on virtualized embedded multi-core systems. Thesis submitted in fulfillment of the requirements for the degree of Doctor of Applied Engineering at the University of Antwerp, to be defended by Yorick De Bock. Supervisors: Peter Hellinckx and Jan Broeckhove. Antwerpen, 2018. Jury: chairman Prof. dr. ing. Maarten Weyn; supervisors Prof. dr. Peter Hellinckx and Prof. dr. Jan Broeckhove; members Prof. dr. Paul De Meulenaere, Prof. dr. ing. Sebastian Altmeyer (University of Amsterdam, The Netherlands) and Prof. dr. Rahul Mangharam (University of Pennsylvania, USA). Contact: Yorick De Bock, IDLab, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerpen, Belgium; M: [email protected]; T: +32 3265 1887. Copyright © 2018 Yorick De Bock. All rights reserved.

Abstract: Due to the new industrial revolution, or innovation in Cyber-Physical Systems and the Internet of Things in general, the number of embedded devices in such environments increases exponentially. Controlling this increase of computing resources requires the use of new technologies such as multi-core processors and embedded virtualization. Combining both technologies makes it possible to reduce the number of embedded systems by consolidating multiple separate systems into one powerful system. Such a system hosts multiple tasks and is in most cases under the control of an operating system. The execution order of the tasks is orchestrated by the scheduler of this operating system. The scheduler uses a scheduling algorithm to assign a priority to each task. A wide variety of scheduling algorithms exists, and it is the responsibility of the software developer to select the best fit for a specific application.
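As a concrete illustration of one such scheduling algorithm (my example, not the thesis's): under rate-monotonic scheduling, each task gets a fixed priority inversely proportional to its period, so the shortest-period task always preempts the others. A minimal sketch with hypothetical task names and periods:

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;
    unsigned period_ms;   /* shorter period => higher priority under RM */
    int priority;         /* assigned below; 0 = highest */
} task_t;

/* qsort comparator: ascending by period */
static int by_period(const void *a, const void *b)
{
    const task_t *x = a, *y = b;
    return (x->period_ms > y->period_ms) - (x->period_ms < y->period_ms);
}

int main(void)
{
    task_t tasks[] = {
        { "sensor",   10, -1 },
        { "control",  20, -1 },
        { "logging", 100, -1 },
    };
    size_t n = sizeof tasks / sizeof tasks[0];

    /* Rate-monotonic assignment: sort by period, then rank. */
    qsort(tasks, n, sizeof tasks[0], by_period);
    for (size_t i = 0; i < n; i++) {
        tasks[i].priority = (int)i;
        printf("%-8s period=%3ums priority=%d\n",
               tasks[i].name, tasks[i].period_ms, tasks[i].priority);
    }
    return 0;
}

Priority assignment alone does not guarantee that deadlines are met; a schedulability test, such as the Liu and Layland utilization bound U <= n(2^(1/n) - 1) for n periodic tasks, still has to be applied on top.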