FlexiScale Next Generation Data Centre Management

Gihan Munasinghe, Xcalibre Communications, Livingston, UK ([email protected])
Paul Anderson, School of Informatics, University of Edinburgh, UK ([email protected])

Abstract— Data centres and server farms are rapidly becoming a key infrastructure component for businesses of all sizes. However, matching the available resources to the changing level of demand is a major challenge. FlexiScale is a data centre architecture which is designed to deliver a guaranteed QoS level for the exported services. It does this by autonomically reconfiguring the infrastructure to cater for fluctuations in the demand. FlexiScale is based on virtualisation technology which provides location- and hardware-transparent services. It currently uses Virtual Iron [2] as the management platform, and a XEN-based virtualisation platform [5]. In this paper, we describe our experiences and difficulties in implementing the FlexiScale architecture. Phase I is currently in production - this provides a scalable, fault-tolerant hardware architecture. Phase II is currently at the prototype stage - this is capable of automatically adding and removing virtual servers to maintain a guaranteed QoS level.

I. INTRODUCTION

Catering for fluctuations in demand is one of the most significant problems for many business IT departments. Internet services, in particular, can be subject to very rapid and extreme changes. For example, a very small company may easily see a hundred-fold increase in its web server load following a television advertisement.

Traditionally, services will be allocated to dedicated servers, and the "solution" to this problem is simply to over-provision the hardware. But this is far from ideal – peak loads are still likely to swamp the allocated hardware, and the normal load will lead to idle machines which are expensive to own and (increasingly) to power. Of course, increasing or decreasing the resources allocated to any service involves reassigning the dedicated hardware, and perhaps even physical reallocation. Bill LeFebvre describes a good example of this [7] – on 9/11 the load on the CNN news service was such that they needed to increase their number of servers from 10 (at 08.45) to 52 (by 13.00)!

Many businesses outsource their backend data centre to companies which have the technical and infrastructure resources to manage them (Xcalibre [4] is one such company). This provides an economy of scale. In particular, the ability to share resources between different customers means that more resources can be made available to handle peak loads on any one particular service. This is the basis of "Utility computing" (http://en.wikipedia.org/wiki/Utility_computing). Conventionally, this still involves dedicated hardware – as the load increases, more dedicated machines would be allocated (from a pool of idle machines) and some mechanism used to "load-balance" between them.

Whilst this approach is a scalable solution, it suffers from a number of problems. There is still a considerable inefficiency in allocating dedicated servers – at any one time, a large percentage of the machines may be running at a very low load average. It can also take a significant time to load and reconfigure additional servers, which means that there is quite a high latency in responding to requests for increased resources.

The use of "virtual machines" is becoming a popular solution to this problem. Several virtual machines can co-exist on the same physical hardware if their resource requirements are sufficiently low. If the requirements of a VM increase, then it can be "migrated" to a different physical machine which has less contention for the resource (of course, it would also be possible to migrate one of the other VMs to make more resources available on the existing physical machine). This has clear cost benefits, as well as an ability to react more quickly to changing demands.

Phase I of the FlexiScale project provides a data centre architecture based on migrating virtual machines. This includes the virtualisation platform, shared storage, and a control infrastructure. These provide fault-tolerant virtual machines which the customer can manipulate manually, or via a well-defined API.

Phase II is intended to add autonomic capability to the production system by monitoring QoS levels and reconfiguring the system automatically to maintain the required service level.

II. UNDERLYING TECHNOLOGIES

FlexiScale is built on top of existing virtualisation and storage technologies:

A. Hardware virtualisation

Virtual machines access the physical hardware through a layer known as the hypervisor. There are two types of hypervisor:
• A Type 1 (or native, or bare-metal) hypervisor runs directly on the hardware platform. The guest operating system runs on top of the hypervisor. XEN [5] is a Type 1 hypervisor.
• A Type 2 (or hosted) hypervisor runs within an operating system environment - i.e. the guest operating system runs two levels above the hardware. VMware Server [3] (formerly known as GSX) is an example of a Type 2 hypervisor.

Currently, FlexiScale uses a XEN-based hypervisor (supplied by VI - http://www.virtualiron.com/products/open_source.cfm) to provide hardware virtualisation on Intel VT or AMD-V processors. This is a Type 1 hypervisor, chosen to provide maximum performance.

B. Migration

FlexiScale relies on the ability to perform live migration of virtual servers between physical machines (without stopping and restarting the servers).

[Fig. 1. Customer Control Panel (1)]

The FlexiScale architecture is modular and can accommodate different implementations of this functionality. Currently, we use Virtual Iron (VI) [2]. This is built on top of the XEN hypervisor and works as an external management layer for the virtual servers. A "management station" supports creation, removal, migration, and starting/stopping of virtual servers. Crucially, VI also supplies a Java API which allows the data centre to be managed programmatically. This API provides access to all the functions of the management station, including live migration.
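As an illustration of what this kind of programmatic control looks like, the minimal sketch below defines a client interface and a rebalancing call. The names (ManagementStationClient, migrate, and so on) are hypothetical stand-ins for a management-layer API of this sort, not the actual Virtual Iron classes.

    // Hypothetical stand-in for a management-station client; the method names are
    // illustrative and do not reflect the real Virtual Iron Java API.
    public interface ManagementStationClient {
        String createServer(String nodeId, int ramMb);       // returns the new server's id
        void start(String serverId);
        void stop(String serverId);
        void remove(String serverId);
        void migrate(String serverId, String targetNodeId);  // live migration between physical nodes
    }

    // Example: move a busy virtual server onto a less loaded physical node.
    final class RebalanceExample {
        static void rebalance(ManagementStationClient client, String serverId, String quietNodeId) {
            client.migrate(serverId, quietNodeId);  // the server keeps running during the move
        }
    }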
C. Storage

FlexiScale relies on a centralised storage back-end to maintain all persistent data during VM migration. This allows failed nodes to be instantly rebooted on some other physical hardware.

All stored data currently comes from a centralised SAN back-end which stores both operating system boot images and customer data. We currently use a NetApp FAS3050, which is a hybrid SAN/NAS device. This has a maximum storage capacity of 168TB spread over 336 drives. We use an active-active configuration with two heads that fail over instantly in case of a hardware fault.

This forms a "single point of failure" and needs to be extremely robust:
• The disk shelves have a passive back-plane, redundant power supplies and fans
• The shelves are dual-connected to the heads via FC connectors
• The disk shelves run in RAID-DP (or RAID 6) with a spare disk per shelf for fast rebuilds
• The heads run in active-active mode with an Infiniband interconnect that allows instant failover should one head fail
• The system is actively monitored by NetApp, with a 4hr delivery of spares
• Each NetApp head is connected to two switches, which allows for failure in the cabling or switching architecture

III. FlexiScale PHASE I

The objective of FlexiScale phase I has been to build a solid infrastructure. This provides customers with fault-tolerant virtual servers which they can manipulate manually. It is also intended to provide a solid basis for the development of phase II, which will support the autonomic migration. Phase I has been in production use since October 2007 and now supports hundreds of virtual servers.

The key component of phase I is the management station. This unifies the underlying technologies and forms the interface between the user and the infrastructure. The control panel allows the user to stop, start and reboot virtual servers, as well as reconfigure the server specifications (memory etc.). It also manages the networking and storage allocation, including VLAN configurations, DHCP access, firewalling, and disk creation in the NetApp.

When a user starts their virtual server, the management station performs a number of steps (a code sketch of this sequence follows below):
1) Find a processing node with the appropriate RAM capacity
2) Load the appropriate boot image from the SAN
3) Create a virtual server in the physical node, allocating the appropriate RAM
4) Mount the appropriate disk images from the SAN
5) Start the virtual server with the boot image

The interface to the management station is available via a SOAP API, as well as an interactive web page – see Figures 1 and 2.
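As a rough sketch of how this boot sequence could be orchestrated, the abstract class below mirrors the five steps as a template method. The helper names (findNodeWithFreeRam, loadBootImage, and so on) are assumptions for illustration only; they are not taken from the FlexiScale or Virtual Iron code.

    import java.util.List;

    // Illustrative template of the management station's boot sequence; every
    // abstract step is a hypothetical hook, not a real FlexiScale/Virtual Iron call.
    abstract class ServerBootSequence {

        public final String startVirtualServer(String bootImageName, int ramMb, List<String> diskLuns) {
            String nodeId = findNodeWithFreeRam(ramMb);      // 1) pick a node with spare RAM
            String imageId = loadBootImage(bootImageName);   // 2) fetch the boot image from the SAN
            String serverId = createServer(nodeId, ramMb);   // 3) create the VM, allocating RAM
            for (String lun : diskLuns) {
                mountDisk(serverId, lun);                    // 4) attach the SAN disk images
            }
            start(serverId, imageId);                        // 5) boot the virtual server
            return serverId;
        }

        protected abstract String findNodeWithFreeRam(int ramMb);
        protected abstract String loadBootImage(String name);
        protected abstract String createServer(String nodeId, int ramMb);
        protected abstract void mountDisk(String serverId, String lunId);
        protected abstract void start(String serverId, String imageId);
    }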
The management station also provides fault tolerance. Failure of a physical node is detected by monitoring a heartbeat, and the management station will redistribute the virtual servers from the failed node among other physical nodes. Since the data is shared via the SAN, the failure time of a service is limited to the boot time of the new instance.

The load on the physical servers is monitored, and this can be balanced by migrating virtual servers between physical nodes. Currently (phase I) this is a manual process.

A. Some difficulties

We faced a number of practical difficulties during the development of the first phase:

[Fig. 2. Customer Control Panel (2)]

1) LUN Limits: The ability to migrate any virtual machine to any physical machine is crucial. This means that every physical server needs to be capable of attaching every virtual disk (LUN) in the entire cluster. However, the Linux kernel has some limitations in its iSCSI implementation which mean that this would not be possible at the anticipated scale. We were

IV. FlexiScale PHASE II

Phase II of the FlexiScale project is intended to provide customers with a guaranteed quality of service (QoS). The response times of services will be monitored, and the allocated resources will be increased or decreased to match the level of demand.
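A minimal sketch of the kind of autonomic control loop this implies is given below: response times are sampled, compared against a QoS target, and virtual servers are added or removed. The ServicePool collaborator, the thresholds and the polling interval are illustrative assumptions, not details of the FlexiScale implementation.

    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Illustrative QoS control loop: scale out when the target is breached,
    // scale in (with a hysteresis band) when there is plenty of headroom.
    public final class QosController implements Runnable {

        /** Hypothetical collaborator representing the virtual servers behind one service. */
        public interface ServicePool {
            double averageResponseTimeMs();  // current measured QoS
            void addVirtualServer();         // scale out
            void removeVirtualServer();      // scale in
            int size();
        }

        private final ServicePool pool;
        private final double targetMs;  // QoS level promised to the customer
        private final double slackMs;   // hysteresis band to avoid oscillation

        public QosController(ServicePool pool, double targetMs, double slackMs) {
            this.pool = pool;
            this.targetMs = targetMs;
            this.slackMs = slackMs;
        }

        @Override
        public void run() {
            double observed = pool.averageResponseTimeMs();
            if (observed > targetMs) {
                pool.addVirtualServer();                                  // QoS breached: add a server
            } else if (observed < targetMs - slackMs && pool.size() > 1) {
                pool.removeVirtualServer();                               // well under target: release a server
            }
        }

        // Usage sketch: sample the service every 30 seconds (interval and thresholds are assumed values).
        public static void monitor(ServicePool pool) {
            Executors.newSingleThreadScheduledExecutor()
                     .scheduleAtFixedRate(new QosController(pool, 200.0, 80.0), 0, 30, TimeUnit.SECONDS);
        }
    }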