
THOMAS KNAUTH

ENERGY EFFICIENT CLOUD COMPUTING: TECHNIQUES & TOOLS

DISSERTATION

TECHNISCHE UNIVERSITÄT DRESDEN
FAKULTÄT INFORMATIK

Dissertation for the award of the academic degree Doktoringenieur (Dr.-Ing.)

Submitted to the Technische Universität Dresden, Fakultät Informatik
Submitted by Dipl.-Inf. Thomas Knauth, born 27 April 1983 in Hoyerswerda

Supervising professor: Prof. Dr. Christof Fetzer, Technische Universität Dresden
Reviewer: Prof. Dr. Rüdiger Kapitza, Technische Universität Braunschweig
Internal examiner: Prof. Dr. Hermann Härtig, Technische Universität Dresden

Status presentation: 19.03.2013
Submitted on: 07.10.2014
Defended on: 16.12.2014

Acknowledgements

First and foremost, I thank Prof. Christof Fetzer for providing the environment and funding to pursue my doctoral studies as part of his research group. Things did not always progress smoothly, especially in the beginning, but fortunately the final result is still something we can be a wee bit proud of.

Second, I thank Prof. Rüdiger Kapitza for serving as the thesis' external reviewer. He also had an open ear for my concerns and problems whenever I felt I needed the opinion of a senior researcher.

Third, I thank Matti Hiltunen, whom I was fortunate to meet and collaborate with during the later stages of my PhD studies. Without him, I would most likely still have no clue about Software Defined Networking and have one less poster paper in my publication record.

I also thank the motivated people who commented on countless early paper drafts: Björn Döbel, Stephan Diestelhorst, Christoph Seidl, Mario Schwalbe, Peter Okech and Lenar Yazdanov (in no particular order).

Also, I thank my fellow PhD students, old and new, from the Systems Engineering chair. We suffered through many a rejected paper, but also scored some collaborative victories. The life of a graduate student is a roller coaster, sometimes seeming an endless uphill struggle, but you guys made it much more bearable.

Further, I am deeply indebted to the poor souls who read and commented on earlier versions of this document. What is true for programming, "Given enough eyeballs, all bugs are shallow", is also true for any written work. Without your help, Robert, Dmitry, Christoph, Thordis, and Björn, the overall quality would not be what it is now.

Lastly, I thank my parents for supporting me in every possible way and my girlfriend, Thordis, for suffering through my whining when the whole thing seemed pointless and doomed again.
Contents

1 Introduction
  1.1 Motivation
  1.2 Thesis Outline
2 Background and State of the Art
  2.1 Power Saving Techniques
  2.2 Cloud Computing and Virtualization
  2.3 Background Summary and Outlook
3 Research Cluster Usage Analysis
  3.1 Introduction
  3.2 Analysis
  3.3 Conclusion
4 Online Scheduling with Known Resource Reservation Times
  4.1 Introduction
  4.2 Problem
  4.3 Model
  4.4 Schedulers
  4.5 Evaluation
  4.6 Related Work
  4.7 Discussion
  4.8 Conclusion
5 On-Demand Resource Provisioning in Cloud Environments
  5.1 Introduction
  5.2 Problem
  5.3 Architecture
  5.4 Fast Resume of Virtual Machines
  5.5 Evaluation
  5.6 Related Work
  5.7 Conclusion
6 Efficient Block-Level Synchronization of Binary Data
  6.1 Introduction
  6.2 Problem and Motivation
  6.3 Design and Implementation
  6.4 Evaluation
  6.5 Related Work
  6.6 Conclusion
7 Conclusion
  7.1 Research Cluster Usage
  7.2 Online Scheduling with Known Resource Reservation Times
  7.3 On-Demand Resource Provisioning in Cloud Environments
  7.4 Efficient Block-Level Synchronization of Binary Data
  7.5 Outlook
8 Bibliography

1 Introduction

1.1 Motivation

This thesis presents work done at the confluence of two established topics in the area of systems engineering. Hardware virtualization, the first of the two topics, became mainstream in the early 2000s when solutions to virtualize the ubiquitous x86 instruction set started to be widely available. It has since become an established component of modern day computing. With a wide array of virtualization products available from different vendors, virtualization is routinely used by small businesses and warehouse-sized computing installations alike. Cloud computing in particular, as a new model for delivering IT resources, relies heavily on fast and efficient virtualization.

The second topic, energy efficiency, started to receive attention at the start of the millennium, but became a really hot topic in the latter part of the first decade. As the demand for compute and storage capacity continues to grow, so do the size and density of data centers. Modern compute facilities routinely consume power on the order of multiple megawatts, requiring careful planning and management of their energy supply. Spurred by the general debate on climate change, people suddenly started to worry about all the megawatts consumed by these outsized IT factories.

The research question overarching all the work presented here is whether and how hardware virtualization and related technologies can be used to improve energy efficiency within today's IT landscape. Hardware virtualization makes it possible to create multiple virtual machines, all "emulated" in software, on a single physical server. Each virtual machine provides the illusion of a dedicated physical server. The ease and low cost of creating new machines have changed the way in which IT administrators manage their infrastructure. Virtualization allows multiple applications to be co-located on the same hardware while ensuring strong isolation between them. In particular the cloud computing industry, where IT resources are rented instead of owned, uses virtualization ubiquitously.
By focusing on energy efficiency aspects, we will present challenges and propose solutions in the areas of (i) virtual machine scheduling within cloud data centers, (ii) on-demand resource provisioning within cloud data centers, and (iii) efficient state synchronization as a solution to the "local state" problem [1]. The main goal of any of the techniques and tools presented here is to save energy by turning off unused servers.

[1] When powering off a physical server, the local state becomes inaccessible. Hence, before powering down the server, the state must be replicated to another location to remain available.

1.2 Thesis Outline

The thesis is divided into 7 chapters, including this introduction. The following paragraphs give a brief overview of each chapter's content.

Chapter 2 presents a brief history of energy efficiency and virtualization within the context of modern day computing. Energy efficiency has received a lot of interest from the research community and the wider public in the last couple of years. While energy efficiency has been a concern for embedded and portable computing devices for much longer, the focus on the energy efficiency of the IT industry as a whole is more recent. With more and more facets of our daily lives depending on and utilizing interconnected devices, warehouse-sized facilities are required to store and process all our digital assets. Chapter 2 introduces the factors governing the energy efficiency of modern day data centers, and highlights the role that virtualization and cloud computing play in this.

In Chapter 3 we analyze a 50-server cluster owned and operated by our research group. Based on four and a half years of usage data, we determine the cluster's overall usage and energy saving potential. While the research cluster differs from a general purpose internet-scale data center in terms of usage patterns and workloads, we find the same degree of overall low utilization as in other environments. Our results regarding the cluster's energy use and overall utilization agree with previously published reports on the huge potential to better utilize the servers and save significant amounts of energy by powering off unused servers.

In Chapter 4 we investigate the benefits of specifying the lease time upfront when requesting virtual machines in a cloud environment. Typically, the cloud customer requests computing resources from the cloud provider in the form of readily-sized CPU, memory, and disk configurations. What the cloud provider does not know, however, is the time frame for which the customer intends to use the resources. We determine whether and by how much specifying the lease time helps the cloud provider to optimize the mapping of virtual to physical resources. We are particularly interested in how co-locating VMs with similar remaining lease times reduces the number of powered-up physical machines. The goal is to power off servers frequently and for as long as possible to save energy.
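The scheduler designs themselves are introduced and evaluated in Chapter 4. Purely as an illustration of the co-location idea, the following minimal Python sketch places each incoming VM request on a powered-on host whose current leases expire closest to the new request's end time, so hosts drain together and can be switched off. All class and function names here are hypothetical and are not taken from the thesis.

    # Illustrative sketch only: a lease-time-aware placement heuristic in the
    # spirit of Chapter 4. Names and structure are hypothetical.

    class Host:
        def __init__(self, capacity):
            self.capacity = capacity      # number of VM slots on this host
            self.lease_ends = []          # absolute end times of hosted leases

        def has_room(self):
            return len(self.lease_ends) < self.capacity

        def latest_lease_end(self):
            return max(self.lease_ends) if self.lease_ends else None


    def place_vm(hosts, now, duration):
        """Place a VM whose lease ends at now + duration.

        Prefer a powered-on host whose latest lease expires closest to the
        new VM's end time, so that hosts empty out together and can be
        powered off for long, contiguous periods.
        """
        end = now + duration
        candidates = [h for h in hosts if h.has_room() and h.lease_ends]
        if candidates:
            best = min(candidates, key=lambda h: abs(h.latest_lease_end() - end))
        else:
            # No suitable powered-on host: power up an empty one.
            empty = [h for h in hosts if h.has_room()]
            if not empty:
                raise RuntimeError("cluster is full")
            best = empty[0]
        best.lease_ends.append(end)
        return best

For example, two requests with overlapping lease ends land on the same host, while a request expiring much later would rather go to a different machine:

    hosts = [Host(capacity=4) for _ in range(3)]
    place_vm(hosts, now=0, duration=60)   # powers up the first empty host
    place_vm(hosts, now=10, duration=50)  # same lease end (t=60): co-located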
Chapter 5 describes a system to automatically suspend and resume virtual machines based on their network activity. Idle virtual machines, those that have neither received nor sent any data over the network for some time, are suspended to free resources for active VMs. Suspending a VM frees up main memory, which is often the first resource to become a bottleneck in cloud environments. The automatic and explicit swapping out of VMs is combined with a fast restore mechanism: when incoming network traffic is destined for a suspended VM, the fast VM resume mechanism ensures a swift re-instantiation. The important parts of the VM's state are read prior to restarting its execution, while the less important bits are read lazily in the background later on. In addition to the swift re-activation of VMs, we also present a novel way to detect (in)activity at the network layer. Our solution utilizes Software Defined Networking (SDN), and in particular flow table entries, to determine communication activity.
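Chapter 5 presents the actual detection and suspend/resume machinery. As a rough, hypothetical illustration of the general idea behind flow-table-based activity detection, the following Python sketch periodically polls per-VM packet counters, as an SDN controller could read them from the switches' flow table statistics, and suspends a VM once its counters stop advancing for a while. The interface names, the polling loop, and the timeout value are illustrative assumptions, not the thesis' implementation.

    # Illustrative sketch only: detecting idle VMs from flow-table statistics.
    # The controller/switch interface is abstracted behind get_packet_count();
    # names and thresholds are hypothetical, not the system built in Chapter 5.

    import time

    IDLE_TIMEOUT = 300   # seconds without new packets before a VM counts as idle


    def monitor(vms, get_packet_count, suspend_vm, poll_interval=30):
        """Poll per-VM flow counters and suspend VMs whose traffic has stopped.

        vms              -- iterable of VM identifiers
        get_packet_count -- callable(vm) -> cumulative packet count matched by
                            the VM's flow table entries (e.g. queried from an
                            SDN controller's flow statistics)
        suspend_vm       -- callable(vm), invoked once a VM is deemed idle
        """
        last_count = {vm: get_packet_count(vm) for vm in vms}
        last_active = {vm: time.time() for vm in vms}
        suspended = set()

        while True:
            time.sleep(poll_interval)
            now = time.time()
            for vm in vms:
                count = get_packet_count(vm)
                if count != last_count[vm]:
                    # Counters advanced: the VM saw traffic since the last
                    # poll, so it is (again) considered active.
                    last_count[vm] = count
                    last_active[vm] = now
                    suspended.discard(vm)
                elif vm not in suspended and now - last_active[vm] >= IDLE_TIMEOUT:
                    suspend_vm(vm)
                    suspended.add(vm)

In the full system, resuming happens on the reverse path: traffic arriving for a suspended VM triggers the fast resume mechanism described above before the packets are delivered.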