Addressing Shared Resource Contention in Datacenter Servers

ADDRESSING SHARED RESOURCE CONTENTION IN DATACENTER SERVERS

by Sergey Blagodurov
Diploma with Honors, Moscow Engineering Physics Institute (State University), 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in the School of Computing Science, Faculty of Applied Sciences

© Sergey Blagodurov 2013
SIMON FRASER UNIVERSITY
Summer 2013

All rights reserved. However, in accordance with the Copyright Act of Canada, this work may be reproduced without authorization under the conditions for “Fair Dealing.” Therefore, limited reproduction of this work for the purposes of private study, research, criticism, review and news reporting is likely to be in accordance with the law, particularly if cited appropriately.

APPROVAL

Name: Sergey Blagodurov
Degree: Doctor of Philosophy
Title of Thesis: Addressing Shared Resource Contention in Datacenter Servers

Examining Committee:
Chair: Dr. Greg Mori, Associate Professor & Associate Director
Senior Supervisor: Dr. Alexandra Fedorova, Associate Professor
Supervisor: Dr. Jian Pei, Professor
Internal Examiner: Dr. Arrvindh Shriraman, Assistant Professor
External Examiner: Dr. Diwakar Krishnamurthy, Associate Professor, Electrical and Computer Engineering, University of Calgary

Date Defended/Approved:

Abstract

Servers are major energy consumers in modern datacenters. Much of that energy is wasted because applications compete for shared resources and suffer severe performance penalties due to resource contention. Contention for shared resources remains an unsolved problem in existing datacenters despite the significant research effort dedicated to it in the past. The goal of this work is to investigate how and to what extent contention for shared resources can be mitigated via workload scheduling. Scheduling is an attractive tool because it does not require extra hardware and is relatively easy to integrate into the system.

I have designed and implemented multiple open-source and proprietary schedulers during my work on this dissertation. Most notably, I introduced the Distributed Intensity Online (DIO) scheduler to target shared resource contention in the memory hierarchy of Uniform Memory Access (UMA) systems, followed by the Distributed Intensity NUMA Online (DINO) scheduler, which I designed to improve performance and decrease power consumption on Non-Uniform Memory Access (NUMA) servers. As part of my internship at HP Labs, I designed a work-conserving scheduler that prioritizes access to the multiple CPU cores of an industry-level multicore server, thus managing contention for the CPU and improving server power efficiency. Finally, the Clavis2D framework extends contention awareness to the datacenter level and provides a comprehensive cluster scheduling solution that simultaneously takes into account multiple performance- and power-related goals.

My dissertation work utilizes state-of-the-art, industry-level datacenter infrastructure, does not require any modification of or prior knowledge about the workload, and provides significant performance and energy benefits on the fly.

Keywords: Multicore processors; scheduling; shared resource contention; performance evaluation

Dedicated to the loving memory of my grandmother, Maria Pavlovna Blagodurova.

“Success is not final, failure is not fatal: it is the courage to continue that counts.”
— The name of the person who said these words goes here.
There are hundreds of websites on the Internet that attribute this quote to Sir Winston Churchill without providing any reference [14]. I liked the quotation, so I wanted to include it at the forefront of my PhD work. Satisfied by my finding, I got curious about exactly when and where Churchill said that. The direct source that attributes these words to him is a book called “The Prodigal Project: Genesis” by Ken Abraham and Daniel Hart [37]. However, Richard Langworth, a Churchill historian, states in the appendix to his book “Churchill By Himself” that, contrary to widespread belief, Churchill never actually said this [8]. According to Wikipedia [43], the above words may be a misattribution of a similar quote from Don Shula, an American football player and coach: “Success is not forever and failure isn’t fatal.” Enjoy the rest of the thesis.

Acknowledgments

Working with many wonderful people was a sine qua non of my dissertation. First and foremost, I want to mention Sasha, my doctoral advisor of five years. Plainly speaking, Sasha is the smartest, most graceful person I know! She is a wonderful advisor, caring and understanding. A PhD is notable for its stresses, so having an advisor who is attentive to your concerns helps enormously. She did a tremendous amount of work teaching me to be a productive, independent scientist. I simply cannot thank her enough for spending all those countless days with me brainstorming ideas, helping me prepare and conduct experiments, present the results, and read and write excellent scientific papers. If I had to choose again whom to work with for my PhD, I would choose her for sure! She has achieved a lot, but I am confident an even more stellar career lies ahead of her. I am very proud that my PhD is the first one that is almost entirely a product of her wonderful supervision.

For several years, I have been working with many brilliant scientists at Hewlett-Packard Laboratories, most notably with Martin Arlitt, Daniel Gmach and Cullen Bash. Martin was my first industrial mentor. He showed me the importance of thinking about the practicality of one’s research, a passion I have shared with him ever since. He later introduced me to Daniel and Cullen, both of whom were working on the same team. Together, we were involved in the Net-Zero Energy Datacenter Big Bet at HP Labs, a multi-disciplinary project that involved both engineers and computer scientists. It helped me realize just how impactful research can be when it is part of a project of such scale. For these reasons, I strongly believe that a Computer Science PhD student benefits greatly from being mentored in industry in addition to their academic endeavours.

I am also grateful to Dr. Jian Pei and Dr. Fabien Hermenier for providing helpful insights on many parts of my thesis work.

My work would be impossible without collaboration with other students. I would like to thank Daniel Shelepov, Juan Carlos Saez, Ananth Narayan, Mohammad Dashti, Tyler Dwyer, and Jessica Jiang for discussing various research topics with me and helping me evaluate and evolve my ideas. A special thank you goes to Sergey Zhuravlev, a brilliant collaborator in the early years of my PhD. We did a lot of work together and delivered some fantastic results thanks to his bright mind and keen attitude toward research.

Finally, I would like to thank Yaroslav Litus, Nasser Ghazali-Beiklar, Jay Kulkarni and Fabien Gaud for being my good friends throughout my studies.
Although we did not have a chance to work together (yet!), their emotional support nourished me a lot, helping me to get through these challenging times. Cheers to you all, this was a great ride!

Contents

Approval ii
Partial Copyright Licence iii
Abstract iv
Dedication v
Quotation vi
Acknowledgments vii
Contents ix
List of Tables xiv
List of Figures xv

I Prologue 1

1 Introduction 2
  1.1 New Challenges for Datacenter Workload Management 2
  1.2 Thesis contributions 6

II Addressing contention locally within each server 8

2 Addressing contention for memory hierarchy in UMA systems 9
  2.1 Introduction 9
  2.2 Classification Schemes 13
    2.2.1 Methodology 13
    2.2.2 The Classification Schemes 17
  2.3 Factors Causing Performance Degradation on multicore systems 23
    2.3.1 Discussion of Performance-Degrading Factors Breakdown 29
  2.4 Scheduling Algorithms 29
    2.4.1 Distributed Intensity (DI) 30
    2.4.2 Distributed Intensity Online (DIO) 34
  2.5 Evaluation on Real Systems 35
    2.5.1 Evaluation Platform 35
    2.5.2 Results for scientific workloads 36
    2.5.3 Results for the LAMP workloads 41
  2.6 Minimizing power consumption on multicore systems with resource contention 45
  2.7 Conclusions 51

3 Addressing contention for memory hierarchy in NUMA systems 54
  3.1 Introduction 54
  3.2 Why existing algorithms do not work on NUMA systems 57
    3.2.1 Quantifying causes of contention 58
    3.2.2 Why existing contention management algorithms hurt performance 61
  3.3 A Contention-Aware Scheduling Algorithm for NUMA Systems 62
    3.3.1 DI-Plain 62
    3.3.2 DI-Migrate 63
    3.3.3 DINO 66
  3.4 Memory migration 71
    3.4.1 Designing the migration strategy 71
    3.4.2 Implementation of the memory migration algorithm 72
  3.5 Evaluation 73
    3.5.1 Workloads 73
    3.5.2 Effect of K 74
    3.5.3 DINO vs. other algorithms 75
    3.5.4 Discussion 76
  3.6 Conclusions 77

4 Addressing contention for server CPU cores 78
  4.1 Introduction 78
  4.2 System overview 81
    4.2.1 Workload description 81
    4.2.2 Experimental Testbed 82
  4.3 Driving server utilization up 83
    4.3.1 Workload Collocation using Static Weights 84
    4.3.2 Impact of Critical Workload Size on Performance 86
    4.3.3 Providing Different Weights to Critical Workloads 87
    4.3.4 Impact of Kernel Version on Performance 88
    4.3.5 The Issue of Idle Power Consumption 88
    4.3.6 Pinning VCPUs of Critical Workloads 89
    4.3.7 Summary of Collocation with Static Weights 91
  4.4 Dynamic Prioritization of Critical SLAs 92
    4.4.1 Model for Dynamical CPU Weights 92
    4.4.2 Providing Fairness to Critical Workloads 93
    4.4.3 Achieving Workload Isolation with Dynamic Prioritization 94
    4.4.4 Improving Critical Performance with Dynamic Pinning 95
  4.5 Conclusion 96

III Addressing contention globally on the cluster level 97

5 Addressing cluster contention via multi-objective scheduling 98
  5.1 Introduction
