Demystifying Cloud Benchmarking

Tapti Palit, Yongming Shen, Michael Ferdman
Department of Computer Science, Stony Brook University
[email protected], [email protected], [email protected]

Abstract—The popularity of online services has grown exponentially, spurring great interest in improving server hardware and software. However, conducting research on servers has traditionally been challenging due to the complexity of setting up representative server configurations and measuring their performance. Recent work has eased the effort of benchmarking servers by making benchmarking software and benchmarking instructions readily available to the research community.

Unfortunately, the existing benchmarks are a black box; their users are expected to trust the design decisions made in the construction of these benchmarks with little justification and few cited sources. In this work, we have attempted to overcome this problem by building new server benchmarks for three popular network-intensive workloads: video streaming, web serving, and object caching. This paper documents the benchmark construction process, describes the software, and provides the resources we used to justify the design decisions that make our benchmarks representative for system-level studies.

I. INTRODUCTION

The past two decades have seen a proliferation of online services. The Internet has transitioned from being merely a useful tool to becoming a dominant part of life and culture. To support this phenomenal growth, the handful of computers that once serviced the entire online community have transformed into clouds comprising hundreds of thousands of servers in stadium-sized data centers. Servers operate around the clock, handling millions of requests and providing access to petabytes of data for users across the globe.

Driven by the constantly rising demand for more servers and larger data centers, academia and industry have directed significant efforts toward improving the state of the art in server hardware architecture and software design. However, these efforts often face a number of obstacles that arise due to the difficulty of measuring the performance of server systems. While traditional benchmarks used to measure computer performance are widely accepted and easy to use, benchmarking of server systems calls for considerably more expertise. Server software is complex to configure and operate, and requires tuning the server's operating system and server software to achieve peak performance. Whereas typical datasets for traditional benchmarks are intuitive to identify, the datasets for server systems are diverse and include not only the contents of the data, but also the frequency and the access patterns of requests to that data. Moreover, while the performance of a traditional benchmark is easily defined as the time taken to complete a unit of work, quantifying the performance of a server is inherently more challenging because it must take into account the quality of service (the latency of requests) and not just the peak raw throughput.
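As a minimal illustration of this distinction, the following Python sketch measures both the aggregate throughput and the latency distribution of a simple closed-loop client. It is not part of our benchmarks; the target URL and request count are arbitrary placeholders.

    # Minimal sketch: a server is not characterized by peak throughput
    # alone, but by the latency distribution of responses under load.
    import time
    import statistics
    import urllib.request

    TARGET = "http://localhost:8080/"   # hypothetical server under test
    REQUESTS = 1000

    latencies = []
    start = time.monotonic()
    for _ in range(REQUESTS):
        t0 = time.monotonic()
        with urllib.request.urlopen(TARGET) as resp:
            resp.read()                 # consume the full response body
        latencies.append(time.monotonic() - t0)
    elapsed = time.monotonic() - start

    latencies.sort()
    print(f"throughput:      {REQUESTS / elapsed:.1f} req/s")
    print(f"median latency:  {statistics.median(latencies) * 1000:.2f} ms")
    print(f"99th percentile: {latencies[int(0.99 * len(latencies))] * 1000:.2f} ms")

A production-grade load generator issues requests in an open loop, at a controlled arrival rate, rather than waiting for each response before sending the next; the sketch merely shows why throughput and latency percentiles must be reported together.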
Several recent projects [1], [2] have begun to address the challenges of benchmarking servers faced by the research community. These efforts have made great strides in identifying the relevant server workloads and codifying performance measurement methodologies. They collected the necessary benchmarking tools and have been disseminating instructions for setting up server systems under test, configuring software that simulates client requests, and generating synthetic data sets and statistical distributions of request patterns. As a result, the effort needed for a typical researcher to benchmark server systems has gone down drastically.

Unfortunately, while acknowledging their benefits, we identified a core drawback of the existing server benchmarking tools. Although they provide the software and installation directions, the existing benchmark suites do not readily provide justification for the myriad decisions made in the construction of the benchmarks. The existing benchmark tools are essentially a black box; users of these tools are called upon to implicitly trust the decisions that were made in their design. The design choices for the existing benchmarks, and the justifications for these choices, are not clearly cited and therefore do not allow users to judge the actual relevance of these benchmarks. Moreover, when some of the design choices become dated, a revision of the benchmarks to make them representative of the new server environment becomes necessary. However, in the existing benchmarks, these decisions remain hidden without deep investigation of the tools.

This paper chronicles our experience in setting up new server benchmarks with explicitly justified design choices. In this work, we concentrate on benchmarking three of the most popular network-intensive online applications:

• Video Streaming services, which dominate Internet network traffic [3].
• Web Serving of dynamic Web2.0 pages, performed by the most popular websites in the world [4].
• Object Caching of data, used extensively in all popular cloud services [5].

To the extent practical, we document the tools that we use, leveraging prior work and expertise. Wherever possible, we explicitly describe the decisions that were made in the construction of our benchmarks and cite the motivation and the sources that led to those decisions. Finally, we have integrated our benchmarks with CloudSuite [1] and led the re-engineering of all benchmarks in CloudSuite 3 using Docker [6], a mechanism that makes it easy to use the benchmarks and documents all of the configuration choices in clear machine-readable form.
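To make the deployment model concrete, the sketch below drives the Docker CLI from Python to stand up a server container and a load-generating client container on a shared network. The image names are hypothetical placeholders rather than the actual CloudSuite 3 image names; the point is that every configuration choice is captured in the images and these commands, not in manual setup steps.

    # Sketch of Docker-based benchmark deployment; image names are
    # hypothetical placeholders, not the published CloudSuite 3 images.
    import subprocess

    def run(*args: str) -> None:
        # Echo and execute a docker CLI command, failing loudly on error.
        print("$", " ".join(args))
        subprocess.run(args, check=True)

    # Create an isolated bridge network so the server and client
    # containers can reach each other by name.
    run("docker", "network", "create", "bench_net")

    # Start the server under test, then the load-generating client.
    run("docker", "run", "-d", "--name", "server",
        "--net", "bench_net", "example/web-serving-server")
    run("docker", "run", "--rm", "--net", "bench_net",
        "example/web-serving-client", "server")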
The rest of this paper is organized as follows. We motivate replacing the existing benchmarks in Section II and describe the key considerations in server benchmark design in Section III. In Section IV, we describe our benchmarks and detail the design decisions we made in their creation. Section V presents an overview of our Docker-based benchmark deployment setup. We present several surprising results and compare our benchmarks to prior work in Section VI. We review the related work in Section VII and conclude in Section VIII.

II. MOTIVATING THE CHANGE

We begin the presentation of our workloads by motivating our decision to create new benchmarks. After using the existing CloudSuite benchmarks [1] in several studies, we realized that a number of decisions made in these benchmarks do not match the current state of practice. These realizations initially motivated us to fix some of the problems we observed. However, after fixing a number of problems with the Web Serving and Data Caching benchmarks, we realized that it was easier to use the knowledge we gained in this process (and some of the existing tools) to re-implement the benchmarks, forcing us through the process of considering many of the implicit design decisions along the way.

It is worth noting that the focus of our new benchmarks is system-level evaluation and performance measurement. Micro-architectural study of the CloudSuite workloads demonstrated that vastly different cloud workloads have similar micro-architectural characteristics [1]. In this work, we target making the system-level behavior representative of real-world behavior, while the micro-architectural behavior is expected to remain largely similar to that of the original CloudSuite workloads.

A. Video Streaming

Because the video streaming workload from CloudSuite [1] was built for RTSP, both the server and the client infrastructure of the benchmark needed to be replaced to make the workload representative of modern video-streaming services.

B. Web Serving

In older-generation websites, the user's interaction consisted primarily of reading content and periodically clicking on URLs that replaced the content of the entire page. Although submission of web forms was supported, this was a relatively rare operation, and the submitted data rarely, if at all, affected the website content, both for the user submitting the form and for other users. On the other hand, Web2.0 websites are more responsive and provide a richer user experience, similar to GUI-style applications. These websites cause the browser to dynamically fetch small chunks of data and integrate them into parts of the webpage, instead of loading the entire webpage content at once. Because of this, the amount of data transferred per request is significantly smaller than in older web applications, where the whole page had to be reloaded for each request.

The CloudSuite [1] Web Serving workload was intended to represent the new Web2.0 paradigm. However, the Olio project [10] used for this benchmark was written as a benchmark and was never used in a production environment. In fact, a large fraction of the requests made in this benchmark were requests for static images, a task that is commonly outsourced to content distribution networks (CDNs) in production systems. Moreover, the fraction of dynamic requests in the benchmark corresponded to the situation circa 2008. We replaced the benchmark website with a more representative production software package, with a larger fraction of small dynamic requests, which aligns more closely with modern practices. Importantly, while we deemed the server-side software inappropriate, the Faban [11] framework used to emulate web clients in the original CloudSuite benchmark is a good fit, and we leveraged it in the creation of our own web serving benchmark.
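As a rough sketch of the traffic pattern this shift implies, the following Python fragment contrasts one full-page reload with a series of small dynamic-fragment requests, reporting the bytes transferred by each. All URLs are hypothetical placeholders and do not correspond to the endpoints of our benchmark.

    # Illustrative sketch of the Web2.0 traffic shift: many small dynamic
    # requests instead of whole-page reloads. URLs are placeholders.
    import urllib.request

    BASE = "http://localhost:8080"      # hypothetical Web2.0 server

    def fetch_size(path: str) -> int:
        # Return the number of body bytes transferred for one request.
        with urllib.request.urlopen(BASE + path) as resp:
            return len(resp.read())

    # Old-style interaction: every click reloads the entire page.
    full_page = fetch_size("/home")

    # Web2.0-style interaction: the page stays loaded and the browser
    # fetches small dynamic fragments (e.g., via XMLHttpRequest).
    fragments = [fetch_size(p) for p in
                 ("/api/feed?offset=0", "/api/notifications", "/api/chat/poll")]

    print(f"full page reload:     {full_page} bytes")
    print(f"avg dynamic fragment: {sum(fragments) / len(fragments):.0f} bytes")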
