Lessons Learnt from Software Tuning of a Memcached-Backed, Multi-Tier, Web Cloud Application

Muhammad Wajahat, Salman Masood, Abhinav Sau, Anshul Gandhi
Stony Brook University
{mwajahat, smasood, asau, [email protected]

Abstract—Cloud computing has largely replaced dedicated physical computing systems by providing critical features such as elasticity and on-demand access to resources. However, despite its many benefits, the cloud does have its limitations, such as limited or no control over the hardware and limited customization options. Users who deploy applications on the cloud only have control over software tuning and optimizations since the infrastructure is managed by the provider.

In this paper, we analyze cloud-deployed Web applications that are multi-tiered and employ Memcached as the object caching layer. Memcached is a high-performance memory caching system and, if there are no other bottlenecks in the system, the overall application performance should be dictated by Memcached. However, we show that other components of the system, such as web servers, load balancers, and some underlying system configurations, severely impact application performance. We analyze these components and provide guidelines on their implementation and parameter tuning to minimize resource waste in the cloud.

1. Introduction

In the past decade, cloud computing has replaced conventional dedicated computing systems by providing on-demand access to economical resources and services, including Virtual Machines (VMs). This has allowed companies, especially online service providers, to avoid upfront investment in infrastructure and instead focus on their applications. Despite its many benefits, however, cloud computing does have its limitations. Tenants have limited or no control over the underlying hardware and its customization options, which are managed by the cloud service provider. Consequently, users who deploy applications on the cloud have to rely on software tuning options to maximize performance. Further, since VM placement is handled by the provider, users have to carefully deploy multi-tier applications to avoid bottlenecks at individual tiers that might impact the entire service chain. Without proper software tuning and bottleneck mitigation, tenants have to resort to capacity overprovisioning to meet their performance needs; such overprovisioning lowers resource usage efficiency.

Consider the popular object caching service, Memcached [1], which is widely used by online service providers (e.g., Facebook [2], [3] and Wikipedia [4]) to cache the results of database queries or API calls in memory and increase application throughput. Prior work on improving the throughput of Memcached has focused on leveraging hardware solutions (e.g., RDMA [5], GPUs [6], etc.), which are infeasible for cloud users. Further, Memcached is usually deployed as part of a service chain, such as a multi-tiered web application. We know that "a chain is only as strong as the weakest link"; while there has been a lot of research on optimizing Memcached performance (see Section 2.3), it is also critical to ensure that other links in the chain, such as web server-Memcached and load balancer-web server, are not a bottleneck.

The primary goal of this work is to investigate techniques for optimal design and configuration of a Memcached-backed multi-tiered web application deployed in the cloud. Specifically, we ask "How can we maximize the throughput of a Memcached-backed cloud application?"

To address this question, we explore software tuning and programming models that can be easily leveraged by cloud users. We first set up a customizable multi-tier web application complete with a load generator, load balancer, web servers, Memcached servers, and a database (Section 3). Next, we study the application performance and investigate software tuning and communication models at relevant components to mitigate bottlenecks and maximize throughput. We use extensive experimental evaluation to find the best possible configuration of the components. We also switch from synchronous to asynchronous components, one by one from upstream to downstream tiers, and evaluate the resulting performance improvements. To emulate real-world scenarios as closely as possible, we use the value size and popularity distributions reported by Facebook [3] (a generic sketch of such a load generator is given at the end of Section 2). We evaluate Apache (synchronous) vs. Nginx (asynchronous) for the load balancer and web server tiers, and optimize their respective configuration options.

Our experiments reveal that, with default configuration values, the other components in a Memcached service chain can significantly limit end-to-end throughput. Even after optimization, our peak application throughput is only about half of what Memcached can achieve by itself, without other tiers in the service chain. Next, to optimize the service chain, we find that it is typically enough to consider a couple of tuning parameters at each component. Finally, when properly optimized, asynchronous components are not always superior to synchronous components; the choice between them is non-trivial and depends on the number of servers in each tier of the chain.

The rest of the paper is organized as follows. We discuss relevant background and prior work on Memcached and multi-tier Memcached applications in Section 2. We then describe our multi-tier Memcached-backed cloud web application in Section 3. Experimental results detailing our tuning and implementation efforts are described in Section 4. Finally, we conclude in Section 5.

2. Background and Prior Work

2.1. Memcached overview

Memcached [1] is a distributed in-memory caching system that efficiently scales to large memory capacities via multiple nodes due to its simple design. Memcached is a key-value (KV) store that typically sits in front of the database tier. Clients request data from the Memcached tier via a client-side library, such as libMemcached. Typical library functions include KV read (get) and write (set). The library hashes the requested key and determines which Memcached node is responsible for caching the associated KV pair. In case of a read request, the KV pair is fetched from the faster (memory access) Memcached node, if cached. Otherwise, the client library can decide to request the KV pair from the slower (disk access) database, and optionally insert the retrieved pair into Memcached. Write requests proceed similarly; the client can choose to additionally write the KV pair to the database. Note that the client library, and not Memcached, determines which node to contact (an illustrative sketch of this read path appears at the end of Section 2).

A single Memcached node can provide substantial throughput. In our experimental setup, a Memcached node deployed on a modestly-sized VM in OpenStack can provide in excess of a million operations/second (see Section 3).

2.2. Memcached-backed applications

In production systems, including Facebook [2], [3], Memcached is typically deployed as part of a multi-tier system, caching popular data in memory (DRAM). In a cloud-deployed setting, each server would be hosted on a VM. Note that the database is connected to the web servers and not to Memcached; this is because data is requested via the client library at the web servers.

[Figure 1. Illustration of a multi-tier Memcached-backed application: arriving jobs pass through a load balancer to the web servers, which query the Memcached servers and the database.]

In this distributed setting, the end-to-end throughput of the application could be limited by the throughput at each of the components, and not just Memcached. For example, if the load balancer (LB) cannot sustain high request rates, then it becomes the bottleneck. Since Memcached can typically provide high throughput, it is important to carefully eliminate performance bottlenecks at other components to realize high end-to-end application throughput. If not, application owners may resort to expensive overprovisioning of VMs at non-Memcached tiers to increase throughput, leading to lower resource and energy efficiency at the data center level.

2.3. Prior work on Memcached optimization

There is much prior work on improving the performance of a single Memcached node. Most of the improvements have been realized by mitigating network bottlenecks or addressing the shared lock in the caching system, e.g., CPHash [7] (concurrent hash table and message passing) and MemC3 [8] (smarter hashing and locking mechanisms).

Recently, efforts have been made to analyze and optimize the end-to-end performance of Memcached-backed applications. Atikoglu et al. [3] analyze Facebook's Memcached deployment; their analysis reveals that Facebook's Memcached workload is read-heavy (30:1 read/write ratio) and has a moderate hit rate of 81.4% despite power-law distributed request popularity. Hart et al. [9] study the impact of Memcached on the overall site performance of Facebook by creating a Memcached performance model; the model is then used to predict throughput on sequential and parallel architectures with high accuracy. Li et al. [10] analyze the underlying causes of high tail latency in unoptimized server systems running Nginx and Memcached. They first find a theoretical baseline for performance using queueing-theoretic models, and then, upon finding the actual latency distributions to be much higher than the theoretical baseline, systematically identify and quantify the problem sources.

Several of the above efforts require hardware changes that are infeasible for cloud tenants. As such, it becomes critical to explore software techniques to improve end-to-end performance by focusing on all components of the Memcached-backed
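The read path described in Section 2.1 can be summarized with a short client-side sketch. The code below is a minimal illustration only: it uses the Python pymemcache client rather than libMemcached, a simple hash-modulo key-to-node mapping rather than the distribution scheme a production library would use, and a placeholder query_database() standing in for the database tier; the node addresses are hypothetical.

import hashlib
from pymemcache.client.base import Client

# Hypothetical Memcached VMs; addresses are placeholders, not the paper's setup.
MEMCACHED_NODES = [("10.0.0.11", 11211), ("10.0.0.12", 11211)]
clients = [Client(node) for node in MEMCACHED_NODES]

def node_for(key):
    # The client library, not Memcached, decides which node owns a key.
    # Hash-modulo mapping is a simplification of what libMemcached does.
    digest = hashlib.md5(key.encode()).hexdigest()
    return clients[int(digest, 16) % len(clients)]

def query_database(key):
    # Placeholder for the slower, disk-backed database tier.
    raise NotImplementedError("fetch the value for 'key' from the database")

def cached_get(key):
    client = node_for(key)
    value = client.get(key)      # fast path: served from DRAM on a hit
    if value is None:            # miss: fall back to the database ...
        value = query_database(key)
        client.set(key, value)   # ... and optionally re-populate the cache
    return value

The essential point, as noted in Section 2.1, is that the key-to-node mapping and the decision to fall back to the database and re-insert the value both live entirely in the client library at the web servers, not in Memcached itself.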

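Section 1 notes that the evaluation emulates the value size and popularity distributions reported for Facebook's workload [3]. The sketch below shows one generic way a load generator can draw requests from a bounded power-law (Zipf-like) popularity distribution over a fixed key space; the key-space size, Zipf exponent, and log-normal value-size parameters are illustrative placeholders, not the values measured in [3].

import numpy as np

NUM_KEYS = 100_000          # size of the key space (illustrative)
ZIPF_EXPONENT = 1.01        # skew of the popularity curve (assumed, not from [3])
rng = np.random.default_rng(42)

# Bounded power-law (Zipf-like) popularity over a fixed key space.
ranks = np.arange(1, NUM_KEYS + 1)
popularity = 1.0 / ranks ** ZIPF_EXPONENT
popularity /= popularity.sum()

def next_request():
    """Return a (key, value_size_in_bytes) pair for the load generator."""
    key_id = rng.choice(NUM_KEYS, p=popularity)                    # popular keys dominate
    value_size = max(1, int(rng.lognormal(mean=5.0, sigma=1.5)))   # assumed size model
    return "key-%d" % key_id, value_size

if __name__ == "__main__":
    print([next_request() for _ in range(5)])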