Nessie: A Decoupled, Client-Driven Key-Value Store Using RDMA

Benjamin Cassell*, Tyler Szepesi*, Bernard Wong*, Tim Brecht*, Jonathan Ma*, Xiaoyi Liu*
{becassel, stszepes, bernard, brecht, jonathan.ma, x298liu}@uwaterloo.ca
* Cheriton School of Computer Science, University of Waterloo

To appear in: IEEE Transactions on Parallel and Distributed Systems

Abstract—Key-value storage systems are an integral part of many data centre applications, but as demand increases, so does the need for high performance. This has motivated new designs that use Remote Direct Memory Access (RDMA) to reduce communication overhead. Current RDMA-enabled key-value stores (RKVSes) target workloads involving small values, running on dedicated servers on which no other applications are running. Outside of these domains, however, other RKVS designs may provide better performance. In this paper, we introduce Nessie, an RKVS that is fully client-driven, meaning no server process is involved in servicing requests. Nessie also decouples its index and storage data structures, allowing indices and data to be placed on different servers. This flexibility can decrease the number of network operations required to service a request. These design elements make Nessie well-suited for a different set of workloads than existing RKVSes. Compared to a server-driven RKVS, Nessie more than doubles system throughput when there is CPU contention on the server, improves throughput by 70% for PUT-oriented workloads when data value sizes are 128 KB or larger, and reduces power consumption by 18% at 80% system utilization and 41% at 20% system utilization compared with idle power consumption.

Index Terms—Computer networks, data storage systems, distributed computing, distributed systems, key-value stores, RDMA

1 INTRODUCTION

By eliminating slow disk and SSD accesses, distributed in-memory storage systems, such as memcached [17], Redis [6], and Tachyon [24], can reduce an application's request service time by more than an order of magnitude [27]. In the absence of disks and SSDs, however, an operating system's network stack is often the next largest source of application latency. With network operations taking tens or hundreds of microseconds to complete, the performance bottleneck of large-scale applications has shifted, creating a need for better networking solutions.

The drive for increasing performance has led recent in-memory storage systems to use Remote Direct Memory Access (RDMA) [14], [21], [26], [30]. RDMA enables direct access to memory on a remote node, and provides additional performance benefits such as kernel bypass and zero-copy data transfers. These RDMA-based systems are able to provide lower latencies and higher throughput than systems using TCP or UDP.

Although existing RDMA-enabled key-value stores (RKVSes) perform well on workloads involving small data sizes and dedicated servers, we have identified three environments for which these designs are not well-suited. The first environment includes those where CPUs cannot be dedicated for exclusive use by the RKVS. This includes applications where it is advantageous for distributed computation and storage to be integrated into the same node, or where physical CPUs are shared by multiple virtual machines. Existing RKVSes achieve low latency by using dedicated CPUs on the server to poll on incoming requests [14], [21]. We refer to these systems, in which client requests are forwarded to a dedicated server thread for processing, as server-driven. This approach is susceptible to performance degradation in environments with multiplexed resources, as an increase in CPU usage from a co-resident application or VM can cause context switching that delays the processing of requests, and significantly reduces the throughput of the RKVS. This delay is amplified when a server thread is preempted, preventing progress for all clients communicating with that server thread.

The second environment that current designs have not focused on comprises those where the RKVS is required to store large data values. Existing RKVSes use designs that couple indexing and data storage, resulting in rigid data placement and unavoidable data transfers for PUTs [11]. As data value sizes grow, network bandwidth becomes a bottleneck for performance. This presents issues for a variety of workloads, for example when using an RKVS as a memory manager for a distributed computation framework such as Apache Spark [31], which manages data units that are hundreds of megabytes in size. When indexing and data storage are coupled, writing new data of this size results in significant overhead as the data is sent over the network to the node responsible for its key.

The third environment includes those where reducing energy consumption is important. As previously explained, server-driven RKVSes use polling threads alongside RDMA to decrease operation latency. With the current speed of network interfaces, however, a single polling thread per machine is often insufficient to saturate the link. Instead, these RKVSes must dedicate multiple polling threads to handling requests. While this is useful when operating at peak load, polling during off-peak periods is energy-inefficient. In some parallelized data centre applications, it is possible to consolidate resources by shutting down VMs during periods of inactivity. For a big-memory storage system, however, this means frequently reorganizing hundreds or possibly thousands of gigabytes of data.

In this paper, we present the design of Nessie, a new RKVS that addresses some of the limitations of current RKVSes. Nessie is fully client-driven, meaning requests are satisfied entirely by the client using one-sided RDMA operations that do not involve the server process' CPU. One-sided reads and writes allow Nessie to eliminate the polling that is essential for providing high throughput in server-driven RKVSes. This makes Nessie less susceptible to performance degradation caused by CPU interference in shared-CPU environments such as the cloud. Furthermore, Nessie only consumes CPU resources when clients are actively making requests in the system, reducing Nessie's power consumption during non-peak loads. Finally, Nessie's decoupled indexing and data storage allows indices to refer to data stored on different nodes in the system. This enables more flexible data placement, for instance local writes, and can benefit write-heavy workloads by exploiting locality and reducing the volume of remote operations.

This paper makes three main contributions:

• We describe the design of Nessie, and show how its decoupled, fully client-driven design allows it to operate without server-side interaction.
• We evaluate a prototype of Nessie. For comparison purposes, we also evaluate NessieSD, a server-driven version of Nessie that uses polling, similar to HERD [21]. We furthermore evaluate NessieHY, a hybrid version of Nessie that uses client-driven GETs and server-driven PUTs, similar to FaRM [14].
• We identify three problematic workloads for existing RKVSes: workloads in shared CPU environments, workloads with large data values, and workloads where energy consumption is important. We show that Nessie performs significantly better than server-driven approaches for these workloads.

2 BACKGROUND AND RELATED WORK

This work is a follow-up to our previous position paper [29]. We extend our preliminary design and introduce new mechanisms and performance optimizations. Additionally, we provide a full system implementation and evaluation. In this section, we provide an overview of RDMA technologies and survey other in-memory key-value stores.

2.1 RDMA

[…] even be used in virtualized environments. RDMA inside a VM has been shown, for larger data transfer sizes like those we focus on, to provide performance comparable to that of using RDMA directly on hardware [13].

Communication over RDMA is performed using the verbs interface, consisting of one-sided and two-sided verbs. Two-sided verbs follow a message-based model where one process sends a message using the SEND verb and the other process receives the message using a RECV verb. This exchange involves CPU interaction on both the sending and receiving sides. The contents of the sent message are passed directly into an address in the receiving application's memory, specified by the RECV verb.

One-sided verbs allow direct access to pre-designated regions of a remote server's address space, without interacting with the remote CPU. The READ and WRITE verbs can be used to retrieve and modify the contents of these regions, respectively. RDMA also provides two atomic one-sided verbs: compare-and-swap (CAS) and fetch-and-add (FAA). These verbs both operate on 64 bits of data, and are atomic relative to other RDMA operations on the same NIC.
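As a concrete illustration of these verbs (a sketch, not code from Nessie), a client might post a one-sided READ and a CAS through the libibverbs C API roughly as follows. It assumes a connected queue pair and memory registrations, i.e. the lkey/rkey values and the remote address, exchanged out of band during setup; the function names are ours.

/* Sketch: posting one-sided RDMA verbs with libibverbs.
 * Assumes an established, connected queue pair (qp), a registered
 * local buffer (buf/lkey), and a remote region (remote_addr/rkey)
 * learned out of band. Illustrative only; error handling is minimal. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Read `len` bytes from the remote region into `buf` without
 * involving the remote CPU. */
static int post_rdma_read(struct ibv_qp *qp, void *buf, uint32_t len,
                          uint32_t lkey, uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t) buf,
        .length = len,
        .lkey   = lkey,
    };
    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_READ;
    wr.send_flags          = IBV_SEND_SIGNALED; /* request a completion event */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;
    return ibv_post_send(qp, &wr, &bad_wr);
}

/* Atomically replace the remote 64-bit word with `swap` if it still
 * equals `expected`; the previous value is written into `result_buf`. */
static int post_rdma_cas(struct ibv_qp *qp, uint64_t *result_buf,
                         uint32_t lkey, uint64_t remote_addr, uint32_t rkey,
                         uint64_t expected, uint64_t swap)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t) result_buf,
        .length = sizeof(uint64_t),
        .lkey   = lkey,
    };
    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id                 = 2;
    wr.sg_list               = &sge;
    wr.num_sge               = 1;
    wr.opcode                = IBV_WR_ATOMIC_CMP_AND_SWP;
    wr.send_flags            = IBV_SEND_SIGNALED;
    wr.wr.atomic.remote_addr = remote_addr; /* must be 8-byte aligned */
    wr.wr.atomic.compare_add = expected;
    wr.wr.atomic.swap        = swap;
    wr.wr.atomic.rkey        = rkey;
    return ibv_post_send(qp, &wr, &bad_wr);
}

Note that, as stated above, the CAS here is atomic only relative to other RDMA operations on the same NIC, not to concurrent CPU instructions on the remote host.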
RDMA supports an asynchronous programming model in which verbs are posted to a send and receive queue pair. By default, operations post an event to a completion queue when they have finished. An application may block or poll on this queue while waiting for operations to complete. Alternatively, as noted by the authors of FaRM [14], most RDMA interfaces guarantee that an operation's bytes are read or written in address order. Applications may take advantage of this fact to poll on the last byte of a region being used to service RDMA operations, as once the last byte has been updated, the operation is complete.
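Both completion-detection styles can be sketched as follows, again assuming established verbs objects. The names wait_on_cq and wait_on_last_byte are ours, and the zeroed-buffer/non-zero-sentinel convention is an assumption of this sketch rather than a detail taken from any particular system.

/* Sketch: two ways to detect completion of RDMA operations.
 * Illustrative only; assumes the setup described in the text. */
#include <infiniband/verbs.h>
#include <stdint.h>

/* (1) Busy-poll the completion queue until one work completion arrives. */
static int wait_on_cq(struct ibv_cq *cq)
{
    struct ibv_wc wc;
    int n;
    do {
        n = ibv_poll_cq(cq, 1, &wc);   /* non-blocking; returns 0 if empty */
    } while (n == 0);
    if (n < 0 || wc.status != IBV_WC_SUCCESS)
        return -1;                     /* poll error or failed operation */
    return 0;
}

/* (2) FaRM-style detection: because the bytes of a WRITE arrive in
 * address order, a writer can place a known non-zero sentinel in the
 * last byte of the message. The receiver spins on that byte; once it
 * changes, the entire message has landed. The receiving buffer must
 * be zeroed beforehand for this convention to work. */
static void wait_on_last_byte(volatile uint8_t *buf, uint32_t len)
{
    while (buf[len - 1] == 0)
        ;  /* spin until the final byte is overwritten */
}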
2.2 Key-Value Stores

Key-value stores (KVSes) are increasingly popular storage solutions that map input identifiers (keys) to stored content (values). They are frequently employed as tools in application infrastructure and as components of other storage systems [26]. Most KVSes provide an interface consisting of GET, PUT, and DELETE operations. GET accepts a key and obtains its associated value. PUT inserts or updates the value for a key, and DELETE removes a key and any associated values from the store.

Some KVSes, such as Redis [6], provide persistent storage, while others, including memcached [17], are used for lossy caching.
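Rendered as a minimal C API, the GET/PUT/DELETE interface described above might look as follows; the names and signatures are illustrative only and do not correspond to Nessie's implementation.

/* Sketch: the canonical KVS interface, expressed as a minimal C API.
 * Hypothetical names and signatures, for illustration only. */
#include <stddef.h>
#include <stdbool.h>

struct kvs;  /* opaque handle to a key-value store instance */

/* Retrieve the value for `key`; returns false if the key is absent. */
bool kvs_get(struct kvs *store, const char *key,
             void *value_out, size_t *value_len);

/* Insert a new key, or update the value of an existing one. */
bool kvs_put(struct kvs *store, const char *key,
             const void *value, size_t value_len);

/* Remove a key and any associated value from the store. */
bool kvs_delete(struct kvs *store, const char *key);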
