Taming the Killer Microsecond

Shenghsun Cho, Amoghavarsha Suresh, Tapti Palit, Michael Ferdman, and Nima Honarmand
Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
{shencho, amsuresh, tpalit, mferdman, [email protected]

Abstract—Modern applications require access to vast datasets at low latencies. Emerging memory technologies can enable faster access to significantly larger volumes of data than what is possible today. However, these memory technologies have a significant caveat: their random access latency falls in a range that cannot be effectively hidden using current hardware and software latency-hiding techniques—namely, the microsecond range. Finding the root cause of this "Killer Microsecond" problem is the subject of this work. Our goal is to answer the critical question of why existing hardware and software cannot hide microsecond-level latencies, and whether drastic changes to existing platforms are necessary to utilize microsecond-latency devices effectively. We use an FPGA-based microsecond-latency device emulator, a carefully-crafted microbenchmark, and three open-source data-intensive applications to show that existing systems are indeed incapable of effectively hiding such latencies. However, after uncovering the root causes of the problem, we show that simple changes to existing systems are sufficient to support microsecond-latency devices. In particular, we show that by replacing on-demand memory accesses with prefetch requests followed by fast user-mode context switches (to increase access-level parallelism), and by enlarging the hardware queues that track in-flight accesses (to accommodate many parallel accesses), conventional architectures can effectively hide microsecond-level latencies and approach the performance of DRAM-based implementations of the same applications. In other words, we show that successful usage of microsecond-level devices is not predicated on drastically new hardware and software architectures.

Index Terms—Killer microseconds, Emerging storage, Data-intensive applications, FPGA

I. INTRODUCTION

Accessing vast datasets is at the heart of today's computing applications. Data-intensive workloads such as web search, advertising, machine translation, data-driven science, and financial analytics have created unprecedented demand for fast access to vast amounts of data, drastically changing the storage landscape in modern computing. The need to quickly access large datasets in a cost-effective fashion is driving innovation across the computing stack, from the physics and architecture of storage hardware, all the way to the system software, libraries, and applications.

The traditional challenge in accessing data is the increased latency as the data size grows. This trend is observed at virtually all levels of storage, whether the data reside in on-chip storage, main memory, disk, or remote servers. Therefore, architects have designed a variety of mechanisms to hide data access latency by amortizing it across bulk transfers, caching hot data in fast storage, and overlapping data transfer with independent computation or data access.

Current latency-hiding mechanisms and storage interfaces were designed to deal with either nanosecond-level fine-grained accesses (e.g., on-chip caches and DRAM) or millisecond-level bulk-access devices (e.g., spinning disks). Hardware techniques, such as multi-level on-chip caches, prefetching, superscalar and out-of-order execution, hardware multi-threading, and memory scheduling, are effective in hiding the nanosecond-level latencies encountered in the memory hierarchy. On the other hand, the millisecond-level latencies of disks and networks are hidden through OS-based context switching and device management. Table I summarizes these mechanisms.

TABLE I: Common hardware and software latency-hiding mechanisms in modern systems.

  Paradigm        HW Mechanisms                  SW Mechanisms
  -------------   ----------------------------   ----------------------------
  Caching         On-chip caches,                OS page cache
                  prefetch buffers
  Bulk transfer   64-128B cache lines            Multi-KB transfers from
                                                 disk and network
  Overlapping     Superscalar execution,         Kernel-mode context switch,
                  out-of-order execution,        user-mode context switch
                  branch speculation,
                  prefetching,
                  hardware multithreading

While these techniques are effective for conventional memory and storage devices, the computing landscape, especially in data centers and warehouse-scale computers, is introducing new devices that operate in the gap between memory and storage—i.e., at latencies in the microsecond range. A variety of such technologies are being deployed today, such as Flash memories (latencies in the tens of microseconds) [1] and 40-100 Gb/s InfiniBand and Ethernet networks (single-digit microseconds) [2]–[4]. New devices, such as 3D XPoint memory by Intel and Micron (hundreds of nanoseconds) [5], are being introduced over the next several years.

Unfortunately, existing micro-architectural techniques that handle fine-grained accesses with nanosecond-level latencies cannot hide microsecond delays, especially in the presence of the pointer-based serial dependence chains commonly found in modern server workloads [6].
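To make the dependence-chain problem concrete, consider the canonical pointer-chasing pattern below (a minimal sketch of the general phenomenon, not code from the paper). Because the address of each load is produced by the previous load, an out-of-order core has no independent work to overlap with a miss, and every node access to a microsecond-latency device stalls the pipeline for the full device latency.

```c
#include <stddef.h>

/* A linked-list node: the address of the next access is unknown
 * until the current load completes. */
struct node {
    struct node *next;
    long payload;
};

/* Serially dependent chain: with each node resident on a
 * microsecond-latency device, out-of-order execution cannot
 * overlap the accesses, so the core stalls on every iteration. */
long chase(struct node *head) {
    long sum = 0;
    for (struct node *n = head; n != NULL; n = n->next)
        sum += n->payload;   /* the next address depends on this load */
    return sum;
}
```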
At the same time, OS-based mechanisms themselves incur overheads of several microseconds [7], and while such overheads are acceptable for large bulk transfers in traditional storage, they would fundamentally defeat the benefits of emerging low-latency devices.

This fact, dubbed the "Killer Microsecond" problem [8], [9], although clearly understood at an intuitive level, completely lacks quantification and examination in the technical literature. In particular, in the context of microsecond-latency storage, which is the target of this work, it is not clear what aspects of current hardware and software platforms are responsible for their inability to hide microsecond-level latencies. As a result, it is not known whether effective usage of such devices is preconditioned on fundamental hardware/software changes to existing systems, or simply requires straightforward modifications. In this paper, we undertake a quantitative study of modern systems to find an answer to this question.

As we will discuss in Section II, we believe that adoption and disruptive use of microsecond-latency storage devices in data-intensive applications is contingent on their integration as memory, capable of serving fine-grained (i.e., cache-line size) data accesses whose latency must be hidden from application developers by the hardware/software platform. To investigate the limits of such integration in modern systems, we developed a flexible hardware/software platform to expose the bottlenecks of state-of-the-art systems that prevent such usage of microsecond-latency devices.

On the hardware side, we developed an FPGA-based storage device emulator with controllable microsecond-level latency. This hardware allows us to experiment with different device-interfacing mechanisms (Section III) commonly found in modern systems.

On the software side, we used a synthetic microbenchmark that can be configured to exhibit varying degrees of memory-level parallelism (MLP) and compute-to-memory instruction ratios—both of which are well-known factors in the performance of data-intensive applications.
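The paper's actual microbenchmark is not reproduced here; the following is a minimal sketch, under the assumption of circular pointer chains, of how MLP and the compute-to-memory ratio can be exposed as knobs. The MLP and COMPUTE_OPS parameters are illustrative names, not the paper's.

```c
#include <stddef.h>

#define MLP         8   /* independent pointer chains (hypothetical knob) */
#define COMPUTE_OPS 4   /* ALU operations per memory access (hypothetical knob) */

struct node { struct node *next; long payload; };

/* Interleaves MLP independent chains: accesses from different chains
 * carry no data dependences, so up to MLP fine-grained device accesses
 * can be in flight at once, bounded only by the hardware queues that
 * track outstanding misses. Chains are assumed circular, so no NULL
 * checks are needed. */
long run(struct node *chain[MLP], long steps) {
    long acc = 0;
    for (long s = 0; s < steps; s++) {
        for (int c = 0; c < MLP; c++) {
            for (int k = 0; k < COMPUTE_OPS; k++)  /* filler compute work */
                acc = acc * 31 + k;
            acc += chain[c]->payload;   /* one device access per step */
            chain[c] = chain[c]->next;  /* advance this chain */
        }
    }
    return acc;
}
```

Sweeping MLP varies how many accesses can overlap, while sweeping COMPUTE_OPS varies how much useful work is available to hide each access.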
We used the microbenchmark and our FPGA platform to find the bottlenecks of a cutting-edge Xeon-based server platform in hiding microsecond-level latencies. Then, to verify that the identified bottlenecks are relevant to real workloads, we ran three open-source data-intensive applications on our platform and compared their performance with that of our microbenchmark, showing that they follow the same general trends and suffer from the same bottlenecks.

Our results indicate that, although current systems cannot effectively integrate microsecond-latency devices without hardware and software modifications, the required changes are not drastic either. In particular, we show that simple software modifications, such as changing on-demand device accesses into prefetch instructions combined with low-overhead user-mode context switching, and simple hardware changes, such as properly sizing the hardware structures that track pending memory accesses (e.g., the prefetch queues), can go a long way toward effectively hiding microsecond-level latencies.
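To illustrate the flavor of the software change, here is a minimal sketch that assumes a POSIX ucontext-based user-level fiber scheduler (the scheduler loop itself is omitted); the names and structure are illustrative, not the paper's actual runtime. Instead of a demand load that stalls for the full device latency, each fiber issues a non-blocking prefetch, switches to another fiber, and touches the now-resident data when resumed, so many device accesses proceed in parallel.

```c
#include <ucontext.h>

struct node { struct node *next; long payload; };

static ucontext_t scheduler_ctx;   /* user-level round-robin scheduler */

/* Yield back to the user-mode scheduler while the prefetched line is
 * being fetched from the microsecond-latency device. */
static void yield_to_scheduler(ucontext_t *self) {
    swapcontext(self, &scheduler_ctx);
}

/* One worker fiber: prefetch, switch, then access without stalling. */
long worker_chase(ucontext_t *self, struct node *head) {
    long sum = 0;
    for (struct node *n = head; n != NULL; ) {
        __builtin_prefetch(n, 0, 0);  /* non-blocking request for the node */
        yield_to_scheduler(self);     /* overlap latency with other fibers */
        sum += n->payload;            /* data is resident when we resume */
        n = n->next;
    }
    return sum;
}
```

The hardware-side counterpart is simply capacity: the queues tracking in-flight prefetches must be large enough to hold the many parallel accesses this pattern generates.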
We also show that software-managed queue-based interfaces, such as those used in NVMe and RDMA devices, are not a scalable solution for microsecond-latency …

… storage technologies have already stagnated or are expected to do so in the near future [10], [11].

Given these trends, the current solution to the "big-data-at-low-latency" problem is to shard the data across many in-memory servers and interconnect them using high-speed networks. In-memory data stores [12], in-memory processing frameworks [13], [14], in-memory key-value stores [15], [16], and RAM Clouds [17] are all examples of this trend. In these systems, although the storage medium (i.e., DRAM) provides access latencies of tens of nanoseconds, a remote access still takes a few hundreds to thousands of microseconds. This extra latency is commonly attributed to the software stack (networking and OS scheduling), the network device interface, and queuing effects in both user and kernel spaces [18], [19].

Fortunately, new non-volatile memory (NVM) and networking technologies can provide a way out of this situation. Low-latency, high-density NVM devices, such as 3D XPoint [5], will allow large amounts of data to be stored on a single server without requiring a lot of expensive and power-hungry DRAM. Furthermore, where multiple servers are needed—for capacity, availability, quality-of-service, or other reasons—fast …