
Distributed Shared Persistent Memory

Yizhou Shan, Shin-Yeh Tsai, Yiying Zhang
Purdue University
[email protected], [email protected], [email protected]

ABSTRACT

Next-generation non-volatile memories (NVMs) will provide byte addressability, persistence, high density, and DRAM-like performance. They have the potential to benefit many datacenter applications. However, most previous research on NVMs has focused on using them in a single-machine environment. It is still unclear how to best utilize them in distributed, datacenter environments.

We introduce Distributed Shared Persistent Memory (DSPM), a new framework for using persistent memories in distributed datacenter environments. DSPM provides a new abstraction that allows applications to both perform traditional memory load and store instructions and to name, share, and persist their data.

We built Hotpot, a kernel-level DSPM system that provides low-latency, transparent memory accesses, data persistence, data reliability, and high availability. The key ideas of Hotpot are to integrate distributed memory caching and data replication techniques and to exploit application hints. We implemented Hotpot in the Linux kernel and demonstrated its benefits by building a distributed graph engine on Hotpot and porting a NoSQL database to Hotpot. Our evaluation shows that Hotpot outperforms a recent distributed shared memory system by 1.3× to 3.2× and a recent distributed PM-based file system by 1.5× to 3.0×.

CCS CONCEPTS

• Software and its engineering → Distributed memory; Distributed systems organizing principles; • Computer systems organization → Reliability; Availability;

KEYWORDS

Persistent Memory, Distributed Shared Memory

ACM Reference Format:
Yizhou Shan, Shin-Yeh Tsai, and Yiying Zhang. 2017. Distributed Shared Persistent Memory. In Proceedings of SoCC '17, Santa Clara, CA, USA, September 24–27, 2017, 15 pages. https://doi.org/10.1145/3127479.3128610

SoCC '17, September 24–27, 2017, Santa Clara, CA, USA
© 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.
ACM ISBN 978-1-4503-5028-0/17/09...$15.00

1 INTRODUCTION

Next-generation non-volatile memories (NVMs), such as 3DXpoint [36], phase change memory (PCM), spin-transfer torque magnetic memories (STTMs), and the memristor, will provide byte addressability, persistence, high density, and DRAM-like performance [85]. These developments are poised to radically alter the landscape of memory and storage technologies and have already inspired a host of research projects [5, 15, 16, 22, 57, 62, 88, 89, 95]. However, most previous research on NVMs has focused on using them in a single-machine environment. Even though NVMs have the potential to greatly improve the performance and reliability of large-scale applications, it is still unclear how to best utilize them in distributed, datacenter environments.

This paper takes a significant step towards the goal of using NVMs in distributed datacenter environments. We propose Distributed Shared Persistent Memory (DSPM), a framework that provides a global, shared, and persistent memory space using a pool of machines with NVMs attached at the main memory bus. Applications can perform native memory load and store instructions to access both local and remote data in this global memory space and can at the same time make their data persistent and reliable. DSPM can benefit both single-node persistent-data applications that want to scale out efficiently and shared-memory applications that want to add durability to their data.

Unlike traditional systems with separate memory and storage layers [23, 24, 80, 81], we propose to use just one layer that incorporates both distributed memory and distributed storage in DSPM. DSPM's one-layer approach eliminates the performance overhead of data marshaling and unmarshaling, and the space overhead of storing data twice. With this one-layer approach, DSPM can potentially provide the low-latency performance, vast persistent memory space, data reliability, and high availability that many modern datacenter applications demand.

Building a DSPM system presents unique challenges. Adding "persistence" to Distributed Shared Memory (DSM) is not as simple as just making in-memory data durable. Apart from data durability, DSPM needs to provide two key features that DSM does not have: persistent naming and data reliability. In addition to accessing data in PM via native memory loads and stores, applications should be able to easily name, close, and re-open their in-memory data structures. User data should also be reliably stored in NVM and sustain various types of failures; they need to be consistent both within a node and across distributed nodes after crashes. To make it more challenging, DSPM has to deliver these guarantees without sacrificing application performance, in order to preserve the low-latency performance of NVMs.

We built Hotpot, a DSPM system in the Linux kernel. Hotpot offers low-latency, direct memory access, data persistence, reliability, and high availability to datacenter applications. It exposes a global virtual memory address space to each user application and provides a new persistent naming mechanism that is both easy-to-use and efficient.
Internally, Hotpot organizes and manages data in a flexible way and uses a set of adaptive resource management techniques to improve performance and scalability.

Hotpot builds on two main ideas to efficiently provide data reliability with distributed shared memory access. Our first idea is to integrate distributed memory caching and data replication by imposing morphable states on persistent memory (PM) pages. In DSM systems, when an application on a node accesses shared data in remote memory on demand, DSM caches these data copies in its local memory for fast accesses and later evicts them when reclaiming local memory space. Like DSM, Hotpot caches application-accessed data in local PM and ensures the coherence of multiple cached copies on different nodes. But Hotpot also uses these cached data as persistent data replicas and ensures their reliability and crash consistency. On the other hand, unlike distributed storage systems, which create extra data replicas to meet user-specified reliability requirements, Hotpot makes use of data copies that already exist in the system, having been fetched to a local node due to application memory accesses.

In essence, every local copy of data serves two simultaneous purposes. First, applications can access it locally without any network delay. Second, because the fetched copy is placed in PM, it can be treated as a persistent replica for data reliability.

This seemingly straightforward integration is not simple. Maintaining wrong or outdated versions of data can result in inconsistent data. To make it worse, these inconsistent data will be persistent in PM. We carefully designed a set of protocols to deliver data reliability and crash consistency guarantees while integrating memory caching and data replication.

Our second idea is to exploit application behaviors and intentions in the DSPM setting. Unlike traditional memory-based applications,

Overall, this paper makes the following contributions:
• We propose DSPM, a framework that provides applications with a global, shared, and persistent memory space, and is an easy and efficient way for datacenter applications to use PM.
• We propose a one-layer approach to build DSPM by integrating memory coherence and data replication. The one-layer approach avoids the performance cost of two or more indirection layers.
• We designed two distributed data commit protocols with different consistency levels and corresponding recovery protocols to ensure data durability, reliability, and availability.
• We built the first DSPM system, Hotpot, in the Linux kernel, and open-sourced it together with several applications ported to it.
• We demonstrated Hotpot's performance benefits and ease of use with two real datacenter applications and extensive microbenchmark evaluation. We compared Hotpot with five existing file systems and distributed memory systems.

The rest of the paper is organized as follows. Section 2 presents and analyzes several recent datacenter trends that motivated our design of DSPM. We discuss the benefits and challenges of DSPM in Section 3. Section 4 presents the architecture and abstraction of Hotpot. We then discuss Hotpot's data management in Section 5. We present our protocols and mechanisms to ensure data durability, consistency, reliability, and availability in Section 6. Section 7 briefly discusses the network layer we built underlying Hotpot, and Section 8 presents a detailed evaluation of Hotpot. We cover related work in Section 9 and conclude in Section 10.

2 MOTIVATION

DSPM is motivated by three datacenter trends: emerging hardware PM technologies; modern data-intensive applications' data sharing, persistence, and reliability needs; and the availability