Optimized Memory Management for a Java-Based Distributed In-Memory System

Stefan Nothaas, Kevin Beineke, Michael Schoettner
Department of CS, Operating Systems, Heinrich-Heine-Universität Düsseldorf, Germany
[email protected]

Abstract—Several Java-based distributed in-memory systems have been proposed in the literature, but most are not aimed at graph applications with highly concurrent and irregular access patterns to many small data objects. DXRAM addresses these challenges and relies on DXMem for memory management and concurrency control on each server. DXMem is published as an open-source library, which can be used by any other system, too. In this paper, we briefly describe our previously published but relevant design aspects of the memory management. However, the main contributions of this paper are the new extensions, optimizations, and evaluations. These contributions include an improved address translation which is now faster than the old solution with a translation cache. The coarse-grained concurrency control of our first approach has been replaced by a very efficient per-object read-write lock which allows much better throughput, especially under high concurrency. Finally, we compared DXRAM for the first time to Hazelcast and Infinispan, two state-of-the-art Java-based distributed cache systems, using real-world application workloads and the Yahoo! Cloud Serving Benchmark in a distributed environment. The results of the experiments show that DXRAM outperforms both systems while having a much lower metadata overhead for many small data objects.

Index Terms—Memory management, Cache storage, Distributed computing

I. INTRODUCTION

The ever-growing amounts of data, for example in big data applications, are addressed by aggregating resources in commodity clusters or the cloud [22]. This concerns applications like social networks [14], [25], [26], search engines [21], [31], simulations [32] or online data analytics [19], [36], [37]. To reduce local data access times, especially for graph-based applications processing billions of tiny data objects (< 128 bytes) [17], [29], [34], backend systems like caches and key-value stores keep all data in-memory.

Many systems for big data applications, such as frameworks [24], [28], databases [3], [4], or backend storages/caches [5], [7], [8], [30], are written in Java. However, many of them cannot handle small data objects (32-128 bytes) efficiently and introduce a considerably large metadata overhead on a per-object basis. Compared to traditional disk storage solutions, RAM is more expensive and requires sophisticated memory management. High concurrency in big data applications is the rule but adds additional challenges to ensure low access times for local and remote access and to provide mechanisms for synchronizing concurrent access. Combined with today's low-latency networks, providing single-digit-microsecond remote access times on a distributed scale, local access times must be kept low to ensure high performance, which is challenging in general but especially in Java.

DXMem is the extended and optimized memory management of DXRAM [15], a Java-based distributed in-memory key-value store. DXRAM implements a chord-like overlay with dedicated nodes for storing and providing the metadata, which is a unique approach compared to the other systems not using dedicated servers for storing metadata for object lookup. Using DXNet [16], DXRAM supports Ethernet over TCP using Java NIO and low-latency InfiniBand networks using native verbs. DXMem provides low metadata overhead and low-latency memory management for highly concurrent data access. Data is stored in native memory to avoid the memory and garbage collection overhead imposed by the standard Java heap. DXMem uses a fast and low-overhead 64-bit key to raw memory address mapping. Java objects are serialized to native memory using a custom lightweight and fast serialization implementation. DXMem does not use hashing but instead implements a custom chunk ID to native memory address translation. Thus, DXMem does not automatically distribute data to servers, but an application decides where to store the data which, of course, can be determined using a hashing algorithm as well.

DXMem offers a low-overhead per-object read-write locking mechanism for concurrency management as well as memory defragmentation for long-running applications.
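As a rough illustration of the storage scheme described above, serializing objects into memory outside the Java heap and addressing them by a 64-bit ID so that the payload is never scanned by the garbage collector, consider the following minimal sketch. All class and method names are hypothetical, the serialization is reduced to a UTF-8 string, and the on-heap directory merely stands in for DXMem's native CIDTable:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of off-heap object storage (not DXMem's actual API):
// values are serialized into a direct ByteBuffer, so the payload lives in
// native memory and is invisible to the garbage collector. Only the small
// id -> (offset, length) mapping stays on-heap here; DXMem keeps even that
// translation structure in native memory.
public class OffHeapStore {
    private final ByteBuffer memory = ByteBuffer.allocateDirect(1 << 20); // 1 MiB of native memory
    private final Map<Long, int[]> directory = new HashMap<>(); // id -> {offset, length}
    private long nextId = 1;

    // Serialize a value into native memory and return its 64-bit id.
    public long put(String value) {
        byte[] data = value.getBytes(StandardCharsets.UTF_8);
        int offset = memory.position();
        memory.put(data);
        long id = nextId++;
        directory.put(id, new int[] { offset, data.length });
        return id;
    }

    // Resolve the id to an offset and deserialize the stored value.
    public String get(long id) {
        int[] entry = directory.get(id);
        byte[] data = new byte[entry[1]];
        ByteBuffer view = memory.duplicate();
        view.position(entry[0]);
        view.get(data);
        return new String(data, StandardCharsets.UTF_8);
    }
}
```

The direct translation from a numeric ID to a raw memory location is what lets such a design avoid both per-object Java heap headers and hashing on the critical path.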
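The low-overhead per-object read-write locking mentioned above can be illustrated with a compact read-write spinlock whose entire state fits in a single 32-bit word, small enough to be embedded in each object's metadata. This is an illustrative sketch under that assumption, not DXMem's actual lock implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical compact per-object read-write spinlock: the whole lock state
// is one 32-bit word, so no separate lock object per stored item is needed.
// A positive value counts active readers; -1 marks an exclusive writer.
public class CompactRWLock {
    private final AtomicInteger state = new AtomicInteger(0);

    public void lockRead() {
        while (true) {
            int s = state.get();
            // Readers may enter only while no writer holds the lock.
            if (s >= 0 && state.compareAndSet(s, s + 1)) {
                return;
            }
            Thread.onSpinWait();
        }
    }

    public void unlockRead() {
        state.decrementAndGet();
    }

    public void lockWrite() {
        // A writer needs exclusive access: no readers, no other writer.
        while (!state.compareAndSet(0, -1)) {
            Thread.onSpinWait();
        }
    }

    public void unlockWrite() {
        state.set(0);
    }
}
```

Because the state is a single atomic word, uncontended acquire and release are one compare-and-set each, which keeps the per-operation cost low under the read-heavy workloads targeted here.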
On an average object size of 32 bytes, DXMem can store 100 million objects with just 22% additional overhead (§VI-A). On a typical big data workload with 32 byte objects, 95% get and 5% put operations, DXMem achieves a local aggregated throughput of 78 million operations per second (mops) with 128 threads, which is an up to 28-fold increase compared to Hazelcast [7] and Infinispan [8], two Java-based state-of-the-art in-memory caches (§VI-B). Using the Yahoo! Cloud Serving Benchmark [18], we compared DXRAM with DXMem to Hazelcast and Infinispan (§VI-C), too. The results show that DXRAM scales well with up to 16 storage servers and 16 benchmark clients on real-world read-heavy workloads with tiny objects, outperforming the other two systems.

Our previous publication [23] has addressed the following contributions:
• The initial design of the low-overhead memory allocator
• The address translation (CIDTable) without per-chunk locks
• An arena-based memory segmentation for coarse-grained concurrency control and defragmentation

The contributions of this paper are:
• Reduced metadata overhead while supporting more storage per server (up to 8 TB, before 1 TB)
• Low-overhead Java object to binary data serialization (the old design supported binary data only)
• Optimized address translation (faster than the old design with translation cache)
• Efficient fine-grained locking for each stored object
• New experiments and comparisons with Infinispan and Hazelcast

To evaluate the local memory manager performance of storage instances, we created a microbenchmark based on the design and workloads of the YCSB and implemented clients for the systems evaluated in this paper (§VI-B). DXMem is also published as a separate Java library that can be used by any Java application. DXRAM and DXMem are open-source and available on GitHub [6].

The remaining paper is structured as follows: Section II presents the target application domains and their requirements. Section III presents related work. We give a brief top-down overview of DXMem and its components in Section IV before explaining them in detail in a bottom-up approach in the following sections. Starting with Section V, we explain important details about memory management in Java before elaborating on DXMem's allocator in Section V-A. This section is followed by Section V-B, which describes the CIDTable translating chunk IDs to native memory addresses. The design of the fine-granular locks is presented in Section V-C. The evaluation and comparison of DXMem to Hazelcast and Infinispan is presented in Section VI. Conclusions are located in Section VII.

II. CHALLENGES AND REQUIREMENTS

This section briefly presents the target application domains which were already introduced in Section I. Often, big data

Fast remote response times. Low remote latency on inter-node communication is ensured by low-latency network interconnects; e.g., InfiniBand should be considered, which in turn demands low local latency so that local access does not become the bottleneck instead. However, with many applications and frameworks in Java, access to such low-level hardware is very challenging.

Fast and efficient (remote) object lookup. With billions of objects distributed across multiple nodes, object lookup becomes a challenge, too. Often, a key-value design combined with hashing is used to address this issue [3], [7], [8], and the standard API provides CRUD operations (create, read, update, delete).

Very small objects. Typical data models for big data applications include tables, sets, lists, and graph-structured data [33]. For the latter, storing billions of objects becomes a challenge because the per-object overhead must be kept low. With the limited amount of main memory, storing more objects per node does not only require fewer nodes to store all data but also increases locality and performance.

High concurrency. When simultaneously serving many concurrent interactive user requests or using many threads to lower the execution times of algorithms, e.g., graph traversal, high concurrency is a must. On today's multi-core hardware, concurrency support and optimizations are inevitable. However, with concurrency, data races must be considered and require mechanisms to synchronize data access on concurrent modification without limiting concurrency and increasing access latency too much.

III. RELATED WORK

Large batch processing tasks are commonly used for big data processing, e.g. Hadoop [12], Spark [13], and rely on input, output and intermediate data stored on distributed file systems. MemFS [35] is an in-memory runtime file system that tries to overcome the drawbacks of general-purpose distributed file systems by implementing a striping mechanism to maximize I/O performance for distributed data and storing data in-memory using memcached [20].

Common-purpose memory management, algorithms, and allocators are widely studied in the literature and are beyond the scope of this paper but have been discussed and
