Towards an Unwritten Contract of Intel Optane SSD
Kan Wu, Andrea Arpaci-Dusseau, and Remzi Arpaci-Dusseau
University of Wisconsin-Madison

Total Pages: 16

File Type: PDF, Size: 1020 KB

Abstract

New non-volatile memory technologies offer unprecedented performance levels for persistent storage. However, to exploit their full potential, a deeper performance characterization of such devices is required. In this paper, we analyze an NVM-based block device – the Intel Optane SSD – and formalize an "unwritten contract" of the Optane SSD. We show that violating this contract can result in 11x worse read latency and limited throughput (only 20% of peak bandwidth), regardless of parallelism. We show that this contract is relevant to features of 3D XPoint memory and the Intel Optane SSD's controller/interconnect design. Finally, we discuss the implications of the contract.

Table 1: Performance Impact of Rule Violations

Rule                          | Impact          | Metric               | Cause
Access with Low Request Scale | 11x             | Latency              | SSD Controller/Interconnect
Random Access is OK           | 7x (vs. Flash)  | Latency & Throughput | 3D XPoint & Controller
Avoid Crowded Accesses        | 4.6x            | Latency & Throughput | SSD Controller/Interconnect
Control Overall Load          | 5x (vs. Flash)  | Latency              | 3D XPoint Memory
Avoid Tiny Accesses           | 5x              | Throughput           | SSD Controller/Interconnect
Issue 4KB Aligned Requests    | 1.2x            | Latency              | SSD Controller/Interconnect
Forget Garbage Collection     | 15x (vs. Flash) | Sustained Throughput | 3D XPoint & Controller

1 Introduction

New NVM technologies provide unprecedented performance levels for persistent storage. Such devices offer significantly lower latency than Flash-based SSDs and can be a cost-effective alternative to DRAM. One excellent example is 3D XPoint memory [1, 25] from Intel and Micron, available on the open market under the brand name Optane [11]. It is available in various form factors, including Optane Memory [9] (a caching layer between DRAM and a block device), the Optane SSD [10] (a block device), and Optane DC Persistent Memory [8]. Among these devices, the Optane SSD is currently the most cost-effective and widely available option.

The Optane SSD offers numerous opportunities for applications. For example, Intel Memory Drive Technology (IMDT) [7] enables the use of the Optane SSD as a DRAM alternative. Use cases of IMDT/Optane SSD include Memcached [4], Redis [5], and Spark [6]. Evaluations of those scenarios demonstrate the potential role of the Optane SSD as a cost-effective alternative to DRAM. Optane SSDs are also deployed to support key workloads at Facebook [22, 23], both as a caching layer between DRAM and Flash SSD for RocksDB and for crucial workloads such as machine learning. According to Eisenman et al. [23], Facebook stores embeddings of trained neural networks on Optane SSDs.

Using new technology effectively requires understanding its performance and reliability characteristics. For traditional devices, such as hard drives and Flash-based SSDs, these are well known [16, 17, 20, 21, 26, 30, 33]. However, for new devices like the Optane SSD, much remains unclear and is thus the focus of our work.

In terms of immediate performance, we summarize six rules that are critical for Optane SSD users. First, to obtain low latency, users should issue small requests (e.g., 4 KB) and keep a small number of outstanding IOs (Access with Low Request Scale rule). Second, different from HDD/Flash SSDs, Optane SSD clients should not consider sequential workloads special (Random Access is OK rule). Third, to avoid contention among requests, clients should not issue parallel accesses to a single chunk (4 KB) (Avoid Crowded Accesses rule). Fourth, to achieve optimal latency, the user needs to control the overall load of both reads and writes (Control Overall Load rule). Fifth, to exploit the bandwidth of the Optane SSD, clients should never issue requests smaller than 4 KB (Avoid Tiny Accesses rule). Sixth, to get the best latency, requests issued to the Optane SSD should align to eight sectors (Issue 4KB Aligned Requests rule). Finally, when serving sustained workloads, there is no cost of garbage collection in the Optane SSD (Forget Garbage Collection rule). Overall, these rules are relevant to features of 3D XPoint memory and the Optane SSD's controller/interconnect design.
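To make the contract concrete, here is a minimal sketch (our illustration, not code from the paper) of a single access that follows the Access with Low Request Scale, Avoid Tiny Accesses, and Issue 4KB Aligned Requests rules on Linux; the device path is a placeholder.

```c
/* Minimal rule-following access (illustrative sketch, not the paper's code):
 * one 4 KB, 4 KB-aligned read at queue depth 1 through O_DIRECT, i.e. no tiny
 * or misaligned requests and only a small number of outstanding IOs.
 * The device path below is a placeholder. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) return 1;

    void *buf;
    if (posix_memalign(&buf, 4096, 4096))   /* buffer aligned to 4 KB        */
        return 1;

    off_t off = 123 * 4096L;                /* offset aligned to 8 sectors   */
    ssize_t n = pread(fd, buf, 4096, off);  /* one 4 KB request, QD = 1      */

    free(buf);
    close(fd);
    return n == 4096 ? 0 : 1;
}
```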
The unwritten contract provides numerous implications for systems and applications. According to the Access with Low Request Scale rule and the Control Overall Load rule, the design of heterogeneous storage systems including Optane SSD and Flash SSD must be carefully considered. Many rules (e.g., Random Access is OK) present opportunities and new challenges to external data structure design for the Optane SSD. Finally, the Random Access is OK rule may enable new applications on the Optane SSD that don't work well on existing technologies.

The rest of this paper is structured as follows. We describe the unwritten contract and explain the insights for each rule in Section 2. We provide the implications of the unwritten contract in Section 3, conclude in Section 4, and point to potential research directions in Section 5.

2 The Unwritten Contract of Optane SSD

In this section, we define the rules of the unwritten contract. We offer experiments (available at https://github.com/sherlockwu/OptaneBench) to infer the rules. We summarize the impact and cause of each rule in Table 1. We begin with six rules related to immediate performance, such as latency and throughput. We then present rules for sustainable performance.

[Figure 1: Avg Latency of random workloads, Optane vs. Flash. Heatmaps of the scaled latency difference over request size (1-256 KB) and queue depth (1-64); (a) Read, (b) Write; positive values mean Optane is better, negative values mean Flash is better.]

[Figure 2: RAID-like Architecture. The SSD controller connects over multiple channels (Channel #1, #2, ..., #7) to arrays of 3D XPoint memory dies.]
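To give a flavor of what such an experiment looks like, the following is a minimal sketch (our illustration, not the OptaneBench code) of a random-read probe that sweeps the two variables behind Figure 1, request size and queue depth, using Linux libaio; the device path, sampled region, and IO count are placeholders.

```c
/* Random-read probe (illustrative sketch, not the paper's OptaneBench code):
 * keep `qd` reads of `req_size` bytes in flight with libaio and report mean
 * latency via Little's law (qd * elapsed / completed IOs). Sweeping req_size
 * and qd covers the two axes of Figure 1. Assumes Linux, libaio (link with
 * -laio), O_DIRECT, and a placeholder device path. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define DEV       "/dev/nvme0n1"     /* placeholder device path   */
#define REGION    (64ULL << 30)      /* byte range to sample from */
#define TOTAL_IOS 100000L

static void submit_random_read(io_context_t ctx, struct iocb *cb,
                               int fd, void *buf, size_t sz) {
    off_t off = (rand() % (off_t)(REGION / sz)) * (off_t)sz;  /* aligned */
    io_prep_pread(cb, fd, buf, sz, off);
    struct iocb *cbp = cb;
    io_submit(ctx, 1, &cbp);
}

int main(int argc, char **argv) {
    size_t req_size = argc > 1 ? (size_t)atol(argv[1]) : 4096; /* bytes       */
    int qd          = argc > 2 ? atoi(argv[2]) : 1;            /* queue depth */

    int fd = open(DEV, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx = 0;
    if (io_setup(qd, &ctx)) { perror("io_setup"); return 1; }

    struct iocb *cbs = calloc(qd, sizeof(*cbs));
    struct io_event *evs = calloc(qd, sizeof(*evs));
    void **bufs = calloc(qd, sizeof(*bufs));
    for (int i = 0; i < qd; i++)
        posix_memalign(&bufs[i], 4096, req_size);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    long submitted = 0, completed = 0;
    for (int i = 0; i < qd; i++, submitted++)          /* initial fill */
        submit_random_read(ctx, &cbs[i], fd, bufs[i], req_size);

    while (completed < TOTAL_IOS) {                    /* completion loop */
        int n = io_getevents(ctx, 1, qd, evs, NULL);
        for (int e = 0; e < n; e++, completed++) {
            struct iocb *done = evs[e].obj;            /* reuse its slot  */
            if (submitted++ < TOTAL_IOS)
                submit_random_read(ctx, done, fd, done->u.c.buf, req_size);
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("req=%zuB qd=%d: avg latency %.1f us\n",
           req_size, qd, us * qd / completed);
    io_destroy(ctx);
    close(fd);
    return 0;
}
```

Run at, say, 4 KB with QD 1 versus 64 KB with QD 16, the probe should show the latency gap that motivates the Access with Low Request Scale rule.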
2.1 Access with Low Request Scale

The Optane SSD is based on 3D XPoint memory, which is said to provide up to 1,000 times lower latency than NAND Flash [2]. Does 3D XPoint memory always lead to better performance for the Optane SSD compared to a Flash SSD? We answer this question with our first rule: to obtain low latency, Optane SSD users should issue small requests and maintain a small number of outstanding IOs. This rule is needed not only to extract low latency but also to exploit the full bandwidth of the Optane SSD.

We uncovered this rule when quantitatively comparing the Intel Optane SSD 905P (960 GB) with a "high-end" Flash SSD, the Samsung 970 Pro (1 TB) [12]. From our experiments, we found that the Optane SSD does not show improvement for workloads with many outstanding requests.

We compare the Optane and Flash SSDs with random read-only and write-only workloads. Each workload has two variables: request size and queue depth (QD), i.e., the number of in-flight I/Os. Figure 1 compares the two devices in terms of average access latency; the temperature T in each rectangle shows the scaled difference between the latencies of the two devices.

The Optane SSD is again better in the opposite cases; however, the difference for writes is not as high as for reads (Optane is up to 1.7x faster than Flash). This result is due to the Flash SSD's log-structured layout and buffering optimizations. The Flash SSD outperforms the Optane SSD when request size > 4 KB and QD > 2, which includes most of our tested workloads.

The device's internal parallelism dictates its behavior when serving workloads with high request scale. We are thus motivated to uncover the internals of the Optane SSD. Like a Flash SSD, the Optane SSD utilizes a RAID-like organization of memory dies (Figure 2). Through a fine-grained experiment [18, 19], we examine a critical parameter for internal parallelism: the interleaving degree, or number of channels. We maintain a read stream with stride S from the device, where S is the distance between two consecutive chunks (a chunk is 4 KB, or 8 sectors, as shown in Section 2.6).

[Figure 3: Detecting Interleaving Degree. Throughput (MB/s) vs. stride (in 8-sector chunks, 0-256) for queue depths 1-16; (a) Flash SSD, (b) Optane 905P SSD.]
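The stride experiment can be approximated with a probe like the following sketch (ours, not the paper's code): it submits batches of reads whose offsets are S chunks apart and reports throughput; sweeping the stride and queue depth should produce curves like those in Figure 3, and the stride at which throughput collapses hints at the number of interleaved channels. The device path and region size are placeholders.

```c
/* Stride-S read probe (illustrative sketch): submit batches of `qd` reads at
 * consecutive multiples of the stride and report MB/s. When the in-flight
 * chunks map to fewer distinct channels, throughput drops, which is how the
 * interleaving degree can be inferred. Assumes Linux, libaio (-laio),
 * O_DIRECT, and a placeholder device path. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define CHUNK  4096ULL                /* one chunk = 8 sectors          */
#define REGION (64ULL << 30)          /* wrap offsets inside this range */

int main(int argc, char **argv) {
    int stride = argc > 1 ? atoi(argv[1]) : 1;   /* S, in chunks  */
    int qd     = argc > 2 ? atoi(argv[2]) : 8;   /* queue depth   */
    long iters = 50000;

    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx = 0;
    if (io_setup(qd, &ctx)) { perror("io_setup"); return 1; }

    struct iocb *cbs = calloc(qd, sizeof(*cbs));
    struct iocb **cbp = calloc(qd, sizeof(*cbp));
    struct io_event *evs = calloc(qd, sizeof(*evs));
    void **bufs = calloc(qd, sizeof(*bufs));
    for (int i = 0; i < qd; i++)
        posix_memalign(&bufs[i], CHUNK, CHUNK);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    unsigned long long off = 0;
    for (long it = 0; it < iters; it++) {
        for (int i = 0; i < qd; i++) {           /* batch of qd strided reads */
            io_prep_pread(&cbs[i], fd, bufs[i], CHUNK, (off_t)off);
            cbp[i] = &cbs[i];
            off = (off + (unsigned long long)stride * CHUNK) % REGION;
        }
        io_submit(ctx, qd, cbp);
        io_getevents(ctx, qd, qd, evs, NULL);    /* wait for the whole batch  */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("stride=%d qd=%d: %.0f MB/s\n", stride, qd,
           iters * qd * (CHUNK / 1e6) / secs);
    io_destroy(ctx);
    close(fd);
    return 0;
}
```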
Recommended publications
  • Analyzing the Performance of Intel Optane DC Persistent Memory in Storage Over App Direct Mode
    Analyzing the Performance of Intel Optane DC Persistent Memory in Storage over App Direct Mode. Travis Liao, Tristian "Truth" Brown, Jamie Chou (Lenovo Press).
    Highlights: introduces DCPMM Storage over App Direct Mode; evaluates workload latency and bandwidth performance; establishes performance expectations for DCPMM capacities; illustrates scaling performance for DCPMM populations.
    Abstract: Intel Optane DC Persistent Memory is the latest memory technology for Lenovo® ThinkSystem™ servers. This technology deviates from contemporary flash storage offerings and utilizes the ground-breaking 3D XPoint non-volatile memory technology to deliver a new level of versatile performance in a compact memory module form factor. As server storage technology continues to advance from devices behind RAID controllers to offerings closer to the processor, it is important to understand the technological differences and suitable use cases. This paper provides a look into the performance of Intel Optane DC Persistent Memory Modules configured in Storage over App Direct Mode operation.
  • Initial Experience with 3D Xpoint Main Memory
    Initial Experience with 3D XPoint Main Memory. Jihang Liu and Shimin Chen*, State Key Laboratory of Computer Architecture, Institute of Computing Technology, CAS; University of Chinese Academy of Sciences. [email protected], [email protected]
    Abstract—3D XPoint will likely be the first commercially available main memory NVM solution targeting mainstream computer systems. Previous database studies on NVM memory evaluate their proposed techniques mainly on simulated or emulated NVM hardware. In this paper, we report our initial experience experimenting with the real 3D XPoint main memory hardware.
    I. INTRODUCTION
    A new generation of Non-Volatile Memory (NVM) technologies, including PCM [1], STT-RAM [2], and Memristor [3], are expected to address the DRAM scaling problem and become the next-generation main memory technology in future computer systems [4]–[7]. NVM has become a hot topic in database research in recent years. However, previous studies evaluate their proposed techniques mainly on simulated or emulated hardware. In this paper, we report our initial experience with real NVM main memory hardware. 3D XPoint is an NVM technology jointly developed by Intel and Micron [8]. Intel first announced the technology in 2015.
    [Fig. 1: AEP machine.]
    First, there are two CPUs in the machine. From /proc/cpuinfo, the CPU model is listed as "Genuine Intel(R) CPU 0000%@". Our guess is that the CPU is a test model and/or not yet recognized by the Linux kernel. The CPU supports the new clwb instruction [9]. Given a memory address, clwb writes back to memory the associated cache line.
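As a small illustration of that instruction (ours, not code from the paper), the helper below stores a value and then uses clwb plus a fence to push the containing cache line back to memory; it assumes a pointer into persistent memory, for example from a DAX-mapped file, and a compiler/CPU with CLWB support (compile with -mclwb).

```c
/* Illustrative use of clwb (not the paper's code): store a value, write the
 * containing cache line back to memory with clwb, and order the write-back
 * with sfence. Assumes `slot` points into persistent memory (e.g., from a
 * DAX-mapped file) and the CPU supports CLWB; compile with -mclwb. */
#include <immintrin.h>
#include <stdint.h>

static inline void persist_u64(uint64_t *slot, uint64_t value)
{
    *slot = value;      /* normal store into the NVM-backed cache line */
    _mm_clwb(slot);     /* write the line back without evicting it     */
    _mm_sfence();       /* ensure the write-back is ordered            */
}
```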
  • After the 3D XPoint Take-Off, Emerging NVM Keeps Growing
    Press Release, February 26, 2020, Lyon, France: After the 3D XPoint take-off, emerging NVM keeps growing [1]
    OUTLINES: PCM [2]-based persistent memory hit the market in 2019, and embedded STT-MRAM [3] has finally moved into mass production. The overall emerging NVM [4] market is poised to reach over US$6 billion in 2025; the embedded business will account for 1/3 of it. Foundries and IDMs [5] are accelerating the adoption of embedded emerging NVMs at advanced technology nodes. The rise of in-memory computing is triggering new players' dynamics.
    "The stand-alone emerging NVM market will grow to over US$4 billion in 2025", explains Simone Bertolazzi, PhD, Technology & Market Analyst at Yole Développement (Yole). "It will be driven by two key segments: low-latency storage (enterprise and client drives) and persistent memory (NVDIMMs)."
    The embedded emerging NVM entered the takeoff phase. The embedded market segment is showing a 118% CAGR [6] between 2019 and 2025, reaching more than US$2 billion by 2025. In this dynamic context, the market research and strategy consulting company Yole releases its annual memory report, Emerging Non-Volatile Memory. The 2020 edition presents an overview of the semiconductor memory market with stand-alone [7] and embedded memories [8]. Yole's memory team proposes today a deep understanding of emerging NVM applications with related market drivers, challenges, technology roadmap, players, and main trends. This report also offers detailed 2019-2025 market forecasts.
    Footnotes: [1] Extracted from Emerging Non-Volatile Memory, Yole Développement, 2020. [2] PCM: Phase-Change Memory. [3] STT-MRAM: Spin-Transfer Torque Magnetoresistive RAM. [4] NVM: Non-Volatile Memory. [5] IDM: Integrated Design Manufacturer.
  • Transforming Business with Data
    WHITE PAPER Transforming Business with Data Enterprise infrastructure is pushing the boundaries of data in scope, size, and speed. The right infrastructure tools can help you make better business decisions by getting more value from data. Writing about the data explosion is the IT equivalent of “it was a dark and stormy night.” At this point, it’s an old narrative that misstates the challenge and actually understates the problem. Yes, there’s more data, but there are also more types of data, which further complicates how to store, activate, and monetize this modern-day resource that promises to transform companies and industries. Advanced analytics capabilities, like artificial intelligence (AI) and machine learning, make the machine-generated unstructured and semi-structured data coming from the Internet of Things (IoT) accessible and, thus, valuable. But many companies are still not realizing the full potential of even their conventional, structured data. Less than half of companies’ structured data is being actively used by decision-makers, and less than one percent of companies’ unstructured data is being analyzed or used at all.1 This data needs to be activated through better storage technologies so that companies can realize the full value of their data. One way in which Intel helps activate data is by innovatively disrupting data centers’ data tiers. Traditionally, IT organizations have faced a stark tradeoff with their storage: faster storage was much more expensive than slower storage. It only made economic sense to place the most-important, most-accessed data in fast storage. But if storage can be made faster—particularly less expensive storage— without a proportionate rise in price, more data can be analyzed and acted upon.
  • An Empirical Guide to the Behavior and Use of Scalable Persistent Memory (arXiv:1908.03583v1 [cs.DC], 9 Aug 2019)
    An Empirical Guide to the Behavior and Use of Scalable Persistent Memory. Jian Yang, Juno Kim, Morteza Hoseinzadeh, Joseph Izraelevitz, Steven Swanson*. Nonvolatile Systems Laboratory, Computer Science & Engineering, University of California, San Diego. August 13, 2019.
    Abstract: After nearly a decade of anticipation, scalable nonvolatile memory DIMMs are finally commercially available with the release of Intel's 3D XPoint DIMM. This new nonvolatile DIMM supports byte-granularity accesses with access times on the order of DRAM, while also providing data storage that survives power outages. Researchers have not idly waited for real nonvolatile DIMMs (NVDIMMs) to arrive. Over the past decade, they have written a slew of papers proposing new programming models, file systems, libraries, and applications built to exploit the performance and flexibility that NVDIMMs promised to deliver. Those papers drew conclusions and made design decisions without detailed knowledge of how real NVDIMMs would behave or how industry would integrate them into computer architectures. Now that 3D XPoint NVDIMMs are actually here, we can provide detailed performance numbers, concrete guidance for programmers on these systems, reevaluate prior art for performance, and reoptimize persistent memory software for the real 3D XPoint DIMM. In this paper, we explore the performance properties and characteristics of Intel's new 3D XPoint DIMM at the micro and macro level. First, we investigate the basic characteristics of the device, taking special note of the particular ways in which its performance is peculiar relative to traditional DRAM or other past methods used to emulate NVM. From these observations, we recommend a set of best practices to maximize the performance of the device.
  • Intel® Optane™ DC Persistent Memory – Telecom Use Case Workloads
    Application note, Intel Corporation: Intel® Optane™ DC Persistent Memory – Telecom Use Case Workloads. Author: David Kennedy.
    1 Introduction
    This document provides an overview of Intel® Optane™ DC persistent memory for 2nd generation Intel® Xeon® Scalable processors (formerly Cascade Lake). The document contains a description of the technology and discusses some use cases for telecom environments. The requirement for large memory footprints for artificial intelligence and media/streaming servers, as well as low latency access to large subscriber databases and routing tables in telecom networks, provides a challenge for customers in deploying manageable nodes in the network. Often, instances of these databases are stored across multiple nodes, which increases the total cost of ownership and manageability costs. Also, due to the different latency requirements, there are often multiple copies of database instances across the network, plus data duplication for redundancy. Intel Optane DC persistent memory is highly beneficial for applications that require high capacity, low latency, and resiliency, which are applicable across a number of telecom-specific use cases, including:
    • In-memory databases
    • Analytics
    • Virtualization
    • HPC clusters
    Intel Optane DC persistent memory provides the opportunity to:
    • Consolidate databases into smaller, easier to manage instances.
    • Position large datasets, such as network telemetry, right next to the CPU, directly in-memory, for faster big data analytics for network automation.
    • Enable a whole new class of shared, distributed data layer to enable cloud native infrastructure with ultra low latency, similar to DRAM, but without the cost of DRAM.
    • Cache rapidly emerging live linear video content or highest performance video-on-demand in a CDN with capacity-like storage, but performance-like memory, at a lower cost.
  • From Flash to 3D XPoint: Performance Bottlenecks and Potentials in RocksDB with Storage Evolution
    From Flash to 3D XPoint: Performance Bottlenecks and Potentials in RocksDB with Storage Evolution. Yichen Jia and Feng Chen, Computer Science and Engineering, Louisiana State University. [email protected], [email protected]
    Abstract—Storage technologies have undergone continuous innovations in the past decade. The latest technical advancement in this domain is 3D XPoint memory. As a type of Non-volatile Memory (NVM), 3D XPoint memory promises great improvement in performance, density, and endurance over NAND flash memory. Compared to flash-based SSDs, 3D XPoint-based SSDs, such as Intel's Optane SSD, can deliver unprecedented low latency and high throughput. These properties are particularly appealing to I/O-intensive applications. Key-value store is such an important application in data center systems. This paper presents the first, in-depth performance study on the impact of the aforesaid storage hardware evolution to RocksDB, a highly popular key-value store based on the Log-structured Merge tree (LSM-tree).
    … much lower latency and higher throughput. Most importantly, 3D XPoint SSD significantly alleviates many long-existing concerns on flash SSDs, such as the read-write speed disparity, slow random write, and endurance problems [26], [42]. Thus 3D XPoint SSD is widely regarded as a pivotal technology for building the next-generation storage system in the future. On the software side, in order to accelerate I/O-intensive services in data centers, RocksDB [11], which is the state-of-art Log-structured Merge tree (LSM-tree) based key-value store, has been widely used in various major data storage and processing systems, such as graph databases [8], stream …
  • Introducing 3D XPoint™ Memory
    Rigel: Vision of Processor–Memory Systems. MICRO 48 keynote, December 8, 2015. J. Thomas Pawlowski, Chief Technologist, Fellow, Micron Technology. [email protected] ©2015 Micron Technology, Inc. All rights reserved.
    Agenda: Memory diversity; 3D XPoint™ Memory; Some scaling observations; Changing relationships; Unabated performance pressure; Micron Automata Processor; Abstraction and future systems; Memory technologies; Emerging memory; 11 key points.
    Diversified memory markets: Automotive, Storage, Graphics/Consumer, Networks, Server, Mobile, Personal Computing, Smartphones and Tablets, Enterprise and …
  • NVMe Technology – Powering the Connected Universe
    NVMe® Technology: Powering the Connected Universe. Amber Huffman, Fellow & Chief Technologist of IP Engineering Group, Intel Corporation; President, NVM Express, Inc.
    Agenda: Fixing the Memory & Storage Hierarchy; Refactoring for the Next Decade of Growth; NVMe® Architecture Advancements; What's Next: Computational Storage.
    Memory and Storage Hierarchy Gaps: compute cache (100s of MB, ~1 ns), in-package memory (1s of GB, ~10 ns), memory (10s of GB, <100 ns, and 100s of GB, <1 µs), secondary storage (1s of TB, <10 µs, and 10s of TB, <100 µs), and tertiary storage (10s of TB, <10 ms); the slides note a capacity gap, a performance gap, and a cost-performance gap across these tiers. In technology terms: SRAM cache, in-package DRAM (HBM), DRAM main memory, 3D NAND Flash secondary storage, and HDD tertiary storage, plotted by bandwidth versus capacity (for illustrative purposes only).
    Types of Memory: one DRAM memory cell = 1 bit; one 3D XPoint memory cell = 1 bit; one 3D NAND memory cell = 1-4 bits.
    3D NAND Roadmap: 32L TLC Gen 1, 64L QLC Gen 2, 96L QLC Gen 3, and a 128L industry target, charted over 2016, 2017, 2019, and 2020; a further roadmap slide continues to 144L …
  • 3D X Point Technology
    IETE Zonal Seminar "Recent Trends in Engineering & Technology" – 2017, Special Issue of International Journal of Electronics, Communication & Soft Computing Science and Engineering, ISSN: 2277-9477.
    3D X Point Technology. Mr. Umang D. Dadmal, Rakshanda S. Vinkare, Prof. P. G. Kaushik, Prof. S. A. Mishra.
    Abstract – In this era, memory is considered to be an important parameter for storing data. Previous memories such as RAM, ROM, and Flash drives have flaws which can be overcome with 3D X Point memory technology. 3D X Point technology is a new class of non-volatile memory. This technology is rising sharply because of its high speed and performance: it offers up to 1,000 times lower latency and exponentially greater endurance than NAND, and is 10 times denser than DRAM. In this paper we discuss its unique, transistor-less architecture design. The most immediate application of 3D X Point based products will be as a layer of storage between DRAM and SSD. Keywords: NAND Flash Memory, DRAM, 3D X Point Memory.
    I. INTRODUCTION
    3D X Point memory is an entirely new class of non-volatile memory. Early random-access memory performed its read and write operations using electron beams in cathode ray tubes. Initially the storage capacity of such RAM was very low; it could store only a few hundred bits, up to about a thousand bits. But compared with vacuum tubes, this RAM was very small, worked much faster, and was more power efficient. For digital storage, the Williams tube used a series of electronic charges in a CRT; its capacity was about 128 bytes.
  • Analyzing the Performance of Intel Optane DC Persistent Memory in App Direct Mode in Lenovo Thinksystem Servers
    Analyzing the Performance of Intel Optane DC Persistent Memory in App Direct Mode in Lenovo ThinkSystem Servers. Tristian "Truth" Brown, Travis Liao, Jamie Chou (Lenovo Press).
    Highlights: introduces DCPMM App Direct Mode; explores performance capabilities of persistent memory; establishes performance expectations for given workloads; discusses configurations for optimal DCPMM performance.
    Abstract: Intel Optane DC Persistent Memory is the latest memory technology for Lenovo ThinkSystem servers. This technology deviates from contemporary flash storage offerings and utilizes the ground-breaking 3D XPoint non-volatile memory technology to deliver a new level of versatile performance in a compact memory module form factor. This paper focuses on the low-level hardware performance capabilities of Intel Optane DC Persistent Memory configured in App Direct Mode operation, and provides the reader with an understanding of the workloads that produce the highest level of performance with this innovative technology.
  • Privacy Protection Cache Policy on Hybrid Main Memory
    Privacy Protection Cache Policy on Hybrid Main Memory. N.Y. Ahn and D.H. Lee.
    Abstract—We firstly suggest a privacy protection cache policy applying the duty to delete personal information on a hybrid main memory system. This cache policy includes generating random data and overwriting the random data onto the personal information. The proposed cache policy is more economical and effective regarding perfect deletion of data.
    Index Terms—Hybrid Main Memory, Remanence, Data Remanence, Non-Volatile Memory, 3D-XPoint, Cache Policy, Overwrite, Digital Forensics, Privacy Protection, Personal Information.
    I. INTRODUCTION
    On May 13, 2014, the Court of Justice of the European Union (CJEU) announced the historical decision in personal information protection, which is "the right to be forgotten", in the context of data processing on internet search engines [1], [2]. The CJEU decided that the internet service providers (ISPs) of the search engines would be responsible for the processing of personal information in web pages.
    [Fig. 1: Hybrid Main Memory. A processor attached to DDRx DIMMs and 3D XPoint DIMMs, with personal information potentially resident in both.]
    … cache still remains in NVM. After flushing, the dirty cache may be updated by the CPU. How should the dirty cache in NVM be managed? In general, data in NVM are managed by the mapping table, which is used to translate logical addresses to physical addresses. If the dirty cache is personal information, we need to delete the personal information completely for privacy protection. But complete deletion in NVM cannot be achieved. This is because physical deletion is too expensive in terms of time, power consumption, etc. Generally, the mapping table of NVM is only changed on a CPU request, so the original data, that is, the personal information, still exists in NVM.
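As a rough sketch of the overwrite idea (ours, not the authors' design), the routine below fills a region that held personal information with random bytes and writes the dirty cache lines back to the NVM before the mapping is released; a real policy would use a cryptographically secure random source and would also have to handle the mapping-table issues discussed above.

```c
/* Illustrative overwrite-on-delete sketch (not the authors' implementation):
 * replace personal information with random bytes and write the dirty cache
 * lines back to the NVM so no recoverable copy lingers in the cache or on the
 * media. Assumes a byte-addressable NVM mapping and CLWB support (-mclwb);
 * rand() is placeholder randomness, not cryptographically secure. */
#include <immintrin.h>
#include <stddef.h>
#include <stdlib.h>

#define CACHELINE 64

void scrub_personal_info(unsigned char *p, size_t len)
{
    for (size_t i = 0; i < len; i++)
        p[i] = (unsigned char)rand();            /* overwrite with random data */

    for (size_t off = 0; off < len; off += CACHELINE)
        _mm_clwb(p + off);                       /* flush each dirty line      */
    _mm_sfence();                                /* order the write-backs      */
}
```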