UNIVERSITY OF CALIFORNIA, SAN DIEGO: Model and Analysis of Trim Commands in Solid State Drives


UNIVERSITY OF CALIFORNIA, SAN DIEGO

Model and Analysis of Trim Commands in Solid State Drives

A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Electrical Engineering (Intelligent Systems, Robotics, and Control)

by Tasha Christine Frankie

Committee in charge: Ken Kreutz-Delgado, Chair; Pamela Cosman; Philip Gill; Gordon Hughes; Gert Lanckriet; Jan Talbot. 2012

The Dissertation of Tasha Christine Frankie is approved, and it is acceptable in quality and form for publication on microfilm and electronically. University of California, San Diego, 2012.

TABLE OF CONTENTS

Signature Page
Table of Contents
List of Figures
List of Tables
Acknowledgements
Vita
Abstract of the Dissertation
Chapter 1: Introduction
Chapter 2: NAND Flash SSD Primer
  2.1 SSD Layout
  2.2 Greedy Garbage Collection
  2.3 Write Amplification
  2.4 Overprovisioning
  2.5 Trim Command
  2.6 Uniform Random Workloads
  2.7 Object Based Storage
Chapter 3: Trim Model
  3.1 Workload as a Markov Birth-Death Chain
    3.1.1 Steady State Solution
  3.2 Interpretation of Steady State Occupation Probabilities
  3.3 Region of Convergence of Taylor Series Expansion
  3.4 Higher Order Moments
  3.5 Effective Overprovisioning
  3.6 Simulation Results
Chapter 4: Application to Write Amplification
  4.1 Write Amplification Under the Standard Uniform Random Workload
  4.2 Write Amplification Analysis of Trim-Modified Uniform Random Workload
  4.3 Alternate Models for Write Amplification Under the Trim-Modified Uniform Random Workload
  4.4 Simulation Results and Discussion
Chapter 5: Workload Variation: Hot and Cold Data
  5.1 Mixed Hot and Cold Data
  5.2 Separated Hot and Cold Data
  5.3 Simulation Results
  5.4 N Temperatures of Data
Chapter 6: Workload Variation: Variable Sized Objects
  6.1 From LBAs to Objects
  6.2 Fixed Sized Objects
  6.3 Variable Sized Objects
  6.4 Simulation Results
  6.5 Application to Write Amplification
Chapter 7: Conclusion
Appendix A: Asymptotic Expansion Analysis
  A.1 Useful Theorems and Facts
  A.2 Asymptotic Expansion Analysis
  A.3 An Accurate Gaussian Approximation
  A.4 A Higher Order PDF Approximation
    A.4.1 Approximating the Mean
    A.4.2 Approximating the Variance
    A.4.3 Approximating the Skew
    A.4.4 Approximating the Kurtosis
  A.5 Accuracy of the Truncation PDFs
  A.6 The K-L Divergence to and from the Normal Distribution
    A.6.1 Kullback-Leibler Divergence
    A.6.2 Application to PX
References

LIST OF FIGURES

Figure 2.1: Schematic of greedy garbage collection. Data is written to the current writing block, which is the first block in the free block pool. When garbage collection is triggered, the block with the fewest valid pages is selected from the occupied block queue. After its valid pages are copied to the current writing block, the selected block is erased and appended to the end of the free block pool.

Figure 3.1: Steady state distribution of the number of In-Use LBAs for the Trim-modified uniform random workload under various amounts of Trim, with u = 1000. Note that a higher percentage of Trim requests in the workload results in a lower mean but a higher variance.

Figure 3.2: Steady state distribution of the effective spare factor for the Trim-modified uniform random workload with 25% of requests Trim over 1000 user-allowed LBAs, and various amounts of manufacturer-specified spare factor. Note that the improvement in the effective spare factor over the manufacturer-specified spare factor has a larger variance for smaller manufacturer-specified spare factor.

Figure 3.3: Comparison of analytic and simulation PDF, for 40% Trim. The analytic values closely match the simulation results.

Figure 3.4: A histogram of one run of simulation results for u = 1000 and q = 0.4, along with plots of the exact analytic PDF (Equation 3.1) in red, the Stirling approximation of the PDF (Equation 3.2) in black dots, and the Gaussian approximation of the PDF (Equation 3.3) in green. The line for the exact analytic PDF is not even visible because the line for the Gaussian approximation of the PDF covers it completely. The Stirling approximation of the PDF (black dots) lines up well with the Gaussian approximation and the (covered) analytic PDF. Even at this low value of u = 1000, the theoretical approximations are effectively identical to the exact analytic PDF.

Figure 3.5: A histogram of one run of simulation results for u = 25 and q = 0.3, along with plots of the exact analytic PDF (Equation 3.1) in red, the Stirling approximation of the PDF (Equation 3.2) in black, and the Gaussian approximation of the PDF (Equation 3.3) in green. A slight negative skew of the exact analytic PDF is evident when compared to the Gaussian approximation. Note that the exact analytic PDF is the best fit of the histogram. A series of 64 Monte-Carlo simulations with these values results in an average skew of mean ± one standard deviation = −0.299 ± 0.004, which agrees well with the theoretically computed approximation value of −0.306 ± R σ³ = −0.306 ± R(0.029). The average simulation kurtosis is 0.064; the approximate theoretical value is 0.070; both values are well within overlapping error bounds. The top graph is a plot of the numerically normalized equations as shown in the text. In the bottom graph, a half-bin shift correction has been made to the Stirling and Gaussian approximation densities as described in the text.

Figure 3.6: Theoretical results for utilization under various amounts of Trim. When more Trim requests are in the workload, a lower percentage of LBAs are In-Use at steady state, but the variance is higher than seen in a workload with fewer Trim requests.

Figure 3.7: Theoretical results for the effective spare factor under various amounts of Trim, with a manufacturer-specified spare factor of 0.11. When more Trim requests are in the workload, a higher effective spare factor is seen, but with a higher variance than seen in a workload with fewer Trim requests.

Figure 3.8: Mean and 3σ values of effective spare factor versus manufacturer-specified spare factor. This shows that an SSD with zero manufacturer-specified spare factor effectively has a spare factor of 33% when the workload contains 25% Trim requests. This level of overprovisioning without the Trim command requires 50% more physical storage than the user is allowed to access. The ±3σ values were calculated using a small Tp = 1000, so they would be visible. Typical devices have Tp >> 1000, resulting in a negligible variance.

Figure 3.9: Theoretical results comparing the effective spare factor to the manufacturer-specified spare factor. The mean and 3σ standard deviations are shown. For this figure, Tp = 1280 was used so that the 3σ values are visible. Larger gains in the effective spare factor are seen for smaller specified spare factor. In this figure, we see that a workload of 10% Trim requests transforms an SSD with a manufacturer-specified spare factor of zero into one with an effective spare factor of 0.11. To obtain this spare factor through physical overprovisioning alone would require 12.5% more physical space than the user is allowed to access!

Figure 4.1: Equations needed to compute the empirical model-based algorithm for A_Hu (adapted from [1, 2]). w is the window size, and allows for the windowed greedy garbage collection variation. Setting w = Tp − r_np is needed for the standard greedy garbage collection discussed in this dissertation. For an explanation of the theory behind these formulas, see [1, 2].

Figure 4.2: Write amplification versus probability of Trim under a Trim-modified uniform random workload and greedy garbage collection, when Sf = 0.2. The plots labeled Hu, Agarwal, and Xiang all show the Trim-modified models for write amplification. A_Agarwal^(T) gives such an optimistic prediction that for large q it foresees fewer actual writes than requested writes! A_Hu^(T) gives a more realistic, but still optimistic, prediction for write amplification. A_Xiang^(T) gives the best, and generally quite accurate, prediction for write amplification.

Figure 4.3: Simulation and theoretical results for write amplification under various amounts of Trim and Sf = 0.1. The Trim-modified Xiang calculation is a very good match. The Trim-modified calculations for Hu and Agarwal are optimistic about the write amplification, with the Agarwal calculation being so optimistic as to give an impossible write amplification of less than 1 for high amounts of Trim!

Figure 5.1: Write amplification for mixed hot and cold data. Physical spare factor is 0.2. Trim requests for cold data occur with probability 0.1; Trim requests for hot data occur with probability 0.2. Cold data is requested 10% of the time, with hot data requested the remaining 90% of the time. Notice that the theoretical values, based only on the level of effective overprovisioning, are optimistic when cold data accounts for more than 20% of the user space.

Figure 5.2: Write amplification for separated hot and cold data. Physical spare factor is 0.2.
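A worked note on the effective-overprovisioning arithmetic quoted in the captions of Figures 3.8 and 3.9: assuming the usual definition of spare factor, Sf = (Tp − u) / Tp, providing a given Sf through physical space alone requires Tp = u / (1 − Sf) physical LBAs, which reproduces the 50% (Sf = 0.33) and 12.5% (Sf = 0.11) figures above. A minimal sketch under that assumption:

```python
# Minimal sketch (not from the dissertation): extra physical capacity implied by a
# spare factor, assuming the standard definition Sf = (Tp - u) / Tp, where Tp is the
# number of physical LBAs and u the number of user-visible LBAs.

def physical_overhead(spare_factor: float) -> float:
    """Fraction of physical space beyond the user space needed to provide spare_factor."""
    return 1.0 / (1.0 - spare_factor) - 1.0

print(physical_overhead(0.33))  # ~0.49, i.e. about 50% more physical storage (25% Trim case)
print(physical_overhead(0.11))  # ~0.12, i.e. about 12.5% more physical storage (10% Trim case)
```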
Recommended publications
  • Technology of Enterprise Solid State Drive
White Paper | April 2014. Technology of Enterprise Solid State Drive. Large-capacity, high-performance storage is increasingly required in cloud computing and data centers because of the rapidly growing amount of information and number of accesses. When many people use internet services such as social network services (SNS) or web search engines, they expect an instant response from the service. In corporate on-premise network systems and financial trading systems, a difference of microseconds in response time can make a large difference to the business outcome. As enterprise server and storage systems evolve, more focus is placed on speed, response time, and latency to support large numbers of SNS user accesses and time-critical business transactions, and the Solid State Drive (SSD) plays an important role in making server transactions faster. An enterprise-class SSD is a storage device built from NAND flash memories and designed to store and retrieve data at very high speed under heavy-duty operating conditions with reliable endurance. Deploying SSDs helps system managers boost computing and storage access speed. Positioned between the host processor and hard disk drive (HDD) storage pools as a large cache memory, an SSD increases system transaction speed. In addition to higher performance, SSDs also offer a smaller footprint and lower power consumption. Toshiba has matured leading-edge SSD technologies as the first company to develop NAND flash memory. Interfaces Host Controller Interface) is popular for client application.
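The white paper describes positioning the SSD as a large cache between the host processor and HDD storage pools. A schematic, hypothetical read path for such a cache tier (names and policy are illustrative only, not taken from the paper):

```python
# Hypothetical read-through cache tier: serve from the SSD cache when possible,
# otherwise read from the HDD pool and populate the cache. Illustrative only.

ssd_cache = {}            # block address -> data cached on the SSD tier
hdd_pool = {7: b"cold"}   # backing HDD storage (toy stand-in)

def read(addr: int) -> bytes:
    if addr in ssd_cache:              # fast path: microsecond-class SSD access
        return ssd_cache[addr]
    data = hdd_pool[addr]              # slow path: millisecond-class HDD access
    ssd_cache[addr] = data             # populate the cache for future reads
    return data

print(read(7))   # first read misses and fills the cache; later reads hit the SSD
```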
  • MULTI-STREAM SSD TECHNOLOGY 2X READ PERFORMANCE & WRITE ENDURANCE T10 SCSI STANDARD
MULTI-STREAM SSD TECHNOLOGY: 2x READ PERFORMANCE & WRITE ENDURANCE, T10 SCSI STANDARD

NAND flash solid-state drives (SSDs) are increasingly deployed within enterprise datacenters thanks to their high performance and low power consumption. Decreasing NAND flash cost-per-gigabyte is also accelerating SSD adoption to replace hard disk drives (HDDs) in storage applications. One SSD drawback is that, as a device continually writes data, valid data can be fragmented across the NAND flash medium (see Figure 1). To reclaim free space, garbage collection activity copies user data to new storage blocks and erases invalid data storage blocks, thereby allowing the media to store new write data. However, garbage collection processing decreases both SSD read and write performance. In addition, garbage collection increases write amplification because individual host data write requests can result in multiple internal SSD writes to the NAND medium. Here, valid data is first read from a media block about to be erased, then rewritten to another media storage block, accompanied by the write to store new host data. Consequently, write amplification decreases SSD lifetime because each NAND chip can endure a certain number of writes before it begins to fail.

MULTI-STREAM SSD TECHNOLOGY: With multi-stream technology, an innovative new technology standardized in T10 and implemented in the Samsung PM953S NVMe SSD, garbage collection can be eliminated or reduced by storing associated or similar-lifetime data in the same erase block. This helps avoid NAND erase block fragmentation for data with the same lifetime.

System Configuration: Quad Core Intel i7-4790 CPU, 3.60 GHz. FIO Configuration: I/O workload of 70% read / 30% write; 4 sequential write jobs.
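To make the placement idea concrete, here is a hypothetical sketch of stream-aware write placement in the spirit of the description above; the class, constants, and stream IDs are illustrative assumptions, not the T10 command set or the PM953S implementation.

```python
# Hypothetical sketch of multi-stream placement: writes carrying the same stream ID
# (similar expected data lifetime) are appended to the same open erase block, so a
# whole block tends to become invalid together and garbage collection has little or
# no valid data to copy. Names are illustrative, not the T10 commands or SSD firmware.

from collections import defaultdict

BLOCK_PAGES = 256  # pages per erase block (assumed size)

class MultiStreamPlacer:
    def __init__(self):
        self.open_blocks = defaultdict(list)  # stream_id -> LBAs written to the open block
        self.sealed_blocks = []               # (stream_id, lbas) blocks later handled by GC

    def write(self, stream_id: int, lba: int) -> None:
        block = self.open_blocks[stream_id]
        block.append(lba)
        if len(block) == BLOCK_PAGES:         # block is full: seal it, open a fresh one
            self.sealed_blocks.append((stream_id, block))
            self.open_blocks[stream_id] = []

placer = MultiStreamPlacer()
placer.write(stream_id=1, lba=42)  # e.g. short-lived journal/metadata stream
placer.write(stream_id=2, lba=7)   # e.g. long-lived cold-data stream
```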
  • Filesystem Considerations for Embedded Devices ELC2015 03/25/15
Filesystem considerations for embedded devices (ELC2015, 03/25/15). Tristan Lelong, Senior embedded software engineer.

ABSTRACT: The goal of this presentation is to answer a question asked by several customers: which filesystem should you use within your embedded design's eMMC/SD card? These storage devices use a standard block interface, compatible with traditional filesystems, but the constraints are not those of desktop PC environments. EXT2/3/4, BTRFS, and F2FS are the first of many solutions that come to mind, but how do they all compare? Typical queries include performance, longevity, tool availability, support, and power-loss robustness. This presentation will not dive into implementation details but will instead summarize the answers with the help of various figures and meaningful test results.

TABLE OF CONTENTS: 1. Introduction 2. Block devices 3. Available filesystems 4. Performances 5. Tools 6. Reliability 7. Conclusion

ABOUT THE AUTHOR
• Tristan Lelong
• Embedded software engineer @ Adeneo Embedded
• French, living in the Pacific Northwest
• Embedded software, free software, and Linux kernel enthusiast

INTRODUCTION: More and more embedded designs rely on smart memory chips rather than bare NAND or NOR. This presentation will start by describing:
• Some context to help understand the differences between NAND and MMC
• Some typical requirements found in embedded device designs
• Potential filesystems to use on MMC devices
Focus will then move to block filesystems: how they are supported and what features they advertise. To help understand how they compare, we will present some benchmarks and comparisons regarding:
• Tools
• Reliability
• Performances

BLOCK DEVICES (MMC, eMMC, SD CARD). Vocabulary:
• MMC: MultiMediaCard is a memory card unveiled in 1997 by SanDisk and Siemens based on NAND flash memory.
  • SATA 6Gb/S 3I+1 SSD Hybrid Pcie Quick Installation Guide
SATA 6Gb/s 3i+1 SSD Hybrid PCIe Quick Installation Guide

Introduction: The SATA 6Gb/s 3i+1 SSD Hybrid PCIe is a high performance SATA host adapter which adds three Serial ATA 6Gb/s channels and one SATA SSD socket to your PCIe-equipped computer and helps users to save more space in their computer.

Features and Benefits
• Compliant with Serial ATA specification, Revision 3.0
• Built-in socket for a 2.5" SATA SSD for space saving and superior drive performance
• Hybrid (HyperDuo) enables 80% of SSD performance at one-third the cost
• Hardware RAID to offload the host CPU for max performance. Supports RAID 0, 1 & 10
• Supports Port Multiplier FIS-based and Native Command Queuing (NCQ)
• Supports TRIM to extend the life of SSDs for maximum durability

System Requirements
• Desktop PC with an available 4-lane (or more) PCIe slot
• Windows® 8 (32-/64-bit) / 7 (32-/64-bit) / Vista (32-/64-bit) / XP (32-/64-bit) / Server 2003 & 2008 (32-/64-bit) / Server 2008 R2

Package Contents
• SATA 6Gb/s 3i+1 SSD Hybrid PCIe
• SATA data cables (2)
• Screws (4)
• Driver CD
• Quick installation guide

Layout (Figure 1): HDD LED pins (+/-, front chassis LED), 3 SATA internal connectors, screw holes, SSD slot.

Application (Figure 2): Three HDDs or SSDs connected to the internal SATA channels; one SSD installed to the SSD slot.

Hardware Installation, SSD Installation: Install the 2.5" SATA SSD to the SATA 6Gb/s 3i+1 SSD Hybrid PCIe before card installation. 1. Simply insert the SATA SSD to end of the SSD slot.
  • Things You Should Know About Solid State Storage
Things You Should Know About Solid State Storage: Snippets from SNIA Tutorials and other Giblets. Marty Czekalski, President, SCSI Trade Association; Sr. Staff Program Manager, Seagate Technology.

SNIA Legal Notice: The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced in their entirety without modification; the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. Neither the author nor the presenter is an attorney and nothing in this presentation is intended to be, or should be construed as, legal advice or an opinion of counsel. If you need legal advice or a legal opinion please contact your attorney. The information presented herein represents the author's personal opinion and current understanding of the relevant issues involved. The author, the presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. NO WARRANTIES, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK.

Abstract: What You Should Know About Solid State Storage. This session will appeal to Data Center Managers, Development Managers, and those that are seeking an overview of Solid State Storage. It's comprised of excerpts from SNIA Solid State Tutorials and other sources.
  • How Controllers Maximize SSD Life
SSSI TECH NOTES: How Controllers Maximize SSD Life. January 2013. By SNIA SSSI Member Jim Handy, "The SSD Guy", Objective Analysis (www.snia.org)

About the Solid State Storage Initiative: The SNIA Solid State Storage Initiative (SSSI) fosters the growth and success of the market for solid state storage in both enterprise and client environments. Members of the SSSI work together to promote the development of technical standards and tools, educate the IT communities about solid state storage, perform market outreach that highlights the virtues of solid state storage, and collaborate with other industry associations on solid state storage technical work. SSSI member companies come from a wide variety of segments in the SSD industry (www.snia.org/forums/sssi/about/members).

Table of Contents: Introduction; How Controllers Maximize SSD Life; Better Wear Leveling; External Data Buffering; Improved ECC; Other Error Management; Reduced Write Amplification; Over Provisioning; Feedback on Block Wear; Internal NAND Management.

Introduction: This booklet contains a collection of posts from Jim Handy's SSD Guy blog (www.TheSSDGuy.com) which explores the various techniques designers use to increase SSD life.

How Controllers Maximize SSD Life: How do controllers maximize the life of an SSD? After all, MLC flash has a lifetime of only 10,000 erase/write cycles or fewer, and that is a very small number compared to the write traffic an SSD is expected to see in a high-workload environment, especially in the enterprise.
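As one concrete illustration of the first technique listed above (better wear leveling), a toy allocation policy might simply hand out the least-worn free block; the data structures below are assumptions for illustration, not taken from any particular controller.

```python
# Illustrative wear-leveling idea: when the controller needs a fresh block, prefer the
# free block with the fewest erases so wear spreads evenly and no block reaches its
# P/E limit early. Toy policy only, not any specific controller's algorithm.

def pick_block(free_blocks: dict) -> int:
    """free_blocks maps block_id -> erase_count; return the least-worn free block."""
    return min(free_blocks, key=free_blocks.get)

free = {0: 120, 1: 95, 2: 300}
print(pick_block(free))  # -> 1, the block with the fewest erase cycles
```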
  • Parafs: a Log-Structured File System to Exploit the Internal Parallelism of Flash Devices
ParaFS: A Log-Structured File System to Exploit the Internal Parallelism of Flash Devices. Jiacheng Zhang, Jiwu Shu, Youyou Lu (Tsinghua University)

Outline: Background and Motivation; ParaFS Design; Evaluation; Conclusion

Solid State Drives: Internal Parallelism
• Internal parallelism exists at the channel, chip, die, and plane levels.
• Chips in one package share the same 8/16-bit I/O bus, but have separate chip enable (CE) and ready/busy (R/B) control signals.
• Each die has one internal R/B signal.
• Each plane contains thousands of flash blocks and one data register.
• Internal parallelism → high bandwidth.
[Figure: flash device organization, from host and FTL through the H/W interface and interconnect to channels, chips, dies (Die 0, Die 1), and planes (Plane 0, Plane 1), each plane holding blocks and a register.]

Flash File Systems
• Log-structured file system: duplicate functions (space allocation, garbage collection).
• Semantic isolation: FTL abstraction, block I/O interface, log on log.
[Figure: a log-structured file system (namespace, allocation, GC) issuing READ/WRITE/TRIM to an FTL (mapping, allocation, GC, wear leveling, ECC) over channels 0..N of flash memory.]

Observation: F2FS vs. Ext4 (under heavy write traffic)
• YCSB: 1000w (10 million) random Read and Update operations
• 16 GB flash space + 24 GB write traffic
[Charts: GC efficiency (%), normalized throughput, and number of recycled blocks versus the number of channels (1 to 32), comparing EXT4 and F2FS.] F2FS has poorer performance than Ext4 on SSDs.
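The channel, chip, die, and plane hierarchy above is what provides the internal parallelism. A generic illustration of striping consecutive physical page numbers across those units so that neighboring writes can proceed concurrently (a simple static mapping for illustration, not ParaFS's actual allocation scheme):

```python
# Toy illustration of internal parallelism: a physical page number is striped across
# channels first, then chips, dies, and planes, so consecutive writes land on units
# that can operate concurrently. Illustrative static mapping only.

CHANNELS, CHIPS, DIES, PLANES = 4, 2, 2, 2

def locate(page_no: int):
    channel = page_no % CHANNELS
    chip    = (page_no // CHANNELS) % CHIPS
    die     = (page_no // (CHANNELS * CHIPS)) % DIES
    plane   = (page_no // (CHANNELS * CHIPS * DIES)) % PLANES
    return channel, chip, die, plane

for p in range(6):
    print(p, locate(p))   # consecutive pages hit different channels first
```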
  • Write Amplification Analysis in Flash-Based Solid State Drives
Write Amplification Analysis in Flash-Based Solid State Drives. Xiao-Yu Hu, Evangelos Eleftheriou, Robert Haas, Ilias Iliadis, Roman Pletka. IBM Research, IBM Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland. {xhu,ele,rha,ili,rap}@zurich.ibm.com

ABSTRACT: Write amplification is a critical factor limiting the random write performance and write endurance in storage devices based on NAND-flash memories such as solid-state drives (SSD). The impact of garbage collection on write amplification is influenced by the level of over-provisioning and the choice of reclaiming policy. In this paper, we present a novel probabilistic model of write amplification for log-structured flash-based SSDs. Specifically, we quantify the impact of over-provisioning on write amplification analytically and by simulation, assuming workloads of uniformly-distributed random short writes. Moreover, we propose modified versions of the greedy garbage-collection reclaiming policy and compare their performance.

...age computer architecture, ranging from notebooks to enterprise storage systems. These devices provide random I/O performance and access latency that are orders of magnitude better than that of rotating hard-disk drives (HDD). Moreover, SSDs significantly reduce power consumption and dramatically improve robustness and shock resistance thanks to the absence of moving parts. NAND-flash memories have unique characteristics that pose challenges to the SSD system design, especially the aspects of random write performance and write endurance. They are organized in terms of blocks, each block consisting of a fixed number of pages, typically 64 pages of 4 KiB each. A block is the elementary unit for erase operations,
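Since the abstract analyzes write amplification under greedy garbage collection and uniformly random writes, a small self-contained Monte-Carlo sketch of that setting is given below (page-mapped, log-structured FTL with greedy victim selection). The sizes, warm-up protocol, and class names are illustrative assumptions, not the authors' model or code.

```python
import random

class GreedySSD:
    """Page-mapped, log-structured SSD with greedy GC (illustrative sizes)."""

    def __init__(self, blocks=256, pages=64, spare=0.2):
        self.pages = pages
        self.user_lbas = int(blocks * pages * (1 - spare))
        self.block_lbas = [set() for _ in range(blocks)]  # valid LBAs held by each block
        self.used = [0] * blocks                          # programmed page slots per block
        self.lba_block = {}                               # LBA -> block holding its valid copy
        self.free = list(range(blocks))                   # free block pool
        self.occupied = []                                # fully written blocks awaiting GC
        self.current = None                               # current writing block
        self.host_writes = self.flash_writes = 0

    def host_write(self, lba):
        self.host_writes += 1
        self._program(lba)

    def _program(self, lba):
        old = self.lba_block.get(lba)
        if old is not None:
            self.block_lbas[old].discard(lba)             # invalidate the stale copy
        blk = self._frontier()
        self.block_lbas[blk].add(lba)
        self.lba_block[lba] = blk
        self.used[blk] += 1
        self.flash_writes += 1

    def _frontier(self):
        if self.current is not None and self.used[self.current] == self.pages:
            self.occupied.append(self.current)            # writing block is full
            self.current = None
        while self.current is None:
            if self.free:
                self.current = self.free.pop(0)
            else:
                self._greedy_gc()                         # reclaim a block, then retry
        return self.current

    def _greedy_gc(self):
        victim = min(self.occupied, key=lambda b: len(self.block_lbas[b]))
        self.occupied.remove(victim)
        survivors = list(self.block_lbas[victim])
        self.block_lbas[victim] = set()
        self.used[victim] = 0                             # "erase" the victim
        self.free.append(victim)
        for lba in survivors:                             # copy-forward still-valid pages
            self._program(lba)

ssd = GreedySSD()
for lba in range(ssd.user_lbas):                          # pre-fill every user LBA once
    ssd.host_write(lba)
ssd.host_writes = ssd.flash_writes = 0                    # measure only the steady state
for _ in range(200_000):
    ssd.host_write(random.randrange(ssd.user_lbas))       # uniform random overwrites
print("measured write amplification:", ssd.flash_writes / ssd.host_writes)
```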
  • Why Data Retention Is Becoming More Critical in Automotive
White Paper: Why Data Retention is Becoming More Critical in Automotive Applications. Understanding and Improving Managed NAND Flash Memory for Higher Data Retention and Extended Product Life. Christine Lee – Kevin Hsu – Scott Harlin, KIOXIA America, Inc.

Assisted and self-driving vehicles are fully loaded with electronics that support the infrastructure within. They have become mobile data centers that require an immense amount of computing power to capture, process and analyze data 'near-instantaneously' from a myriad of sensors, recorders, algorithms and external connections. A massive amount of this data is either stored locally or uploaded to cloud storage to be leveraged and transformed into real-time intelligence and value. Data storage has now become a critical part of automotive design, placing a premium on high data retention and continual data integrity. The automotive environment creates unique challenges and presents a much different scenario from computing equipment in air-conditioned server rooms under controlled temperatures. Due to the extreme temperatures that can affect the NAND flash storage used within vehicles, there are information technology (IT) considerations that require different design approaches. Understanding how data wears out, how temperature and NAND flash memory characteristics can affect data retention and product life, and what practices can be used to improve them, are the focuses of this paper.

'Under the Hood' in Automotive Data Storage: Depending on the source, assisted and self-driving vehicles generate terabytes (TB) of data daily. One prediction forecasts that between 5 TB and 20 TB of data will be consumed per day per vehicle, which is overwhelmingly more data than the average person consumes on a smartphone.
  • Getting the Most out of SSD: Sometimes, Less Is More
Getting the Most Out of SSD: Sometimes, Less is More. Bruce Moxon, Chief Solutions Architect, STEC. Flash Memory Summit 2012, Santa Clara, CA

Overview
• A quick solid state backgrounder: NAND flash, log-structured file systems, GC, and overprovisioning – less really is more!
• Benchmarking: operationally representative testing
• Applications: Why flash, why now? Enough. Caching – less *is* more
• Optimizing the stack and changing application architectures

Solid State Performance Characteristics (General). General performance characteristics; YMMV, depending on device architecture, interface (SATA, SAS, PCIe; a 2-4x performance range), transfer sizes, and QDs (concurrency).
• HDD (SAS): sequential read 200 MB/s, sequential write 200 MB/s; random read 200 IOPS, random write 200 IOPS
• SSD/PCIe: sequential read 0.5-1.5 GB/s, sequential write 0.3-1 GB/s; 4K random read 60-200K IOPS, 4K random write 15-40K IOPS (roughly 2-8x the HDD on sequential and 100-1000x on random I/O)
• Cost differential: $0.50-$1.50/GB SAS HDD; $2-$12/GB SSD/PCIe
• Response time (random read / write): HDD 8 ms / 0.5 ms*; SSD 60 us / 20 us
• Sweet spot: high random IOPS (especially read), low latency

Solid State Storage Fundamentals: Everything I Needed to Know I learned at FMS
• Data is read/written in pages (typically 4-8 KB)
• Data is *erased* in multi-page blocks (e.g., 128)
• Data can only be written (programmed) into a previously erased block (no "overwrite")
• Background garbage collection (GC) copies valid pages to "squeeze out" deleted pages and make room for new data; OS/FS integration (TRIM)
• Additional write amplification can occur in support of wear leveling and GC
• Flash memory can only be programmed/erased (P/E) a limited number of times (endurance); performance degrades over device lifetime
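One of the bullets above mentions OS/FS integration via TRIM. A tiny illustrative sketch of why TRIM helps garbage collection (hypothetical data structures, not any particular FTL):

```python
# Tiny sketch of the TRIM benefit: a trimmed LBA is dropped from the valid set, so GC
# does not copy it forward and it costs no extra flash write. Structures are illustrative.

valid_lbas_in_block = {0: {10, 11, 12}, 1: {20, 21}}   # block -> LBAs still valid in it
lba_location = {10: 0, 11: 0, 12: 0, 20: 1, 21: 1}     # LBA -> block holding its copy

def trim(lba):
    block = lba_location.pop(lba, None)
    if block is not None:
        valid_lbas_in_block[block].discard(lba)         # page becomes invalid immediately

trim(11)
# GC of block 0 now copies only LBAs {10, 12}; without the TRIM it would also have
# copied LBA 11 even though the file system had already deleted that data.
print(valid_lbas_in_block[0])
```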
  • Improving Endurance with Garbage Collection, TRIM and Wear Leveling
Application Note: Solid State Storage and Memory, AN07-0819-01. Improving Endurance with Garbage Collection, TRIM and Wear Leveling.

Introduction: Prior to the advent of solid state drives (SSDs), hard disk drives (HDDs) were the primary storage medium. HDDs use heads and platters to store data and can be written to and erased an almost unlimited number of times, assuming no failure of mechanical or circuit components (one of the main drawbacks of HDDs for industrial applications). SSDs, on the other hand, have no moving parts and thus are much more durable; however, they only support a finite number of program/erase (P/E) operations. For single-level cell (SLC) NAND, each cell can be programmed/erased 60,000 to 100,000 times. Wear leveling and other flash management techniques, while they consume some of those cycles, protect the NAND and the data on the SSD; this overhead is part of the write amplification factor (WAF). The final number is typically translated into Terabytes Written (TBW). For (industrial-grade) multi-level cell (MLC) and triple-level cell (TLC) NAND, each cell can be programmed/erased 3,000 to 10,000 times. These conventional NAND flash types use what is referred to as "floating gate" technology, whereby during write operations a tunnel current passes through the oxide layer of the floating gate in the NAND cell (Figure 1: NAND Flash Floating Gate). This occurrence, known as electron tunneling, causes the oxide layer to degrade with each charged/trapped electron; the more the cell is programmed (written to or erased), the faster it degrades, eventually causing the cell block to wear out, at which point it can no longer be programmed and becomes a read-only device.
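The quantities above (rated P/E cycles, WAF, TBW) are commonly combined in a rule-of-thumb endurance estimate, TBW ≈ capacity × P/E cycles ÷ WAF. A small sketch of that arithmetic with illustrative numbers (this is the common approximation, not necessarily this vendor's exact rating method):

```python
# Rule-of-thumb endurance estimate (illustrative, not a vendor rating method):
#   TBW ~= capacity * rated P/E cycles / write amplification factor (WAF)

def tbw_estimate(capacity_gb: float, pe_cycles: int, waf: float) -> float:
    """Approximate terabytes that can be written over the drive's life."""
    return capacity_gb * pe_cycles / waf / 1000.0  # convert GB to TB

# Example: a 256 GB industrial MLC drive rated for 3,000 P/E cycles with a WAF of 3.
print(tbw_estimate(256, 3000, 3.0))  # -> 256.0, i.e. roughly 256 TBW
```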
  • Determining Solid-State Drive (SSD) Lifetimes for SEL Rugged Computers
    Application Note AN2016-03 Determining Solid-State Drive (SSD) Lifetimes for SEL Rugged Computers Jerry Bennett and Ian Olson INTRODUCTION There are three types of solid-state drives (SSDs) offered for SEL’s computer family. The type of SSD you choose depends on your application. This application note briefly explains the types of SSDs offered by SEL and assists you in determining the best type of SSD to use for your application. PROBLEM It can be difficult to determine the correct SSD for your application. When evaluating which type of SSD best meets your application needs, consider the following key operational variables: environment, capacity, and endurance. Environmental factors include the temperature of the operating environment and the moisture or chemicals in the air that may cause the internal components of the SSDs to require conformal coating in order to be protected. Capacity is the amount of data storage space needed to store the operating system, application software, and data or log files (consider both current needs and future room for expansion). Endurance is a measure of how many times the drive can be overwritten before it wears out. For more information on SSD flash endurance, refer to the “NAND Flash Memory Reliability in Embedded Computer Systems” white paper available on the SEL website (https://www.selinc.com). The three types of SSDs that SEL offers use either single-level cell (SLC), industrial multilevel cell (iMLC), or consumer multilevel cell (MLC) flash memory. Use Figure 1 as a starting point to determine which type of SSD fits your application. For example, applications requiring high capacity and high endurance in an industrial environment should use SLC or iMLC SSD types.
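The excerpt references a selection chart (Figure 1) that is not reproduced here, so the following is only a rough, hypothetical sketch of the kind of decision flow it describes; the thresholds and ordering are assumptions, not SEL's actual guidance:

```python
# Hypothetical SSD-type selection helper mirroring the criteria named in the note
# (environment, capacity, endurance). Illustrative only; consult the note's Figure 1
# and SEL's white paper for the real guidance.

def suggest_ssd_type(industrial_env: bool, high_endurance: bool, high_capacity: bool) -> str:
    if industrial_env and high_endurance:
        # e.g. "applications requiring high capacity and high endurance in an
        # industrial environment should use SLC or iMLC SSD types"
        return "SLC or iMLC" if high_capacity else "SLC"
    if industrial_env:
        return "iMLC"
    return "MLC (consumer)"

print(suggest_ssd_type(industrial_env=True, high_endurance=True, high_capacity=True))
```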