How Controllers Maximize SSD Life
Recommended publications
-
Technology of Enterprise Solid State Drive
White Paper | April 2014. Large-capacity, high-performance storage is increasingly required in cloud computing and data centers because of the rapidly growing amount of information and number of accesses. When many people heavily access Internet services such as social networking services (SNS) or web search engines, they always expect an instant response. In corporate on-premise network systems and financial trading systems, microsecond-range differences in response time make a big difference to business outcomes. As enterprise servers and storage systems evolve, more focus is placed on speed, response time, and latency to support large numbers of SNS user accesses and time-critical business transactions, and the solid state drive (SSD) plays an important role in making server transactions faster. An enterprise-class SSD is a storage device built from NAND flash memories and designed to store and retrieve data at very high speed under high-duty operating conditions with reliable endurance. Deploying SSDs is very beneficial for system managers who want to boost computing and storage access speed: positioned between the host processor and hard disk drive (HDD) storage pools as a large cache memory, an SSD increases system transaction speed. Beyond higher performance, SSDs also offer a smaller footprint and lower power consumption. Toshiba has matured leading-edge SSD technologies as the first company to develop NAND flash memory. Interfaces: …Host Controller Interface) is popular for client applications.
-
MULTI-STREAM SSD TECHNOLOGY 2X READ PERFORMANCE & WRITE ENDURANCE T10 SCSI STANDARD
NAND flash solid-state drives (SSDs) are increasingly deployed in enterprise datacenters thanks to their high performance and low power consumption. Decreasing NAND flash cost per gigabyte is also accelerating SSD adoption as a replacement for hard disk drives (HDDs) in storage applications. One SSD drawback is that, as a device continually writes data, valid data becomes fragmented across the NAND flash medium (see Figure 1). To reclaim free space, garbage collection copies user data to new storage blocks and erases blocks holding invalid data, allowing the media to store new write data. However, garbage collection decreases both SSD read and write performance. It also increases write amplification, because an individual host write request can result in multiple internal SSD writes to the NAND medium: valid data is first read from a media block about to be erased and rewritten to another media block, in addition to the write that stores the new host data. Consequently, write amplification decreases SSD lifetime, because each NAND chip can endure only a certain number of writes before it begins to fail. With multi-stream technology, an innovative technology standardized in T10 and implemented in the Samsung PM953S NVMe SSD, garbage collection can be reduced or eliminated by storing data with associated or similar lifetimes in the same erase block, which avoids fragmenting NAND erase blocks with data of mixed lifetimes. Test setup: hardware system with a quad-core Intel i7-4790 CPU at 3.60 GHz; FIO workload of 70% read / 30% write with 4 sequential write jobs.
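To make the effect concrete, here is a minimal Python sketch of the placement idea (an invented two-stream workload with arbitrary parameters; it is not Samsung's implementation): tagging writes by expected lifetime keeps short-lived data out of the blocks that hold long-lived data, so a greedy reclaimer has far less valid data to copy.

```python
import random
from collections import defaultdict

PAGES_PER_BLOCK = 64
random.seed(0)

# Hypothetical workload: 70% "hot" writes that keep overwriting 512 logical
# pages, 30% "cold" writes to ever-new logical pages (written once, then kept).
writes, cold_lpn = [], 1_000_000
for _ in range(20_000):
    if random.random() < 0.7:
        writes.append(("hot", random.randrange(512)))
    else:
        writes.append(("cold", cold_lpn))
        cold_lpn += 1

# A physical page is still valid only if it holds the latest copy of its LPN.
last_write = {lpn: i for i, (_, lpn) in enumerate(writes)}

def fill_blocks(use_streams):
    """Place writes into erase blocks, optionally keeping one open block per stream."""
    open_blocks, blocks = defaultdict(list), []
    for i, (stream, _) in enumerate(writes):
        key = stream if use_streams else "mixed"
        open_blocks[key].append(i)
        if len(open_blocks[key]) == PAGES_PER_BLOCK:
            blocks.append(open_blocks.pop(key))
    return blocks + [blk for blk in open_blocks.values() if blk]

def copies_to_free(blocks, need):
    """Valid pages a greedy reclaimer must copy to erase the `need` cheapest blocks."""
    costs = sorted(sum(1 for i in blk if last_write[writes[i][1]] == i)
                   for blk in blocks)
    return sum(costs[:need])

for use_streams in (False, True):
    layout = fill_blocks(use_streams)
    label = "per-stream" if use_streams else "mixed"
    print(f"{label:10s} pages copied to free 100 blocks: {copies_to_free(layout, 100)}")
```

With mixed placement every block ends up holding some still-valid cold pages, so no block can be erased for free; with per-stream placement most hot blocks are entirely stale by the time they are reclaimed.
-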
Filesystem Considerations for Embedded Devices ELC2015 03/25/15
Tristan Lelong, Senior Embedded Software Engineer, Adeneo Embedded.
Abstract: the goal of this presentation is to answer a question asked by several customers: which filesystem should you use on your embedded design's eMMC/SD card? These storage devices use a standard block interface compatible with traditional filesystems, but their constraints are not those of desktop PC environments. EXT2/3/4, BTRFS, and F2FS are the first of many solutions that come to mind, but how do they all compare? Typical queries include performance, longevity, tool availability, support, and power-loss robustness. This presentation does not dive into implementation details but summarizes the answers with the help of various figures and meaningful test results.
Table of contents: 1. Introduction; 2. Block devices; 3. Available filesystems; 4. Performance; 5. Tools; 6. Reliability; 7. Conclusion.
About the author: Tristan Lelong is an embedded software engineer at Adeneo Embedded, French, living in the Pacific Northwest, and an embedded software, free software, and Linux kernel enthusiast.
Introduction: more and more embedded designs rely on smart memory chips rather than bare NAND or NOR. The presentation starts by describing some context to help understand the differences between NAND and MMC, typical requirements found in embedded device designs, and potential filesystems to use on MMC devices. Focus then moves to block filesystems: how they are supported and what features they advertise. To help understand how they compare, benchmarks and comparisons are presented regarding tools, reliability, and performance.
Block devices (MMC, eMMC, SD card) vocabulary: MMC (MultiMediaCard) is a memory card unveiled in 1997 by SanDisk and Siemens, based on NAND flash memory.
-
Garbage Collection: Understanding Foreground vs. Background GC and Other Related Elements
Kent Smith, Sr. Director of Corporate Marketing, SandForce. Flash Memory Summit, August 2011, Santa Clara, CA.
Understanding garbage collection (GC):
• In flash memory, GC is the process of relocating existing data, deleting stale data, and creating empty blocks for new data.
• All SSDs have some form of GC; it is not an optional feature.
• NAND flash cannot directly overwrite a page with data; the page has to be erased first, and a full block of pages has to be erased, not just one page.
• GC starts after each page has been written once: valid data is consolidated and written into new blocks, while invalid (replaced) data is ignored and gets erased.
• Wear leveling mainly occurs during GC. (Source: Wikipedia)
How the OS deletes data:
• The OS tracks what files are present and which logical blocks are holding the files. SSDs do not understand the file structure of an OS; they only track the valid data locations reported by the OS.
• When the OS deletes a file, it marks the file's space in its logical table as free; it does not tell the drive anything.
• When the OS writes a new file to the drive, it will eventually write to the previously used spaces in the table. An SSD only knows data is no longer needed when the OS tells it to write to an address that already contains data.
Understanding the TRIM command:
• The OS sends a TRIM command at the point of file deletion, and the SSD marks the affected data as no longer valid. …
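A minimal sketch of the mechanism described above, using a toy mapping table in Python (illustrative only, not any vendor's FTL): the drive tracks only which physical pages hold the latest copy of each logical address, so a filesystem-level delete changes nothing on the device until a TRIM arrives.

```python
class TinyFTL:
    """Toy flash translation layer: tracks which physical pages hold live data."""

    def __init__(self):
        self.l2p = {}        # logical block address -> physical page
        self.valid = set()   # physical pages GC must copy before erasing their block
        self.next_page = 0

    def write(self, lba):
        if lba in self.l2p:                  # overwrite: the old copy becomes stale
            self.valid.discard(self.l2p[lba])
        self.l2p[lba] = self.next_page
        self.valid.add(self.next_page)
        self.next_page += 1

    def trim(self, lba):
        """Host tells the device this LBA no longer holds useful data."""
        page = self.l2p.pop(lba, None)
        if page is not None:
            self.valid.discard(page)

ftl = TinyFTL()
file_lbas = range(100)
for lba in file_lbas:                        # write a 100-block file
    ftl.write(lba)

# The filesystem deletes the file. Without TRIM the drive still sees 100 valid
# pages and will dutifully copy them around during garbage collection.
print("valid pages before TRIM:", len(ftl.valid))   # -> 100

for lba in file_lbas:                        # TRIM/discard for the freed LBAs
    ftl.trim(lba)
print("valid pages after TRIM: ", len(ftl.valid))   # -> 0
```

On Linux, `fstrim` or the `discard` mount option is what triggers this path on real devices.
-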
SATA 6Gb/S 3I+1 SSD Hybrid Pcie Quick Installation Guide
Introduction: the SATA 6Gb/s 3i+1 SSD Hybrid PCIe is a high-performance SATA host adapter which adds three Serial ATA 6Gb/s channels and one SATA SSD socket to a PCIe-equipped computer, helping users save space inside the case.
Features and benefits:
• Compliant with the Serial ATA specification, revision 3.0
• Built-in socket for a 2.5" SATA SSD for space saving and superior drive performance
• Hybrid (HyperDuo) enables 80% of SSD performance at one-third the cost
• Hardware RAID offloads the host CPU for maximum performance; supports RAID 0, 1 and 10
• Supports Port Multiplier FIS-based switching and Native Command Queuing (NCQ)
• Supports TRIM to extend the life of SSDs for maximum durability
System requirements: a desktop PC with an available 4-lane (or wider) PCIe slot; Windows 8 (32-/64-bit) / 7 (32-/64-bit) / Vista (32-/64-bit) / XP (32-/64-bit) / Server 2003 & 2008 (32-/64-bit) / Server 2008 R2.
Package contents: SATA 6Gb/s 3i+1 SSD Hybrid PCIe card, two SATA data cables, four screws, driver CD, quick installation guide.
Layout (Figure 1): HDD LED pins (+/-, front chassis LED), three SATA internal connectors, screw holes, SSD slot.
Application (Figure 2): three HDDs or SSDs connected to the internal SATA channels and one SSD installed in the SSD slot.
Hardware installation: install the 2.5" SATA SSD into the SATA 6Gb/s 3i+1 SSD Hybrid PCIe before installing the card. 1. Simply insert the SATA SSD into the end of the SSD slot. …
-
Things You Should Know About Solid State Storage
Snippets from SNIA Tutorials and other Giblets. Marty Czekalski, President, SCSI Trade Association; Sr. Staff Program Manager, Seagate Technology.
SNIA legal notice: the material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced in their entirety without modification, and the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. Neither the author nor the presenter is an attorney, and nothing in this presentation is intended to be, or should be construed as, legal advice or an opinion of counsel; if you need legal advice or a legal opinion, please contact your attorney. The information presented herein represents the author's personal opinion and current understanding of the relevant issues involved. The author, the presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. No warranties, express or implied; use at your own risk. © 2013 Storage Networking Industry Association. All rights reserved.
Abstract: this session will appeal to data center managers, development managers, and those seeking an overview of solid state storage. It is comprised of excerpts from SNIA Solid State tutorials and other sources.
-
Storage Solutions for Embedded Applications
White Paper, December 2008. Brian Skerry, Sr. Software Architect, Intel Corporation.
Executive summary: any embedded system needs reliable access to storage. This may be provided by a hard disk drive or by access to a remote storage device; alternatively, there are many flash solutions available on the market today. When considering flash, there are a number of important criteria, with capacity, cost, and reliability foremost among them. This paper considers hardware, software, and other factors in choosing a storage solution. Wear leveling is an important factor affecting the expected lifetime of any flash solution, and it can be implemented in a number of ways; depending on the choices made, software changes may be necessary. Solid state drives offer the most straightforward replacement option for hard disk drives, but may not be cost-effective for some applications; the Intel X25-M Mainstream SATA Solid State Drive is one solution suitable for a high-performance environment. For smaller storage requirements, CompactFlash and USB flash are very attractive. Downward pressure continues to be applied to flash solutions, and there are a number of new technologies on the horizon. As a result of reading this paper, the reader will be able to take into consideration all the relevant factors in choosing a storage solution for an embedded system. Intel architecture can benefit the embedded system designer, as they can be assured of widespread …
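As one illustration of the implementation choices mentioned above, here is a minimal dynamic wear-leveling sketch in Python (not Intel's algorithm; the block count and traffic are made up): new data is simply steered to the free block with the lowest erase count, which keeps erase wear roughly even across the device.

```python
import heapq

erase_counts = {blk: 0 for blk in range(8)}      # 8 toy erase blocks
free_heap = [(0, blk) for blk in erase_counts]   # (erase count, block id)
heapq.heapify(free_heap)

def allocate_block():
    """Hand out the least-worn free block for the next batch of writes."""
    _, blk = heapq.heappop(free_heap)
    return blk

def erase_and_release(blk):
    """Block was filled and reclaimed: count the erase and return it to the pool."""
    erase_counts[blk] += 1
    heapq.heappush(free_heap, (erase_counts[blk], blk))

for _ in range(100):                             # steady stream of fill/reclaim cycles
    blk = allocate_block()
    erase_and_release(blk)

print(erase_counts)   # erase counts stay within one of each other across all blocks
```
-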
ParaFS: A Log-Structured File System to Exploit the Internal Parallelism of Flash Devices
Jiacheng Zhang, Jiwu Shu, Youyou Lu, Tsinghua University.
Outline: background and motivation; ParaFS design; evaluation; conclusion.
Solid state drives and internal parallelism: SSDs expose parallelism at the channel, chip, die, and plane levels. Chips in one package share the same 8/16-bit I/O bus but have separate chip-enable (CE) and ready/busy (R/B) control signals; each die has one internal R/B signal; and each plane contains thousands of flash blocks and one data register. This internal parallelism is what delivers high bandwidth. [Figure: channel, chip, die, and plane organization between the host FTL and the flash array.]
Flash file systems: running a log-structured file system on top of an FTL duplicates functions (space allocation, garbage collection) and suffers semantic isolation caused by the FTL abstraction and the block I/O interface (the "log on log" problem).
Observation, F2FS vs. Ext4 under heavy write traffic (YCSB with 10 million random read and update operations on 16 GB of flash space and 24 GB of write traffic): F2FS shows lower GC efficiency and poorer performance than Ext4 on SSDs. [Charts: GC efficiency (%), normalized throughput, and number of recycled blocks versus the number of channels, 1 to 32.]
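A tiny Python sketch of the channel-level parallelism idea (illustrative only, not ParaFS code; the channel count and burst size are arbitrary): striping consecutive page programs round-robin across channels lets independent channels work concurrently.

```python
NUM_CHANNELS = 4

def channel_of(physical_page):
    # Round-robin (striped) placement: consecutive pages land on different channels.
    return physical_page % NUM_CHANNELS

per_channel = {c: [] for c in range(NUM_CHANNELS)}
for page in range(32):                       # a burst of 32 consecutive page programs
    per_channel[channel_of(page)].append(page)

# Each channel gets 8 of the 32 programs, so the burst finishes in roughly the
# time of 8 serialized programs instead of 32 (ignoring bus contention).
for ch, pages in per_channel.items():
    print(f"channel {ch}: {len(pages)} programs -> pages {pages}")
```
-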
Write Amplification Analysis in Flash-Based Solid State Drives
Xiao-Yu Hu, Evangelos Eleftheriou, Robert Haas, Ilias Iliadis, Roman Pletka. IBM Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland. {xhu,ele,rha,ili,rap}@zurich.ibm.com
Abstract: write amplification is a critical factor limiting the random write performance and write endurance in storage devices based on NAND-flash memories such as solid-state drives (SSDs). The impact of garbage collection on write amplification is influenced by the level of over-provisioning and the choice of reclaiming policy. In this paper, we present a novel probabilistic model of write amplification for log-structured flash-based SSDs. Specifically, we quantify the impact of over-provisioning on write amplification analytically and by simulation, assuming workloads of uniformly-distributed random short writes. Moreover, we propose modified versions of the greedy garbage-collection reclaiming policy and compare their performance.
Introduction (excerpt): …computer architecture, ranging from notebooks to enterprise storage systems. These devices provide random I/O performance and access latency that are orders of magnitude better than those of rotating hard-disk drives (HDDs). Moreover, SSDs significantly reduce power consumption and dramatically improve robustness and shock resistance thanks to the absence of moving parts. NAND-flash memories have unique characteristics that pose challenges to SSD system design, especially the aspects of random write performance and write endurance. They are organized in terms of blocks, each block consisting of a fixed number of pages, typically 64 pages of 4 KiB each. A block is the elementary unit for erase operations, …
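For intuition about the setting the paper analyzes, here is a small Monte Carlo sketch in Python (assumed parameters, greedy victim selection, uniform random overwrites; it is not the authors' model or code) that measures write amplification at a few over-provisioning levels.

```python
import random

PAGES_PER_BLOCK = 64

def simulate(user_blocks=128, op_fraction=0.15, host_writes=500_000, seed=1):
    """Greedy GC under uniform random overwrites; returns measured write amplification."""
    random.seed(seed)
    total_blocks = int(user_blocks * (1 + op_fraction))
    user_pages = user_blocks * PAGES_PER_BLOCK
    valid = [set() for _ in range(total_blocks)]   # live logical pages per physical block
    where = {}                                     # logical page -> block holding its copy
    free = list(range(1, total_blocks))
    active, fill = 0, 0
    flash_writes = 0

    def open_new_block():
        nonlocal active, fill, flash_writes
        if free:
            active, fill = free.pop(), 0
            return
        # Out of clean blocks: greedily reclaim the block with the fewest valid
        # pages, compacting its survivors in place (each survivor costs a flash write).
        victim = min(range(total_blocks), key=lambda b: len(valid[b]))
        flash_writes += len(valid[victim])
        active, fill = victim, len(valid[victim])

    for _ in range(host_writes):
        lpn = random.randrange(user_pages)         # uniform random overwrite
        if fill == PAGES_PER_BLOCK:
            open_new_block()
        if lpn in where:
            valid[where[lpn]].discard(lpn)
        valid[active].add(lpn)
        where[lpn] = active
        fill += 1
        flash_writes += 1

    return flash_writes / host_writes

for op in (0.07, 0.15, 0.30):
    print(f"over-provisioning {op:.0%}: write amplification ~ {simulate(op_fraction=op):.2f}")
```

As spare capacity grows, reclaimed blocks contain fewer valid pages and the measured amplification drops, which is the trade-off the paper quantifies analytically.
-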
Why Data Retention Is Becoming More Critical in Automotive Applications
White Paper: Understanding and Improving Managed NAND Flash Memory for Higher Data Retention and Extended Product Life. Christine Lee, Kevin Hsu, Scott Harlin, KIOXIA America, Inc.
Assisted and self-driving vehicles are fully loaded with electronics that support the infrastructure within them. They have become mobile data centers that require an immense amount of computing power to capture, process, and analyze data near-instantaneously from a myriad of sensors, recorders, algorithms, and external connections. A massive amount of this data is either stored locally or uploaded to cloud storage to be leveraged and transformed into real-time intelligence and value. Data storage has therefore become a critical part of automotive design, placing a premium on high data retention and continual data integrity. The automotive environment creates unique challenges and presents a very different scenario from computing equipment in air-conditioned server rooms under controlled temperatures. Because of the extreme temperatures that can affect the NAND flash storage used within vehicles, there are information technology (IT) considerations that require different design approaches. Understanding how data wears out, how temperature and NAND flash memory characteristics can affect data retention and product life, and what practices can be used to improve them are the focuses of this paper.
'Under the hood' in automotive data storage: depending on the source, assisted and self-driving vehicles generate terabytes (TB) of data daily. One prediction forecasts that between 5 TB and 20 TB of data will be consumed per day per vehicle, which is overwhelmingly more data than the average person consumes on a smartphone.
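To illustrate how strongly temperature enters the retention picture, here is a small Python sketch using the Arrhenius acceleration model commonly applied to flash retention (illustrative only: the 1.1 eV activation energy and the temperatures are assumptions, not KIOXIA figures).

```python
import math

BOLTZMANN_EV_PER_K = 8.617e-5

def acceleration_factor(t_use_c, t_stress_c, ea_ev=1.1):
    """Arrhenius ratio of charge-loss rates between two temperatures (Celsius)."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV_PER_K) * (1.0 / t_use - 1.0 / t_stress))

# Retention that would last 10 years at a mild 40 C shrinks sharply at a
# sustained 85 C under-hood temperature (assumed activation energy: 1.1 eV).
years_at_40c = 10
af = acceleration_factor(40, 85)
print(f"acceleration factor 40C -> 85C: about {af:.0f}x")
print(f"equivalent retention at 85C: about {years_at_40c / af * 365:.0f} days")
```
-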
Getting the Most out of SSD: Sometimes, Less Is More
Bruce Moxon, Chief Solutions Architect, STEC. Flash Memory Summit 2012, Santa Clara, CA.
Overview: a quick solid state backgrounder (NAND flash, log-structured file systems, GC, and overprovisioning: less really is more!); benchmarking (operationally representative testing); applications (why flash, why now?); caching (less *is* more); optimizing the stack and changing application architectures.
Solid state performance characteristics (general; YMMV depending on device architecture, interface (SATA, SAS, PCIe: a 2-4x performance range), transfer sizes, and queue depths/concurrency):
• HDD (SAS): sequential read and write about 200 MB/s; random read and write about 200 IOPS.
• SSD/PCIe: sequential read 0.5-1.5 GB/s and write 0.3-1 GB/s; 4K random read 60-200K IOPS and write 15-40K IOPS, roughly 2-8x HDD sequential and 100-1000x HDD random.
• Cost differential: $0.50-$1.50/GB for SAS HDD versus $2-$12/GB for SSD/PCIe.
• Response time: HDD about 8 ms random read and 0.5 ms write; SSD about 60 us read and 20 us write.
• Sweet spot: high random IOPS (especially reads) and low latency.
Solid state storage fundamentals (everything I needed to know I learned at FMS):
• Data is read and written in pages (typically 4-8 KB) but erased in multi-page blocks (e.g., 128 pages).
• Data can only be written (programmed) into a previously erased block; there is no overwrite in place.
• Background garbage collection (GC) copies valid pages to squeeze out deleted pages and make room for new data; OS/filesystem integration comes via TRIM.
• Additional write amplification can occur in support of wear leveling and GC.
• Flash memory can only be programmed/erased (P/E) a limited number of times (endurance), and performance degrades over device lifetime. …
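One concrete reading of "less is more" is the over-provisioning arithmetic, sketched below in Python with made-up capacities (not STEC product figures): exposing less user capacity from the same raw NAND leaves more spare area for garbage collection to work with.

```python
# Raw NAND is built in binary GiB while user capacity is sold in decimal GB,
# so even a drive that exposes its full "size" keeps roughly 7% spare area.
RAW_NAND_GIB = 512                       # physical NAND on the board (assumed)
raw_bytes = RAW_NAND_GIB * 2**30

for user_gb in (512, 480, 400):          # progressively "less" exposed capacity
    user_bytes = user_gb * 10**9
    op = (raw_bytes - user_bytes) / user_bytes
    print(f"{user_gb} GB exposed from {RAW_NAND_GIB} GiB raw -> {op:5.1%} over-provisioning")
```
-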
Improving Endurance with Garbage Collection, TRIM and Wear Leveling
Application Note AN07-0819-01, Solid State Storage and Memory.
Introduction: prior to the advent of solid state drives (SSDs), hard disk drives (HDDs) were the primary storage medium. HDDs use heads and platters to store data and can be written to and erased an almost unlimited number of times, assuming no failure of mechanical or circuit components (one of the main drawbacks of HDDs for industrial applications). SSDs, on the other hand, have no moving parts and are therefore much more durable; however, they support only a finite number of program/erase (P/E) operations. For single-level cell (SLC) NAND, each cell can be programmed/erased 60,000 to 100,000 times; for (industrial-grade) multi-level cell (MLC) and triple-level cell (TLC) NAND, each cell can be programmed/erased 3,000 to 10,000 times. Wear leveling and other flash management techniques, while consuming some of these cycles themselves, protect the NAND and the data on the SSD; this overhead is part of the write amplification factor (WAF), and the resulting endurance figure is typically expressed as Terabytes Written (TBW). These conventional NAND flash types use "floating gate" technology: during write operations, a tunnel current passes through the oxide layer of the floating gate in the NAND cell (Figure 1: NAND flash floating gate). This electron tunneling causes the oxide layer to degrade with each charged/trapped electron; the more a cell is programmed (written to or erased), the faster it degrades, eventually causing the cell block to wear out, at which point it can no longer be programmed and becomes read-only.
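Tying these terms together, a rough endurance estimate can be sketched as TBW ~ capacity x P/E cycles / WAF (illustrative numbers, not figures from this application note).

```python
def terabytes_written(capacity_gb, pe_cycles, waf):
    """Approximate drive endurance: total host terabytes before the NAND wears out."""
    return capacity_gb * pe_cycles / waf / 1000.0   # GB of host writes -> TB

# Example with assumed values: a 240 GB drive, 3,000-cycle MLC, and a WAF of 2.5.
# 240 * 3000 / 2.5 = 288,000 GB of host writes, i.e. 288 TBW.
print(f"{terabytes_written(240, 3000, 2.5):,.0f} TBW")
```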