The Cache and Codec Model for Storing and Manipulating Data


Van Bui and Martha A. Kim, Columbia University

This article describes an analytical model of systems that store and manipulate data. The cache and codec model captures the interplay between two central aspects of such systems: information representation and data-processing efficiency. This article describes the model and explores design options for systems that incorporate data encryption and compression. In each case study, the authors raise several research implications and directions for future research.

The recent explosion of data has thrown traditional data management questions into stark relief. As the data growth rate outpaces technology scaling, computer systems designers will need to become increasingly strategic about what data they choose to store, where they store it, and in what representation (for example, compressed, encrypted, processed, or raw). Although modern computer systems span a range of scales and applications, at their core they all store and transform information. The cache and codec model that we developed captures, at a high abstraction level, the ways a computer system stores and transforms information. This simple model of how data flows through a system, potentially manipulated or transformed along the way, can help researchers and designers confront and explore a range of design options.

The cache and codec model represents the system as a set of interacting components, each of which has two properties. The first is the cost per bit of data processed. For a storage component, this might be the time or energy per bit read or written. For encryption, this might be the time or energy per bit encrypted or decrypted. This term is directly linked to how a particular function is implemented. For example, DRAM requires more time and energy per bit read than static RAM (SRAM). Similarly, software encryption is slower and more power hungry than a hardware implementation.

The second property is bits of information per bit of data. Data's value is in the information it contains. Depending on how the information is encoded, the data might use more or fewer bits to represent it. A storage component that does not manipulate the data or change the representation would not change this term. In contrast, a lossless compressor would maintain the information while shrinking the data, thereby increasing this term.

Ultimately, what matters is the cost per bit of information produced, that is,

$\frac{\text{cost}}{\text{bit}_{\text{info}}} = \frac{\text{cost}}{\text{bit}_{\text{data}}} \times \frac{\text{bit}_{\text{data}}}{\text{bit}_{\text{info}}}$
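As a quick illustration of this identity (the numbers here are hypothetical, not figures from the article): if a memory costs 2 nJ per bit of data read, and lossless 4x compression packs each bit of information into 0.25 bits of stored data, then

$\frac{\text{cost}}{\text{bit}_{\text{info}}} = 2~\frac{\text{nJ}}{\text{bit}_{\text{data}}} \times 0.25~\frac{\text{bit}_{\text{data}}}{\text{bit}_{\text{info}}} = 0.5~\frac{\text{nJ}}{\text{bit}_{\text{info}}}$

Reading from the denser representation costs a quarter as much per useful bit, before accounting for the codec's own cost per bit of data it processes.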
The cache and codec model captures the interplay between the data representation and the characteristics of the components that manipulate it, thereby enabling high-level design-space explorations for systems that must manipulate large volumes of data efficiently.

In this article, we describe the cache and codec model, along with two case studies that use the model to explore the properties of encrypted and compressed storage systems.

Cache and codec model

We model a simple yet general class of computer systems consisting of a processor that draws data from a storage hierarchy. This storage hierarchy consists of caches that store information and, optionally, codecs that re-encode data along the way.

Because the storage system is often a hierarchy, the model is an extension of the classic expression for the expected access time to the ith level of a cache hierarchy:

$E_i = k \times C_i + R_i \times E_{i+1} \quad (1)$

In this expression, $k$ is the total number of bits accessed, and $C_i$ is the access cost per bit at level $i$. There are two key conceptual differences with respect to the classic expression. First, each level of the hierarchy consists of either a cache (or other storage module) or a codec. In either case, the first term in Equation 1 captures the cost of accessing that module, whether it is storing or re-encoding data. Second, we have substituted $R_i$ for the traditional miss rate in Equation 1. $R_i$ is the "ratio" of a module, which for both caches and codecs captures the module's influence on the volume of data. In the case of a cache module, $R_i$ is simply the miss rate, indicating that for every $n$ bits requested from a cache, it is expected to request $R_i \times n$ bits from the lower levels of the hierarchy. For a 4x compression module, the R value would be either 4 or 0.25, depending on the direction of the compression.

As a concrete example, consider a simple two-level hierarchy. In this scenario, Equation 1 expands to

$E_1 = k \times C_1 + k \times R_1 \times C_2$

The first term, $k \times C_1$, is the access cost to the first level; the second term, $k \times R_1 \times C_2$, is the access cost at level 2. The access cost at a particular level, which we will call $A_i$, depends on the volume of data drawn from the module at the ith level and the cost of doing so:

$A_i = V_i \times C_i \quad (2)$

where $V_i$ is the expected volume of data drawn from a module in bits. The volume of data at each level is a function of $k$ and the R values of the upper-level module(s) (for example, the number of accesses to DRAM depends on what is happening in the last-level cache [LLC] and above):

$V_i = V_{i-1} \times R_{i-1} \quad (3)$

The volume of data being accessed at the first level, $V_0$, is simply $k$, which is an input to the model. In addition to $k$, the other inputs to the model are the order of the modules and an $(R_i, C_i)$ pair for each module.

Figure 1 illustrates a slightly more complex example modeling memory compression. This hierarchy contains three levels of cache, backed by compressed memory. Thus, between the LLC and the memory sits a compressor/decompressor that compresses incoming data on writes and decompresses outgoing data on reads. The key to using the model is reasoning about appropriate R and C values for each module. The compressor modeled in this example compresses data by a factor of 4, so R = 0.25. Similarly, in this example we model an L2 with a 20-percent miss rate, so for that module R = 0.20.

Using Equation 3, we can calculate the expected volume of data drawn from each module, as illustrated by the dotted red lines in Figure 1. Then, using those volume estimates and the efficiency parameter for each module, C, we can calculate the cost contribution for each module according to Equation 2, shown with the dashed lines in Figure 1. Although the case studies that follow center on big data, the model is applicable to various other systems where data gets manipulated and stored in some form or fashion.

[Figure 1. Illustration of how the cache and codec model captures the primary features of a memory compression system, namely the compression's impact on the volume of data stored in memory and its cost relative to the cost of the system's other components. The modeled hierarchy is L1 (R = 0.10, C = 5), L2 (R = 0.20, C = 20), L3 (R = 0.25, C = 50), compressor/decompressor (R = 0.25, C = 30), and memory (R = 0.00, C = 100); for k bits accessed, the volumes drawn from the modules are k, 0.1k, 0.02k, 0.005k, and 0.00125k, and the cost contributions are A = 5k, 2k, 1k, 0.15k, and 0.125k.]
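To make the arithmetic of Equations 2 and 3 concrete, the following is a minimal Python sketch of the model as we read it; the function and variable names are our own, and the module order and (R, C) values are those shown in Figure 1.

```python
# Minimal sketch of the cache and codec model (Equations 2 and 3).
# Each module is a (name, R, C) tuple: R is its ratio (miss rate or
# compression factor) and C is its cost per bit of data processed.

def cache_codec_cost(k, modules):
    """Return per-module volumes V_i and cost contributions A_i = V_i * C_i."""
    results = []
    volume = k  # V_0 = k, the bits requested by the processor
    for name, ratio, cost_per_bit in modules:
        contribution = volume * cost_per_bit  # Equation 2: A_i = V_i * C_i
        results.append((name, volume, contribution))
        volume *= ratio                       # Equation 3: V_{i+1} = V_i * R_i
    return results

# Parameters from Figure 1: three cache levels, a 4x compressor/decompressor
# between the LLC and memory, and compressed memory.
modules = [
    ("L1",     0.10, 5),
    ("L2",     0.20, 20),
    ("L3",     0.25, 50),
    ("C/D",    0.25, 30),
    ("Memory", 0.00, 100),
]

for name, v, a in cache_codec_cost(1.0, modules):
    print(f"{name:7s} V = {v:g}k  A = {a:g}k")
# Prints A = 5k, 2k, 1k, 0.15k, and 0.125k, matching Figure 1.
```

Summing the per-module contributions gives the hierarchy's total expected cost (8.275k with these parameters), which is the same quantity Equation 1 yields when expanded across the full hierarchy.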
Case studies

We present two case studies of the cache and codec model for encrypted and compressed storage systems, demonstrating the sort of design-space explorations that it supports.

Data encryption

Big data is a rich target for hackers because of the sheer amount of information stored on those servers, with high-profile data breaches of customer records or unauthorized accesses regularly reported. Even RSA, a company with expertise in data security technology, is not immune to these breaches.1 Applying data encryption to big-data storage systems could be a second line of attack for security. This not only is a deterrent to hackers, but also can minimize the damage should a breach occur.

... percent on Intel Dunnington.2 The page fault rates were estimated to be an average of typical virtual-memory miss rates on systems with page sizes ranging from 4 to 64 Kbytes.3 The read and write energy efficiency values (C parameters) for the LLC are from Chang et al.,4 and the values for memory and disk are from Ramos and Bianchini.5

LLC encryption. For each LLC encryption scenario, we consider three cache technologies. In the ideal case, the LLC will have both high density and low power. SRAM is the more traditional cache implementation with fast access times, but low density relative to the other technologies and high leakage current. Spin-transfer torque magnetic RAM (STT-RAM) is an alternative technology with higher density, but it has high write latency and write energy consumption. A third option is embedded DRAM (eDRAM), which also features high density and low leakage. A drawback of eDRAM is that it requires refresh operations, which are a major source of power dissipation. Each of these LLC technologies has tradeoffs with respect to density and power.

Because the LLC is already power constrained, adding a data encryption module to the system could further increase total system energy costs. We use our cache and codec model to explore a system with an encryption