Manufacturing Equipment Technologies for Hard Disks

Total Pages: 16

File Type: PDF, Size: 1,020 KB

Manufacturing Equipment Technologies for Hard Disks’ Challenge of Physical Limits

Kyoichi Mori
Brian Rattray
Yuichi Matsui, Dr. Eng.

Hitachi Review Vol. 60 (2011), No. 5

OVERVIEW: To meet the world’s growing demands for volume information, not just from computers but from digitalization, video, etc., the HDD must continually increase its capacity while meeting expectations for reduced power consumption and green IT. Up until now the HDD has undergone many innovative technological developments to achieve higher recording densities, and to continue this increase, innovative new technology is again required and is currently being developed at a brisk pace. The key components for areal density improvements, the disk and the head, require high levels of performance and reliability from production and inspection equipment for efficient manufacturing and stable quality assurance. To meet this demand, high-frequency electronics, servo positioning, and optical inspection technologies are being developed and equipment provided. Hitachi High-Technologies Corporation is doing its part to meet market needs for increased production and the adoption of next-generation technology by developing the technology and providing disk and head manufacturing/inspection equipment (see Fig. 1).

INTRODUCTION

HDDs (hard disk drives) have long relied on the computer market for growth, but in recent years there has been a shift towards cloud computing, consumer electronics, etc., along with a rapid expansion of data storage applications (see Fig. 2). In line with the increasing volumes of information, the HDD requires increases in both capacity and density. Furthermore, in recent years, data centers controlling massive amounts of information are said to consume hundreds of megawatts of power. From an environmental perspective, low-power-consumption IT (information technology) must be promoted and is being actively pursued in Japan through the “Green IT Initiative.” In this regard, higher efficiency storage, namely higher density HDDs, will play a major role.

To increase density, the performance and quality of the HDD’s key components, the disks (media) and heads, have the most effect. Therefore, further technological advances in manufacturing and inspection equipment are sought to support these.

Increased areal density has caused bit size to get ever smaller, resulting in recent areal densities of something like 650 Gbit/in², or the equivalent of 1,000 bits of data in a 1-micron square. To cope with this, Hitachi High-Technologies Corporation has promoted the development of high-precision head positioning technology along with high-frequency electronics for the smaller signals.

Fig. 1—HDD Manufacturing and Inspection Equipment Products (disk testers, cleaning machines, quasi testers, head testers, optical inspection, imprint, and write pole profilers). The increasing densities of HDDs place high demands on the reliability and quality of the manufacturing and inspection equipment used in their production. These Hitachi High-Technologies Corporation products were designed to meet these demands.

TECHNOLOGY FOR INCREASING HDD DENSITY

HDD Technology Transitions

HDDs have a long history, dating back to the world’s first HDD, announced by International Business Machines Corporation (IBM) in 1956. Since then, recording density has grown massively, increasing by eight orders of magnitude. Maintaining the fast and continuous pace of areal density improvement over the years has taken many technological innovations (see Fig. 3). In the 1990s the read method changed from the electromagnetic induction of thin film heads to a magnetoresistive type, and a decade or so later traditional longitudinal recording was surpassed by perpendicular recording.

Fig. 2—Storage Technology Adoption (transfer speed versus volume for HDD, flash, and consumer applications; flash is plotted per chip, HDD per platter). Whereas past growth in the market for HDDs has been based primarily on their use in computers, new applications such as the cloud and consumer electronics are growing rapidly.

Fig. 3—HDD Areal Density and Technology Transitions (from the first HDD “RAMAC” through coated disks and thin film heads, AMR heads, GMR/TMR/CPP heads, AFC disks, and perpendicular recording; growth of roughly 30% to 60% yearly. AMR: anisotropic magnetoresistive, AFC: anti-ferromagnetically coupled, GMR: giant magnetoresistive, TMR: tunneling magnetoresistive, CPP: current perpendicular to plane). To satisfy market demand, HDDs have achieved a continuous increase in their areal densities over a span of more than 50 years thanks to numerous technological innovations.

Manufacturing and Inspection Equipment Technology

Inspection equipment plays an important role in maintaining highly efficient manufacturing and assuring stable quality. Furthermore, recent areal density increases have been accompanied by miniaturization of the magnetic domains and decreases in the head-disk spacing, warranting ever more sophisticated inspection equipment year after year.

Currently, track pitch is being squeezed down to around 70 nm, requiring high-precision head positioning servo technology for inspection equipment. Reading the test signals correctly requires track mis-registration of the head of less than 3% of track pitch. Fig. 4 shows the signal on a disk, read as the head is repeatedly scanned sideways across it, plotting the signal strength relative to the distance from the track center. From this we determine that positioning repeatability is of the order of 1 nm, and thus highly stable.

Fig. 4—Head Positioning and Repeatability (track profile repeatability measured 10 times; amplitude in µV versus cross-track position in µm). Head inspection requires a relative positioning accuracy between disk and head of 3% or less of track pitch. The graph shows the disk write signal read back from the head and indicates positioning repeatability of about 1 nm.

Technical Trends for High Density

Increasing areal density beyond 1 Tbit/in² is said to be extremely difficult with existing technology. In order for the recorded bit to be smaller, the grain size of the recording layer must be smaller, requiring an improvement in SNR (signal-to-noise ratio) to read. Data, though, would then be vulnerable to decay from the superparamagnetic effect’s thermal instability, which in turn would require a high Ku (anisotropy energy) recording layer; but then the small magnetic field head could no longer write to it, and this vicious circle is the so-called “tri-lemma” in magnetic recording. Several innovations are proposed to break through this barrier (see Fig. 5).

(1) Shingled recording
Technically there are no great hurdles, so this is said to be close to implementation. High Ku recording layers are written with large magnetic field heads, each track overwriting part of the previous data so that the narrow remnant forms the track. The drawback is that conventional random access writing will not be possible, which must be accounted for with new formatting methods and additional memory.

(2) Energy assisted recording
This is primarily an innovation of the head: when writing a bit of data, additional energy is applied to the area, enabling the use of a low strength field to switch polarization. Thus, small bits can be recorded in high Ku recording layers. Assist energy solutions in the form of laser light utilizing near-field optics, or microwaves from high-frequency magnetic field oscillators, are being considered. However, focusing the energy into a minute spot is something that must be built into a new head.

(3) Bit patterned media
An innovation of the disk sees a move away from the traditional continuous granular recording layer to one of isolated magnetic domains, with each domain representing a single recorded bit of data. The mutual interference between adjacent bits is minimized so, even for low Ku recording layers, SNR and thermal instability effects are improved. However, bit location/synchronization, not to mention high volume production of the new disk manufacturing processes, still pose problems.

Fig. 5—Technologies for Increasing HDD Density (shingled recording, energy assist, and bit patterned media). While achieving areal densities in excess of 1 Tbit/in² is difficult using conventional technologies, Hitachi has come up with a number of innovative technologies which it is developing to address this challenge.

These innovative technologies are being considered, but achieving an areal density of 4 Tbit/in² will likely require a combination of both energy assisted recording and bit patterned media. Bit patterned media has servo information pre-embedded on the disk and requires positioning technology to account for eccentricity and otherwise implement track following. To cope with these technology trends, the following equipment technology is being developed:
(a) Non-write-read certify inspection technology
(b) Head inspection technology for early process stages
(c) Patterned media volume manufacturing technology

OPTICAL MEDIA TESTING EQUIPMENT

The last steps of the disk manufacturing process are a glide test, checking the surface for asperities to assure the fly-ability of the head, and a certify inspection to examine magnetic characteristics and defects that influence reading and writing.

Fig. 6—Optics Configuration (laser, mirror, scattering and reflection detectors, disk on spindle motor). The machine uses laser light to inspect the disk. This can inspect the entire disk more quickly than the previous method of inspection using heads.
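As a quick sanity check, the two headline numbers above (650 Gbit/in² being roughly 1,000 bits in a 1-micron square, and a 3% mis-registration budget at a 70 nm track pitch) can be reproduced with a few lines of arithmetic. This is only an illustrative unit conversion, not part of the original article:

```python
# Sanity-check the areal density and servo tolerance figures quoted above.

IN_PER_UM = 1 / 25400  # 1 inch = 25,400 micrometres

# 650 Gbit/in^2 expressed as bits in a 1-micron square
bits_per_um2 = 650e9 * (IN_PER_UM ** 2)
print(f"{bits_per_um2:.0f} bits per square micron")  # ~1,007, i.e. "1,000 bits"

# Track mis-registration budget: 3% of a 70 nm track pitch
track_pitch_nm = 70.0
tmr_budget_nm = 0.03 * track_pitch_nm
print(f"TMR budget: {tmr_budget_nm:.1f} nm")  # 2.1 nm
```

The computed 2.1 nm budget also shows why the roughly 1 nm positioning repeatability reported for Fig. 4 is considered sufficient.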
Recommended publications
  • Quantum DXi4500 User's Guide
    Quantum DXi4500 (DXi-Series) User’s Guide, 6-66904-01 Rev C, August 2010, Product of USA. Quantum Corporation provides this publication “as is” without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability or fitness for a particular purpose. Quantum Corporation may revise this publication from time to time without notice. COPYRIGHT STATEMENT © 2010 Quantum Corporation. All rights reserved. Your right to copy this manual is limited by copyright law. Making copies or adaptations without prior written authorization of Quantum Corporation is prohibited by law and constitutes a punishable violation of the law. TRADEMARK STATEMENT Quantum, the Quantum logo, DLT, DLTtape, the DLTtape logo, Scalar, and StorNext are registered trademarks of Quantum Corporation, registered in the U.S. and other countries. Backup. Recovery. Archive. It’s What We Do., the DLT logo, DLTSage, DXi, DXi-Series, Dynamic Powerdown, FastSense, FlexLink, GoVault, MediaShield, Optyon, Pocket-sized. Well-armored, SDLT, SiteCare, SmartVerify, StorageCare, Super DLTtape, SuperLoader, and Vision are trademarks of Quantum. LTO and Ultrium are trademarks of HP, IBM, and Quantum in the U.S. and other countries. All other trademarks are the property of their respective companies. Specifications are subject to change without notice. Contents: Preface; Chapter 1, DXi4500 System Description (Overview; Features and Benefits; Data Reduction; Data Deduplication; Compression; Space Reclamation; Remote Replication; DXi4500 System).
  • Storage Systems and Technologies - Jai Menon and Clodoaldo Barrera
    INFORMATION TECHNOLOGY AND COMMUNICATIONS RESOURCES FOR SUSTAINABLE DEVELOPMENT - Storage Systems and Technologies - Jai Menon and Clodoaldo Barrera. STORAGE SYSTEMS AND TECHNOLOGIES. Jai Menon, IBM Academy of Technology, San Jose, CA, USA; Clodoaldo Barrera, IBM Systems Group, San Jose, CA, USA. Keywords: Storage Systems, Hard disk drives, Tape Drives, Optical Drives, RAID, Storage Area Networks, Backup, Archive, Network Attached Storage, Copy Services, Disk Caching, Fiber Channel, Storage Switch, Storage Controllers, Disk subsystems, Information Lifecycle Management, NFS, CIFS, Storage Class Memories, Phase Change Memory, Flash Memory, SCSI, Caching, Non-Volatile Memory, Remote Copy, Storage Virtualization, Data De-duplication, File Systems, Archival Storage, Storage Software, Holographic Storage, Storage Class Memory, Long-Term data preservation. Contents: 1. Introduction 2. Storage devices 2.1. Storage Device Industry Overview 2.2. Hard Disk Drives 2.3. Digital Tape Drives 2.4. Optical Storage 2.5. Emerging Storage Technologies 2.5.1. Holographic Storage 2.5.2. Flash Storage 2.5.3. Storage Class Memories 3. Block Storage Systems 3.1. Storage System Functions – Current 3.2. Storage System Functions - Emerging 4. File and Archive Storage Systems 4.1. Network Attached Storage 4.2. Archive Storage Systems 5. Storage Networks 5.1. SAN Fabrics 5.2. IP Fabrics 5.3. Converged Networking 6. Storage Software 6.1. Data Backup 6.2. Data Archive 6.3. Information Lifecycle Management 6.4. Disaster Protection 7. Concluding Remarks; Acknowledgements; Glossary; Bibliography; Biographical Sketches. ©Encyclopedia of Life Support Systems (EOLSS). Summary: Our world is increasingly becoming a data-centric world.
  • Replication Using Group Communication Over a Partitioned Network
    Replication Using Group Communication Over a Partitioned Network. Thesis submitted for the degree “Doctor of Philosophy”, Yair Amir. Submitted to the Senate of the Hebrew University of Jerusalem (1995). This work was carried out under the supervision of Professor Danny Dolev. Acknowledgments: I am deeply grateful to Danny Dolev, my advisor and mentor. I thank Danny for believing in my research, for spending so many hours on it, and for giving it the theoretical touch. His warm support and patient guidance helped me through. I hope I managed to adopt some of his professional attitude and integrity. I thank Dalia Malki for her help during the early stages of the Transis project. Thanks to Idit Keidar for helping me sharpen some of the issues of the replication server. I enjoyed my collaboration with Ofir Amir on developing the coloring model of the replication server. Many thanks to Roman Vitenberg for his valuable insights regarding the extended virtual synchrony model and the replication algorithm. I benefited a lot from many discussions with Ahmad Khalaila regarding distributed systems and other issues. My thanks go to David Breitgand, Gregory Chokler, Yair Gofen, Nabil Huleihel and Rimon Orni, for their contribution to the Transis project and to my research. I am grateful to Michael Melliar-Smith and Louise Moser from the Department of Electrical and Computer Engineering, University of California, Santa Barbara. During two summers, several mutual visits and extensive electronic correspondence, Louise and Michael were involved in almost every aspect of my research, and unofficially served as my co-advisors. The work with Deb Agarwal and Paul Ciarfella on the Totem protocol contributed a lot to my understanding of high-speed group communication.
  • Use External Storage Devices Like Pen Drives, CDs, and DVDs
    Intel® Learn Easy Steps Activity Card: Using external storage devices like Pen Drives, CDs, and DVDs. Since the advent of computers, there has been a need to transfer data between devices and/or store them permanently. You may want to look at a file that you have created or an image that you have taken today one year later. For this it has to be stored somewhere securely. Similarly, you may want to give a document you have created or a digital picture you have taken to someone you know. There are many ways of doing this – online and offline. While online data transfer or storage requires the use of the Internet, offline storage can be managed with minimum resources. The only requirement in this case would be a storage device. Earlier, data storage devices used to mainly be floppy drives, which had a small storage space. However, with the development of computer technology, we today have pen drives, CD/DVD devices and other removable media to store and transfer data. With these, you can store/save/copy files and folders containing data, pictures, videos, audio, etc. from your computer and even transfer them to another computer. They are called secondary storage devices. To access the data stored in these devices, you have to attach them to a computer and access the stored data. Some examples of external storage devices are pen drives, CDs, and DVDs. Introduction to Pen Drive/CD/DVD: A pen drive is a small self-powered drive that connects to a computer directly through a USB port.
  • Replication Vs. Caching Strategies for Distributed Metadata Management in Cloud Storage Systems
    ShardFS vs. IndexFS: Replication vs. Caching Strategies for Distributed Metadata Management in Cloud Storage Systems. Lin Xiao, Kai Ren, Qing Zheng, Garth A. Gibson, Carnegie Mellon University, {lxiao, kair, zhengq, garth}@cs.cmu.edu. Abstract: The rapid growth of cloud storage systems calls for fast and scalable namespace processing. While few commercial file systems offer anything better than federating individually non-scalable namespace servers, a recent academic file system, IndexFS, demonstrates scalable namespace processing based on client caching of directory entries and permissions (directory lookup state) with no per-client state in servers. In this paper we explore explicit replication of directory lookup state in all servers as an alternative to caching this information in all clients. Both eliminate most repeated RPCs to different servers in order to resolve hierarchical permission tests. Our realization for server replicated directory lookup state, ShardFS, employs a novel file system specific hybrid… 1. Introduction: Modern distributed storage systems commonly use an architecture that decouples metadata access from file read and write operations [21, 25, 46]. In such a file system, clients acquire permission and file location information from metadata servers before accessing file contents, so slower metadata service can degrade performance for the data path. Popular new distributed file systems such as HDFS [25] and the first generation of the Google file system [21] have used centralized single-node metadata services and focused on scaling only the data path. However, the single-node metadata server design limits the scalability of the file system in terms of the number of objects stored and concurrent accesses to the file system [40].
  • Replication As a Prelude to Erasure Coding in Data-Intensive Scalable Computing
    DiskReduce: Replication as a Prelude to Erasure Coding in Data-Intensive Scalable Computing. Bin Fan, Wittawat Tantisiriroj, Lin Xiao, Garth Gibson, {binfan, wtantisi, lxiao, garth.gibson}@cs.cmu.edu. CMU-PDL-11-112, October 2011, Parallel Data Laboratory, Carnegie Mellon University, Pittsburgh, PA 15213-3890. Acknowledgements: We would like to thank several people who made significant contributions. Robert Chansler, Raj Merchia and Hong Tang from Yahoo! provided various help. Dhruba Borthakur from Facebook provided statistics and feedback. Bin Fu and Brendan Meeder gave us their scientific applications and data-sets for experimental evaluation. The work in this paper is based on research supported in part by the Betty and Gordon Moore Foundation, by the Department of Energy, under award number DE-FC02-06ER25767, by the Los Alamos National Laboratory, by the Qatar National Research Fund, under award number NPRP 09-1116-1-172, under contract number 54515-001-07, by the National Science Foundation under awards CCF-1019104, CCF-0964474, OCI-0852543, CNS-0546551 and SCI-0430781, and by Google and Yahoo! research awards. We also thank the members and companies of the PDL Consortium (including APC, EMC, Facebook, Google, Hewlett-Packard, Hitachi, IBM, Intel, Microsoft, NEC, NetApp, Oracle, Panasas, Riverbed, Samsung, Seagate, STEC, Symantec, and VMware) for their interest, insights, feedback, and support. Keywords: Distributed File System, RAID, Cloud Computing, DISC. Abstract: The first generation of Data-Intensive Scalable Computing file systems such as Google File System and Hadoop Distributed File System employed n (n ≥ 3) replications for high data reliability, therefore delivering users only about 1/n of the total storage capacity of the raw disks.
  • Control Design and Implementation of Hard Disk Drive Servos by Jianbin
    Control Design and Implementation of Hard Disk Drive Servos, by Jianbin Nie. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Engineering-Mechanical Engineering in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Roberto Horowitz, Chair; Professor Masayoshi Tomizuka; Professor David Brillinger. Spring 2011. © 2011 by Jianbin Nie. Abstract: In this dissertation, the design of servo control algorithms is investigated to produce high-density and cost-effective hard disk drives (HDDs). In order to sustain the continuing increase of HDD data storage density, dual-stage actuator servo systems using a secondary micro-actuator have been developed to improve the precision of read/write head positioning control by increasing their servo bandwidth. In this dissertation, the modeling and control design of dual-stage track-following servos are considered. Specifically, two track-following control algorithms for dual-stage HDDs are developed. The designed controllers were implemented and evaluated on a disk drive with a PZT-actuated suspension-based dual-stage servo system. Usually, the feedback position error signal (PES) in HDDs is sampled on some specified servo sectors with an equidistant sampling interval, which implies that HDD servo systems with a regular sampling rate can be modeled as linear time-invariant (LTI) systems. However, sampling intervals for HDD servo systems are not always equidistant and, sometimes, an irregular sampling rate due to missing PES sampling data is unavoidable.
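The track-following idea at the heart of this abstract can be sketched with a toy discrete-time feedback loop: at every servo sector the controller samples the position error signal (PES) and nudges the head toward track center. The plant model, gains, and sampling below are invented for illustration; the dissertation's actual dual-stage controllers are far richer:

```python
# Toy single-stage track-following loop. Real HDD servos use dual-stage
# actuators and carefully identified plant models; this only illustrates
# the PES-driven feedback idea with an integrator plant and a PI controller.

def simulate(steps=200, kp=0.4, ki=0.08, target=1.0):
    """Drive the head position toward `target` (track-center offset, arbitrary units)."""
    pos, integ = 0.0, 0.0
    for _ in range(steps):
        pes = target - pos            # position error signal, sampled each servo sector
        integ += pes                  # integral term removes steady-state error
        pos += kp * pes + ki * integ  # toy actuator: position moves by the control effort
    return pos

final = simulate()
print(f"residual tracking error: {abs(1.0 - final):.2e}")
```

With these (illustrative) gains the closed loop is stable and the residual error decays geometrically, which is the behavior a track-following servo is designed for.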
  • Middleware-Based Database Replication: the Gaps Between Theory and Practice
    Appears in Proceedings of the ACM SIGMOD Conference, Vancouver, Canada (June 2008). Middleware-based Database Replication: The Gaps Between Theory and Practice. Emmanuel Cecchet (EPFL), George Candea (EPFL & Aster Data Systems), Anastasia Ailamaki (EPFL & Carnegie Mellon University), Lausanne, Switzerland; [email protected], [email protected], [email protected]. ABSTRACT: The need for high availability and performance in data management systems has been fueling a long running interest in database replication from both academia and industry. However, academic groups often attack replication problems in isolation, overlooking the need for completeness in their solutions, while commercial teams take a holistic approach that often misses opportunities for fundamental innovation. This has created over time a gap between academic research and industrial practice. This paper aims to characterize the gap along three axes: performance, availability, and administration. We build on our own experience developing and deploying replication systems in commercial and academic settings, as well as on a large body of prior related work. There exist replication “solutions” for every major DBMS, from Oracle RAC™, Streams™ and DataGuard™ to Slony-I for Postgres, MySQL replication and cluster, and everything in-between. The naïve observer may conclude that such variety of replication systems indicates a solved problem; the reality, however, is the exact opposite. Replication still falls short of customer expectations, which explains the continued interest in developing new approaches, resulting in a dazzling variety of offerings. Even the “simple” cases are challenging at large scale. We deployed a replication system for a large travel ticket brokering system at a Fortune-500 company faced with a workload where 95% of transactions were read-only. Still, the 5% write workload resulted in thousands of update requests per second, which implied that a system using 2-phase-commit, or any other form of…
  • A Crash Course in Wide Area Data Replication
    A Crash Course In Wide Area Data Replication. Jacob Farmer, CTO, Cambridge Computer. SNIA Legal Notice: The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in presentations and literature under the following conditions: Any slide or slides used must be reproduced without modification. The SNIA must be acknowledged as source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. A Crash Course in Wide Area Data Replication, 2, © 2008 Storage Networking Industry Association. All Rights Reserved. Abstract: A Crash Course in Wide Area Data Replication. Replicating data over a WAN sounds pretty straight-forward, but it turns out that there are literally dozens of different approaches, each with its own pros and cons. Which approach is the best? Well, that depends on a wide variety of factors! This class is a fast-paced crash course in the various ways in which data can be replicated, with some commentary on each major approach. We trace the data path from applications to disk drives and examine all of the points along the way wherein replication logic can be inserted. We look at host based replication (application, database, file system, volume level, and hybrids), SAN replication (disk arrays, virtualization appliances, caching appliances, and storage switches), and backup system replication (block level incremental backup, CDP, and de-duplication). This class is not only the fastest way to understand replication technology, it also serves as a foundation for understanding the latest storage virtualization techniques.
  • The Magnetic Hard Disk Drive
    The Magnetic Hard Disk Drive: Today’s Technical Status And Future. Dr. Edward Grochowski, Memory/Storage Consultant ([email protected]); Dr. Peter Goglia, Vice President – Plasma Therm ([email protected]). SNIA Data Storage (DSC) Conference, September 19-22, 2016, Hyatt Santa Clara, Santa Clara, CA. Subjects To Be Addressed: History; Present HDD Market; Future HDD Bytes Shipped; Cloud Computing and HDD Design; Today’s HDD Products and Future Trends; HDD Performance Trend; HDD Cost Analysis; HDD Reliability/Endurance; HDD Technology: Shingle Write, Helium Ambient, Form Factor/Disk Count; Future Technology: HAMR, 2DMR, BPM; Materials: Heads, Media; Competitive Storage Technologies: Flash, NVM; Summary and Conclusions. [Slide: HDD Exabyte Curve From Multiple Sources (Ed Grochowski) — bytes shipped/year, petabytes through zettabytes, production years 1988–2020.] [Slide: Number of Disk Media Grains, Grain Diameter; Write Head Field Limit; Magnetic Coercive Field; Volume of Media Grain.] [Slide: Heat Activated Magnetic Recording (HAMR), Source: Seagate Technology Corp. — heating/cooling cycle in which media coercivity falls below the head field between Toperating and Twrite.] [Slide: HAMR head with laser element.] Heat Activated Magnetic Recording (HAMR): 1. Major HDD Modification. 2. Adds Laser Element to HDD Heads (+3X Mass To Slider). 3. Involves FePtX Magnetic Media (Lower Curie Temperature). 4. Grain Diameter Scaling Levels Out. 5. Disk Lube, Overcoat Modifications. 6.
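The HAMR write cycle outlined in these slides (heat the media spot so its coercivity drops below the head field, write, then let it cool) can be mocked up with a toy coercivity-versus-temperature model. All numbers below are illustrative assumptions, not Seagate's actual media parameters:

```python
# Toy HAMR model: coercivity falls (linearly here, for simplicity) as the
# media heats toward its Curie temperature, letting a modest head field
# write the bit. Every constant is an invented illustration.

T_OPERATING = 50.0   # deg C, drive ambient (illustrative)
T_CURIE = 450.0      # deg C, coercivity reaches zero at the Curie point (illustrative)
HC_COLD = 30.0       # kOe, media coercivity at operating temperature (illustrative)
HEAD_FIELD = 10.0    # kOe, maximum field the write head can deliver (illustrative)

def coercivity(temp_c):
    """Linear fall-off between operating and Curie temperature."""
    if temp_c >= T_CURIE:
        return 0.0
    frac = (T_CURIE - temp_c) / (T_CURIE - T_OPERATING)
    return HC_COLD * min(frac, 1.0)

def can_write(temp_c):
    """A bit flips only where the head field exceeds the local coercivity."""
    return HEAD_FIELD > coercivity(temp_c)

print(can_write(T_OPERATING))  # cold media is too "hard" for the head
print(can_write(350.0))        # the laser-heated spot becomes writable
```

Because only the laser-heated spot is writable, the heat spot (rather than the head's magnetic field width) defines the written track, which is why the slides emphasize the laser element added to the head.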
  • Replication Under Scalable Hashing: a Family of Algorithms for Scalable Decentralized Data Distribution
    Replication Under Scalable Hashing: A Family of Algorithms for Scalable Decentralized Data Distribution. R. J. Honicky, Ethan L. Miller ([email protected], [email protected]), Storage Systems Research Center, Jack Baskin School of Engineering, University of California, Santa Cruz. Abstract: Typical algorithms for decentralized data distribution work best in a system that is fully built before it is first used; adding or removing components results in either extensive reorganization of data or load imbalance in the system. We have developed a family of decentralized algorithms, RUSH (Replication Under Scalable Hashing), that maps replicated objects to a scalable collection of storage servers or disks. RUSH algorithms distribute objects to servers according to user-specified server weighting. While all RUSH variants support addition of servers to the system, different variants have different characteristics with respect to lookup time in petabyte-scale systems and performance… Block management is handled by the disk as opposed to a dedicated server. Because block management on individual disks requires no inter-disk communication, this redistribution of work comes at little cost in performance or efficiency, and has a huge benefit in scalability, since block layout is completely parallelized. There are differing opinions about what object-based storage is or should be, and how much intelligence belongs in the disk. In order for a device to be an OSD, however, each disk or disk subsystem must have its own filesystem; an OSD manages its own allocation of blocks and disk layout. To store data on an OSD, a client provides the data and a key for the data. To retrieve the data, a client provides a key.
  • Score: a Scalable One-Copy Serializable Partial Replication Protocol⋆
    SCORe: a Scalable One-Copy Serializable Partial Replication Protocol⋆. Sebastiano Peluso 1,2, Paolo Romano 2, and Francesco Quaglia 1. 1 Sapienza University, Rome, Italy, {peluso,quaglia}@dis.uniroma1.it; 2 IST/INESC-ID, Lisbon, Portugal, {peluso,romanop}@gsd.inesc-id.pt. Abstract: In this article we present SCORe, a scalable one-copy serializable partial replication protocol. Differently from any other literature proposal, SCORe jointly guarantees the following properties: (i) it is genuine, thus ensuring that only the replicas that maintain data accessed by a transaction are involved in its processing, and (ii) it guarantees that read operations always access consistent snapshots, thanks to a one-copy serializable multiversion scheme, which never aborts read-only transactions and spares them from any (distributed) validation phase. This makes SCORe particularly efficient in presence of read-intensive workloads, as typical of a wide range of real-world applications. We have integrated SCORe into a popular open source distributed data grid and performed a large scale experimental study with well-known benchmarks using both private and public cloud infrastructures. The experimental results demonstrate that SCORe provides stronger consistency guarantees (namely One-Copy Serializability) than existing multiversion partial replication protocols at no additional overhead. Keywords: Distributed Transactional Systems, Partial Replication, Scalability, Multiversioning. 1 Introduction: In-memory transactional data platforms, often referred to as NoSQL data grids, such as Cassandra, BigTable, or Infinispan, have become the reference data management technology for grid and cloud computing systems. For these platforms, data replication represents the key mechanism to ensure both adequate performance and fault-tolerance, since it allows (a) distributing the load across the different nodes within the platform, and (b) ensuring data survival in the event of node failures.