United States Patent (10) Patent No.: US 7,290,102 B2, Lubbers et al.


US007290102B2

(12) United States Patent
(10) Patent No.: US 7,290,102 B2
Lubbers et al.
(45) Date of Patent: Oct. 30, 2007

(54) POINT IN TIME STORAGE COPY

(75) Inventors: Clark E. Lubbers, Colorado Springs, CO (US); James M. Reiser, Colorado Springs, CO (US); Anuja Korgaonkar, Colorado Springs, CO (US); Randy L. Roberson, New Port Richey, FL (US); Robert G. Bean, Monument, CO (US)

(73) Assignee: Hewlett-Packard Development Company, L.P., Houston, TX (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 360 days.

(21) Appl. No.: 11/081,061

(22) Filed: Mar. 15, 2005

(65) Prior Publication Data: US 2005/0160243 A1, Jul. 21, 2005

Related U.S. Application Data

(63) Continuation of application No. 10/080,961, filed on Oct. 22, 2001, now Pat. No. 6,915,392, which is a continuation-in-part of application No. 09/872,597, filed on Jun. 1, 2001, now Pat. No. 6,961,838.

(51) Int. Cl.: G06F 12/16 (2006.01); G06F 17/30 (2006.01)

(52) U.S. Cl.: 711/162; 707/204; 711/206

(58) Field of Classification Search: None. See application file for complete search history.

Primary Examiner: Gary Portka

(57) ABSTRACT

A storage system permits virtual storage of user data by implementing a logical disk mapping structure that provides access to user data stored on physical storage media, and methods for generating point-in-time copies, or snapshots, of logical disks. A snapshot logical disk is referred to as a predecessor logical disk, and the original logical disk is referred to as a successor logical disk. Creating a snapshot involves creating predecessor logical disk mapping data structures and populating the data structures with metadata that maps the predecessor logical disk to the user data stored on physical media. Logical disks include metadata that indicates whether user information is shared between logical disks. Multiple generations of snapshots may be created, and user data may be shared between these generations. Methods are disclosed for maintaining data accuracy when write I/O operations are directed to a logical disk.

17 Claims, 15 Drawing Sheets

[Sheet 4 diagram residue: logical disk mapping structures (RSS ID, physical member selection, RAID info) with LMAP pointers, RSEG# entries, RSD pointers, and PSEG# members.]

[Fig. 9, Sheet 7 flowchart: create predecessor PLDMC; quiesce write I/O operations to successor logical disk; flush successor logical disk cache data; set share bits in successor PLDMC; create L2MAP and LMAP for predecessor logical disk; populate predecessor L2MAP with predecessor LMAP pointers; populate predecessor LMAP with successor LMAP contents; set share bits in successor and predecessor LMAPs; unquiesce successor logical disk.]

[Figs. 10 and 11, Sheets 8 and 9: successor/predecessor share-bit (Ss/Sp) state diagrams.]
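The scheme the abstract describes, a predecessor snapshot that initially shares every mapped segment with its successor, with per-segment share bits driving copy-on-write, can be sketched as a toy model. All names here (LogicalDisk, SEG_SIZE, the method signatures) are illustrative assumptions, not the patent's actual PLDMC/L2MAP/LMAP structures, and cache flushing is omitted:

```python
SEG_SIZE = 4  # bytes per segment in this toy model


class LogicalDisk:
    def __init__(self, nsegs):
        # lmap: per-segment pointer to a physical segment (a bytearray here)
        self.lmap = [bytearray(SEG_SIZE) for _ in range(nsegs)]
        self.shared = [False] * nsegs  # per-segment share bit
        self.quiesced = False

    def snapshot(self):
        """Create a predecessor logical disk (loosely following Fig. 9)."""
        self.quiesced = True                    # quiesce write I/O to successor
        pred = LogicalDisk(len(self.lmap))      # create predecessor mapping structures
        pred.lmap = list(self.lmap)             # populate with successor's LMAP contents
        pred.shared = [True] * len(self.lmap)   # set share bits in predecessor
        self.shared = [True] * len(self.lmap)   # set share bits in successor
        self.quiesced = False                   # unquiesce successor
        return pred

    def write(self, seg, data):
        """Copy-on-write keeps point-in-time data accurate under write I/O."""
        assert not self.quiesced
        if self.shared[seg]:
            # Unshare: give this disk a private copy before modifying it.
            self.lmap[seg] = bytearray(self.lmap[seg])
            self.shared[seg] = False
        self.lmap[seg][:len(data)] = data

    def read(self, seg):
        return bytes(self.lmap[seg])


disk = LogicalDisk(2)
disk.write(0, b"orig")
snap = disk.snapshot()   # point-in-time copy; every segment is shared
disk.write(0, b"new!")   # unshares segment 0, then overwrites it
print(disk.read(0))      # b'new!'
print(snap.read(0))      # b'orig' -- the snapshot still sees the old data
```

A write to a shared segment first gives the writing disk a private physical copy, so the other generation continues to see the point-in-time contents; unshared segments are overwritten in place.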
Recommended publications
  • Cortex-A9 Single Core Microarchitecture
  • Memory Hierarchy Summary
  • Lecture 22: Mass Storage
  • Tuning IBM System X Servers for Performance
  • POWER7 and POWER7+ Optimization and Tuning Guide
  • Fasttrak Tx2200 / Tx2300 Tx4200 / Tx4300 User Manual
  • Introduction to Computer Systems 15-213/18-243, Spring 2009
  • Spheras Storage Director Installation & User Guide
  • Algorithms and Data Structures for External Memory Algorithms and Data Structures 2:4 for External Memory Jeffrey Scott Vitter
  • CS429: Computer Organization and Architecture Storage Technologies
  • The Logical Disk: a New Approach to Improving File Systems