University of California San Diego

Total Pages: 16

File Type: PDF, Size: 1020 KB

UNIVERSITY OF CALIFORNIA SAN DIEGO

Enabling Non-Volatile Memory for Data-intensive Applications

A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Computer Science (Computer Engineering)

by

Xiao Liu

Committee in charge:
Professor Jishen Zhao, Chair
Professor Ryan Kastner
Professor Farinaz Koushanfar
Professor Tajana Rosing
Professor Steve Swanson

2021

Copyright Xiao Liu, 2021. All rights reserved.

The dissertation of Xiao Liu is approved, and it is acceptable in quality and form for publication on microfilm and electronically.

University of California San Diego, 2021

DEDICATION
To my parents.

EPIGRAPH
Oh, let the sun beat down upon my face.
With stars to fill my dreams.
I am a traveler of both time and space.
To be where I have been.
Sit with elders of the gentle race.
This world has seldom seen.
Talk of days for which they sit and wait.
All will be revealed.
— Robert A. Plant

TABLE OF CONTENTS
Dissertation Approval Page
Dedication
Epigraph
Table of Contents
List of Figures
List of Tables
Acknowledgements
Vita
Abstract of the Dissertation

Chapter 1  Introduction
  1.1 Thesis Statement
    1.1.1 Enabling NVM for Big Data Applications
    1.1.2 Enabling NVM for Neural Network Applications
  1.2 Thesis Outline

Chapter 2  Background and Motivation
  2.1 The Hardware of Non-Volatile Memory
  2.2 Data-intensive Applications
    2.2.1 Big Data Applications
    2.2.2 Neural Network Applications
  2.3 NVM DIMM and Persistent Memory
    2.3.1 Motivation: Lack of Reliability Support in NVM DIMM
  2.4 RRAM-based NN Accelerators
    2.4.1 Motivation: Missing Features Demanded by the Neural Network Applications

Chapter 3  Persistent Memory Workload Characterization: A Hardware Perspective
  3.1 Background and Motivation
    3.1.1 Background on Persistent Memory Systems
    3.1.2 The Need for Understanding Hardware Behavior
    3.1.3 Row Buffer in Memory Devices and Its Impact on NVM Emulation
  3.2 Methodology
    3.2.1 System Platforms
    3.2.2 File Systems
    3.2.3 Persistent Memory Workloads
  3.3 The Impact of Write Intensity
    3.3.1 Impact of Write Intensity on System Performance
    3.3.2 Bonnie Results
  3.4 The Impact of NVM Access Locality
    3.4.1 Buffer Hit Rate
    3.4.2 Buffer Size
  3.5 Microarchitecture Analysis
  3.6 Implications on Persistent Memory System Design
  3.7 Conclusion

Chapter 4  Binary Star: Coordinated Reliability in Heterogeneous Memory Systems for High Performance and Scalability
  4.1 Background and Motivation
    4.1.1 DRAM Main Memory
    4.1.2 3D DRAM Last Level Cache
    4.1.3 NVM Main Memory
    4.1.4 Issues with Decoupled Reliability
  4.2 Binary Star Design
    4.2.1 Coordinated Reliability Scheme
    4.2.2 Coordinating Wear Leveling and Consistent Cache Writeback
    4.2.3 3D DRAM LLC Error Correction and Error Recovery
    4.2.4 Putting it All Together: An Example
  4.3 Implementation
    4.3.1 3D DRAM LLC Modification
    4.3.2 Memory Controller Modifications
    4.3.3 System Software Modifications
    4.3.4 Binary Star Overheads
  4.4 Experimental Methodology
  4.5 Results
    4.5.1 Reliability
    4.5.2 Performance
    4.5.3 Impact of the Periodic Forced Writeback Interval
  4.6 Conclusion

Chapter 5  HR3AM: A Heat Resilient Design for RRAM-based Neuromorphic Computing
  5.1 Background and Motivation
    5.1.1 Neural Network Data Mapping on RRAM Crossbar
    5.1.2 Thermal Issues in RRAM Accelerator
  5.2 HR3AM Design
    5.2.1 HR3AM Overview
    5.2.2 Heat-resilient Weight Adjustment
    5.2.3 Dynamic Thermal Management
  5.3 Evaluation
    5.3.1 Methodology
    5.3.2 Inference Accuracy
    5.3.3 Thermal Status
    5.3.4 Performance and Energy
  5.4 Conclusion

Chapter 6  Mirage: A Highly Paralleled and Flexible RRAM Accelerator
  6.1 Background & Motivation
    6.1.1 Shared Data Induced Data Dependency
    6.1.2 Issues with Existing Hardware Assignments
  6.2 Mirage
    6.2.1 Finer-grained Parallel RRAM Architecture (FPRA)
    6.2.2 Auto Assignment
  6.3 Mirage Implementation
    6.3.1 The Implementation of FPRA
    6.3.2 Auto Assignment Tool Implementation
  6.4 Experimental Setup
  6.5 Evaluations
    6.5.1 Performance
    6.5.2 Flexibility
    6.5.3 Power Efficiency
  6.6 Conclusion

Chapter 7  Conclusion
  7.1 Summary of the Thesis
  7.2 Future Works

Bibliography

LIST OF FIGURES
Figure 2.1: Overview of RRAM-based DNN accelerator.
Figure 3.1: System platform configuration.
Figure 3.2: Throughput of FFSB on four systems. The x-axis represents the read-to-write ratio. The y-axis represents the throughput.
Figure 3.3: Normalized results of Bonnie. The y-axis stands for all six metrics. The x-axis stands for the normalized number of each metric.
Figure 3.4: Filebench throughputs under various memory models. The y-axis represents the throughput. The x-axis represents memory models, which include DRAM, classic NVM with no buffer, and NVM with buffer hit rates from 50% to 90%.
Figure 3.5: Filebench throughputs of DRAM and NVMs with different row buffer sizes. The workload has 60% sequential read operations.
Figure 3.6: Filebench throughputs of DRAM and NVMs with different row buffer sizes. The workload has 80% sequential read operations.
Figure 3.7: Correlation between workload performance and five hardware counter statistics. The x-axis represents different metrics. The y-axis represents the correlation coefficient between the throughput and the metric.
Figure 4.1: Reliability and throughput of various LLC and main memory hierarchy configurations normalized to 28nm DRAM with RECC. We define "reliability" as the reciprocal of device failures in time (FIT) [1, 2], i.e., 1/FIT. Reliability data is based on a recent study [1].
Figure 4.2: (a) Multiple reliability mechanisms for a cache line across LLC and main memory. (b) Fraction of clean LLC lines during application execution. We collected data for ten seconds after the benchmark finishes the warm-up stage.
Figure 4.3: Basic architecture configuration.
Figure 4.4: Binary Star overview. (a) Periodic forced writeback and (b) consistent cache writeback. (c) Binary Star error correction and error recovery mechanisms. A solid filled square represents a checkpointed consistent data block. A striped filled square represents a remapped inconsistent data block.
Figure 4.5: An example of running an application on Binary Star.
Figure 4.6: Modifications to NVM wear leveling.
Figure 4.7: State machine of the Binary Star software daemon.
Figure 4.8: Device failure rates of 3D DRAM for traditional RECC [3], RECC with IECC [1], Binary Star, and Chipkill with IECC [1, 4] vs. single-cell bit error rate. Vertical lines represent 3D DRAM technology nodes (28nm and sub-20nm).
Figure 4.9: System performance of different memory configurations with no ECC and Binary Star. We normalize the throughput of each configuration to the throughput of 3D DRAM LLC with DRAM.
Figure 4.10: Performance of Binary Star normalized to the performance of the 3D DRAM + DRAM configuration, as PCM latency is varied as a multiple of its value in Table 4.1.
Figure 4.11: Effect of varying the periodic forced writeback interval on (a) throughput, (b) probability of rollbacks (i.e., error rate on a dirty line), and (c) NVM writes.
Figure 5.1: Temperature impact on (a) RRAM cell conductance and (b) CNN applications' relative inference accuracy.
Figure 5.2: Temperature distribution of a single RRAM chip when running VGG16, InceptionV3, and ResNet50.
Figure 5.3: Overview of HR3AM structure.
Figure 5.4: Bitwidth downgrading hardware in the crossbar array.
Figure 5.5: …

Recommended publications
  • Copy on Write Based File Systems Performance Analysis and Implementation
Copy On Write Based File Systems: Performance Analysis and Implementation
Sakis Kasampalis
Kongens Lyngby 2010, IMM-MSC-2010-63
Technical University of Denmark, Department of Informatics, Building 321, DK-2800 Kongens Lyngby, Denmark. Phone +45 45253351, Fax +45 45882673, [email protected], www.imm.dtu.dk

Abstract
In this work I am focusing on Copy On Write based file systems. Copy On Write is used on modern file systems for providing (1) metadata and data consistency using transactional semantics, (2) cheap and instant backups using snapshots and clones. This thesis is divided into two main parts. The first part focuses on the design and performance of Copy On Write based file systems. Recent efforts aiming at creating a Copy On Write based file system are ZFS, Btrfs, ext3cow, Hammer, and LLFS. My work focuses only on ZFS and Btrfs, since they support the most advanced features. The main goals of ZFS and Btrfs are to offer a scalable, fault tolerant, and easy to administrate file system. I evaluate the performance and scalability of ZFS and Btrfs. The evaluation includes studying their design and testing their performance and scalability against a set of recommended file system benchmarks. Most computers are already based on multi-core and multiple processor architectures. Because of that, the need for using concurrent programming models has increased. Transactions can be very helpful for supporting concurrent programming models, which ensure that system updates are consistent. Unfortunately, the majority of operating systems and file systems either do not support transactions at all, or they simply do not expose them to the users.
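As a toy illustration of the two COW properties the abstract highlights (transactional-style updates and cheap, instant snapshots), here is a minimal Python sketch of a copy-on-write block store. It is a conceptual model, not ZFS or Btrfs internals; every name in it is invented for the sketch.

```python
# Toy copy-on-write block store: writes never overwrite a block that a snapshot
# still references; they allocate a fresh block and repoint the live mapping.

class CowStore:
    def __init__(self):
        self.blocks = {}      # block_id -> bytes (immutable once written)
        self.live = {}        # logical block number -> block_id
        self.snapshots = {}   # snapshot name -> frozen copy of the mapping
        self._next = 0

    def _alloc(self, data):
        bid = self._next
        self._next += 1
        self.blocks[bid] = data
        return bid

    def write(self, lbn, data):
        # Copy-on-write: the live mapping now points at a new block;
        # snapshots keep referencing the old block id.
        self.live[lbn] = self._alloc(data)

    def read(self, lbn, snapshot=None):
        mapping = self.snapshots[snapshot] if snapshot else self.live
        return self.blocks[mapping[lbn]]

    def snapshot(self, name):
        # A snapshot is just a cheap, instant copy of the block mapping.
        self.snapshots[name] = dict(self.live)


store = CowStore()
store.write(0, b"v1")
store.snapshot("before")
store.write(0, b"v2")
print(store.read(0))              # b'v2'  (live data)
print(store.read(0, "before"))    # b'v1'  (snapshot unaffected by the overwrite)
```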
  • PDF, 32 Pages
Helge Meinhard / CERN, V2.0, 30 October 2015
HEPiX Fall 2015 at Brookhaven National Lab
After 2004, the lab, located on Long Island in the State of New York, U.S.A., was host to a HEPiX workshop again. Access to the site was considerably easier for the registered participants than 11 years ago. The meeting took place in a very nice and comfortable seminar room well adapted to the size and style of a meeting such as HEPiX. It was equipped with advanced (sometimes too advanced for the session chairs to master!) AV equipment and power sockets at each seat. Wireless networking worked flawlessly and with good bandwidth. The welcome reception on Monday at Wading River on the Long Island Sound and the workshop dinner on Wednesday at the ocean coast in Patchogue showed more of the beauty of the rather natural region around the lab. For those interested, the hosts offered tours of the BNL RACF data centre as well as of the STAR and PHENIX experiments at RHIC. The meeting ran very smoothly thanks to an efficient and experienced team of local organisers headed by Tony Wong, who as North-American HEPiX co-chair also co-ordinated the workshop programme.

Monday 12 October 2015
Welcome (Michael Ernst / BNL)
On behalf of the lab, Michael welcomed the participants, expressing his gratitude to the audience for having accepted BNL's invitation. He emphasised the importance of computing for high-energy and nuclear physics. He then introduced the lab, focusing on physics, chemistry, biology, material science etc. The total head count of BNL-paid people is close to 3'000.
  • Wang Paper (Prepublication)
Riverbed: Enforcing User-defined Privacy Constraints in Distributed Web Services
Frank Wang (MIT CSAIL); Ronny Ko, James Mickens (Harvard University)

Abstract
Riverbed is a new framework for building privacy-respecting web services. Using a simple policy language, users define restrictions on how a remote service can process and store sensitive data. A transparent Riverbed proxy sits between a user's front-end client (e.g., a web browser) and the back-end server code. The back-end code remotely attests to the proxy, demonstrating that the code respects user policies; in particular, the server code attests that it executes within a Riverbed-compatible managed runtime that uses IFC to enforce user policies. If attestation succeeds, the proxy releases the user's data, tagging it with the user-defined policies. On the server-side, the Riverbed runtime places all data with com-

1.1 A Loss of User Control
Unfortunately, there is a disadvantage to migrating application code and user data from a user's local machine to a remote datacenter server: the user loses control over where her data is stored, how it is computed upon, and how the data (and its derivatives) are shared with other services. Users are increasingly aware of the risks associated with unauthorized data leakage [11, 62, 82], and some governments have begun to mandate that online services provide users with more control over how their data is processed. For example, in 2016, the EU passed the General Data Protection Regulation [28]. Articles 6, 7, and 8 of the GDPR state that users must give consent for their data to be accessed.
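As a rough sketch of the policy-tagging idea the abstract describes (data carries user-defined policies, and the runtime refuses releases that would violate them), here is a toy Python model. The policy name, functions, and service names are invented for illustration and are not Riverbed's actual API.

```python
# Toy model of IFC-style policy enforcement: values carry policy labels,
# derived values inherit the union of their inputs' labels, and a release
# is blocked when a label forbids the destination.

from dataclasses import dataclass, field

@dataclass
class Labeled:
    value: object
    policies: frozenset = field(default_factory=frozenset)  # e.g. {"no_external_share"}

def combine(a, b):
    # Derived data inherits the union of its inputs' policies (taint propagation).
    return a.policies | b.policies

def send_to_service(data, service, trusted):
    if "no_external_share" in data.policies and service not in trusted:
        raise PermissionError(f"policy forbids sharing with {service}")
    print(f"released {data.value!r} to {service}")

email = Labeled("alice@example.com", frozenset({"no_external_share"}))
domain = Labeled(email.value.split("@")[1], combine(email, Labeled("constant")))

send_to_service(email, "analytics.internal", trusted={"analytics.internal"})
try:
    send_to_service(domain, "ads.example.net", trusted={"analytics.internal"})
except PermissionError as e:
    print("blocked:", e)    # the derived value is still protected
```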
  • Master Boot Record Vs Guid Mac
Master Boot Record Vs Guid Mac
UEFI supports both MBR and GPT, and GPT makes for the more logical choice based on compatibility. When formatting a drive or hard disk, open the Disk Management window on Windows, or use Disk Utility's Erase panel and the GUID Partition Map on the Mac; the difference between MBR and GPT is worth keeping in mind while partitioning a drive. Each record in a directory is searched by comparing its hash value. When a drive is formatted in Disk Utility, a new Container is created as well. MBR conversion involves the main VBR and the backup VBR. At least three Linux emergency systems ship with GPT fdisk. Interoperability of the file system is also important: although the disk can be read natively by Linux, a third-party partition manager can help when Disk Utility is unable to repair an MBR-formatted external USB hard disk drive. With backups in place (for example Time Machine), you can reformat the storage device; GPT can also detect a damaged partition table and attempt to recover the data from the backup copy stored at another location on the disk.
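To make the on-disk difference concrete, here is a hedged Python sketch that distinguishes an MBR image from a GPT one by reading the MBR boot signature (0x55AA at offset 510), the protective-MBR partition type (0xEE), and the GPT header signature ("EFI PART" in LBA 1). It assumes 512-byte sectors and writes a tiny synthetic image so it runs without real hardware.

```python
# Sketch: classify a raw disk image as MBR or GPT by its on-disk signatures.

def detect_partition_scheme(path):
    with open(path, "rb") as f:
        lba0 = f.read(512)      # MBR (or protective MBR on a GPT disk)
        lba1 = f.read(512)      # GPT header lives in LBA 1

    if len(lba0) < 512 or lba0[510:512] != b"\x55\xaa":
        return "no valid MBR boot signature"

    # Partition table: 4 entries of 16 bytes starting at offset 446;
    # type byte 0xEE marks the protective MBR that fronts a GPT disk.
    types = [lba0[446 + 16 * i + 4] for i in range(4)]
    if 0xEE in types and lba1[:8] == b"EFI PART":
        return "GPT (with protective MBR)"
    return "MBR"

# Build a tiny synthetic MBR-only image so the example runs anywhere.
img = bytearray(1024)
img[510:512] = b"\x55\xaa"
with open("disk.img", "wb") as f:
    f.write(img)

print(detect_partition_scheme("disk.img"))   # -> "MBR"
```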
  • Hardware-Driven Evolution in Storage Software by Zev Weiss
Hardware-Driven Evolution in Storage Software
by Zev Weiss

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the UNIVERSITY OF WISCONSIN–MADISON, 2018. Date of final oral examination: June 8, 2018.

The dissertation is approved by the following members of the Final Oral Committee:
Andrea C. Arpaci-Dusseau, Professor, Computer Sciences
Remzi H. Arpaci-Dusseau, Professor, Computer Sciences
Michael M. Swift, Professor, Computer Sciences
Karthikeyan Sankaralingam, Professor, Computer Sciences
Johannes Wallmann, Associate Professor, Mead Witter School of Music

© Copyright by Zev Weiss 2018. All Rights Reserved.

To my parents, for their endless support, and my cousin Charlie, one of the kindest people I've ever known.

Acknowledgments
I have taken what might be politely called a "scenic route" of sorts through grad school. While Ph.D. students more focused on a rapid graduation turnaround time might find this regrettable, I am glad to have done so, in part because it has afforded me the opportunities to meet and work with so many excellent people along the way. I owe debts of gratitude to a large cast of characters: To my advisors, Andrea and Remzi Arpaci-Dusseau. It is one of the most common pieces of wisdom imparted on incoming grad students that one's relationship with one's advisor (or advisors) is perhaps the single most important factor in whether these years of your life will be pleasant or unpleasant, and I feel exceptionally fortunate to have ended up with the advisors that I've had.
  • BSD Overview
BSD Overview
Jim Brown, ISD – May 24, 2012

Outline:
I – A Brief History of BSD: ATT/UCB Partnership; ATT (USL) Lawsuit; BSD Family Tree; BSD License
II – The Core BSD Projects: Main Features; Community; Future Directions
III – Cool Hot Stuff: Batteries Included; ZFS, Hammer; pf Firewall, pfSense; Capsicum; Virtualization Topics (Jails, Xen, etc.); Desktop PC-BSD
IV – BSD Certification

A (Very) Brief History of BSD
1971 – ATT cheaply licenses Unix source code to many organizations, including UCB as educational material.
1975 – Ken Thompson takes a sabbatical from ATT and brings the latest Unix source on tape to UCB, his alma mater, to run on a PDP-11 which UCB provided. (Industry/academic partnerships were much more common back then.) Computer Science students (notably Bill Joy and Chuck Haley) at UCB begin to make numerous improvements to Unix and make them available on tape as the "Berkeley Software Distribution" – BSD.
(Sidebar: some notable CSRG members.)
1980 – The Computer Science Research Group (CSRG) forms at UCB with DARPA funding to make many more improvements to Unix: job control, autoreboot, fast filesystem, gigabit address space, Lisp, IPC, sockets, TCP/IP stack + applications, r* utils, machine independence, rewriting almost all ATT code with UCB/CSRG code, including many ports.
1991 – The Networking Release 2 tape is released on the Internet via anonymous FTP. A 386 port quickly follows by Bill and Lynne Jolitz. The NetBSD group is formed – the first Open Source community entirely on the Internet.
1992 – A commercial version, BSDI (sold for $995, 1-800-ITS-UNIX), draws the ire of USL/ATT.
  • Data Storage on Unix
Data Storage on Unix
Patrick Louis, 2017-11-05. Published online on venam.nixers.net. © Patrick Louis 2017.
This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of the rightful author. First published in eBook format 2017. The author has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents: Introduction; Ideas and Concepts; The Overall Generic Architecture; Lowest Level – Hardware & Limitation (The Medium, Connectors, The Drivers, A Mention on Block Devices and the Block Layer); Mid Level – Partitions and Volumes Organisation (What's a Partition); High Level (A Big Overview of FS, FS Examples, A Bit About History and the Origin of Unix FS, VFS & POSIX I/O Layer); POSIX I/O; Management, Commands, & Forensic; Conclusion; Bibliography.

Introduction
Libraries and banks, amongst other institutions, used to have a filing system; some still have them. They had drawers, holders, and many tools to store the paperwork and organise it so that they could easily retrieve, through some documented process, at a later stage whatever they needed. That's where the name filesystem in the computer world emerges from, and this is one of the subjects of this episode. We're going to discuss data storage on Unix with some discussion about filesystems and an emphasis on storage devices.
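Since the contents end at the POSIX I/O layer, a small example of those calls may help; Python's os module maps nearly one-to-one onto open(2), write(2), lseek(2), read(2), and close(2). The file name below is arbitrary.

```python
# The POSIX I/O layer, exercised through Python's os module.
import os

fd = os.open("demo.txt", os.O_CREAT | os.O_RDWR, 0o644)  # open(2) with mode bits
os.write(fd, b"hello, unix storage\n")                    # write(2)
os.lseek(fd, 0, os.SEEK_SET)                              # lseek(2): rewind to start
print(os.read(fd, 64))                                    # read(2)
print(os.fstat(fd).st_size, "bytes")                      # fstat(2) metadata
os.close(fd)                                              # close(2)
os.unlink("demo.txt")                                     # unlink(2): remove the file
```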
  • Evolutionary Tree-Structured Storage: Concepts, Interfaces, and Applications
Evolutionary Tree-Structured Storage: Concepts, Interfaces, and Applications
Dissertation submitted for the degree of Doctor of Natural Sciences, presented by Marc Yves Maria Kramis at the Faculty of Sciences, Department of Computer and Information Science. Date of the oral examination: 22.04.2014. First supervisor: Prof. Dr. Marcel Waldvogel. Second supervisor: Prof. Dr. Marc Scholl.

Abstract
Life is subdued to constant evolution. So is our data, be it in research, business or personal information management. From a natural, evolutionary perspective, our data evolves through a sequence of fine-granular modifications resulting in myriads of states, each describing our data at a given point in time. From a technical, anti-evolutionary perspective, mainly driven by technological and financial limitations, we treat the modifications as transient commands and only store the latest state of our data. It is surprising that the current approach is to ignore the natural evolution and to willfully forget about the sequence of modifications and therefore the past state. Sticking to this approach causes all kinds of confusion, complexity, and performance issues. Confusion, because we still somehow want to retrieve past state but are not sure how. Complexity, because we must repeatedly work around our own obsolete approaches. Performance issues, because confusion times complexity hurts. It is not surprising, however, that intelligence agencies notoriously try to collect, store, and analyze what the broad public willfully forgets. Significantly faster and cheaper random-access storage is the key driver for a paradigm shift towards remembering the sequence of modifications. We claim that (1) faster storage allows to efficiently and cleverly handle finer-granular modifications and (2) that mandatory versioning elegantly exposes past state, radically simplifies the applications, and effectively lays a solid foundation for backing up, distributing and scaling of our data.
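A toy sketch of the paradigm the abstract argues for (keep the full sequence of fine-granular modifications and derive any past state on demand) might look like this in Python; it is illustrative only, not the author's actual interface.

```python
# Toy versioned store: an append-only log of modifications from which any
# past state can be reconstructed by replay, instead of keeping only the
# latest state and forgetting history.

class VersionedStore:
    def __init__(self):
        self.log = []                      # append-only list of (key, value) edits

    def set(self, key, value):
        self.log.append((key, value))
        return len(self.log)               # revision number after this edit

    def state_at(self, revision):
        # Replay the first `revision` modifications to rebuild that state.
        state = {}
        for key, value in self.log[:revision]:
            state[key] = value
        return state

store = VersionedStore()
r1 = store.set("title", "draft")
r2 = store.set("title", "final")
print(store.state_at(r1))   # {'title': 'draft'}  -- past state is still reachable
print(store.state_at(r2))   # {'title': 'final'}
```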
  • FreeBSD Enterprise Storage – Polish BSD User Group, 2020/02/11
    FreeBSD Enterprise Storage Polish BSD User Group Welcome 2020/02/11 FreeBSD Enterprise Storage Sławomir Wojciech Wojtczak [email protected] vermaden.wordpress.com twitter.com/vermaden bsd.network/@vermaden https://is.gd/bsdstg FreeBSD Enterprise Storage Polish BSD User Group What is !nterprise" 2020/02/11 What is Enterprise Storage? The wikipedia.org/wiki/enterprise_storage page tells nothing about enterprise. Actually just redirects to wikipedia.org/wiki/data_storage page. The other wikipedia.org/wiki/computer_data_storage page also does the same. The wikipedia.org/wiki/enterprise is just meta page with lin s. FreeBSD Enterprise Storage Polish BSD User Group What is !nterprise" 2020/02/11 Common Charasteristics o Enterprise Storage ● Category that includes ser$ices/products designed &or !arge organizations. ● Can handle !arge "o!umes o data and !arge num%ers o sim#!tano#s users. ● 'n$olves centra!ized storage repositories such as SA( or NAS de$ices. ● )equires more time and experience%expertise to set up and operate. ● Generally costs more than consumer or small business storage de$ices. ● Generally o&&ers higher re!ia%i!it'%a"aila%i!it'%sca!a%i!it'. FreeBSD Enterprise Storage Polish BSD User Group What is !nterprise" 2020/02/11 EnterpriCe or EnterpriSe? DuckDuckGo does not pro$ide search results count +, Goog!e search &or enterprice word gi$es ~ 1 )00 000 results. Goog!e search &or enterprise word gi$es ~ 1 000 000 000 results ,1000 times more). ● /ost dictionaries &or enterprice word sends you to enterprise term. ● Given the *+,CE o& many enterprise solutions it could be enterPRICE 0 ● 0 or enterpri$e as well +.
  • 855400Cd46b6c9e7683b5ace58a55234.Pdf
Ars Technica UK
Encryption options are great, but Apple's attitude on checksums is still funky.
by Adam H. Leventhal – Jun 26, 2016 1:00pm UTC
This article was originally published on Adam Leventhal's blog in multiple parts.
Apple announced a new file system that will make its way into all of its OS variants (macOS, tvOS, iOS, watchOS) in the coming years. Media coverage to this point has been mostly breathless elongations of Apple's developer documentation. With a dearth of detail I decided to attend the presentation and Q&A with the APFS team at WWDC. Dominic Giampaolo and Eric Tamura, two members of the APFS team, gave an overview to a packed room; along with other members of the team, they patiently answered questions later in the day. With those data points and some first-hand usage I wanted to provide an overview and analysis both as a user of Apple-ecosystem products and as a long-time operating system and file system developer. The overview is divided into several sections. I'd encourage you to jump around to topics of interest or skip right to the conclusion (or to the tweet summary). Highest praise goes to encryption; ire to data integrity.
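The "ire to data integrity" refers to APFS not checksumming user data. As a generic illustration of what per-block checksums buy (the general technique, not APFS or ZFS internals), consider this short Python sketch.

```python
# Per-block checksumming: store a checksum alongside each block and verify it
# on read, so silent corruption ("bit rot") is detected instead of being
# silently returned to the application.

import zlib

def write_block(data):
    return data, zlib.crc32(data)          # checksum would live in metadata

def read_block(data, stored_crc):
    if zlib.crc32(data) != stored_crc:
        raise IOError("checksum mismatch: block is corrupt")
    return data

block, crc = write_block(b"important user data")
read_block(block, crc)                            # verifies cleanly

flipped = bytes([block[0] ^ 0x01]) + block[1:]    # simulate one flipped bit
try:
    read_block(flipped, crc)
except IOError as e:
    print(e)                                      # corruption is detected
```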
  • Limits and the Practical Usability of BSDs, a Big Data Prospective
Limits and the Practical Usability of BSDs, a Big Data Prospective
Predrag Punosevač [email protected]
The Auton Lab, Carnegie Mellon University – June 11, 2016

Thanks: Thanks to the organizers for this great meeting and for giving me the opportunity to speak.

Outline: Intro, Chronology, Chronology II, Genealogy Tree, General Limitations, Scientific Computing, Continuation, misc issues, NetBSD, OpenBSD, pf.conf and pfctl, OpenBSD cons, FreeBSD, TrueOS, TurnKey Appliance, FreeNAS, pfSense, DragonFly BSD, HAMMER, Dark Clouds, References

Intro
● Who am I?
● What is the Auton Lab?
● Why don't we just use SCS computing facilities?
● How did
  • 08-Filesystems.Pdf
ECE590-03 Enterprise Storage Architecture, Fall 2017 – File Systems – Tyler Bletsch, Duke University

The file system layer: user code issues open, read, write, seek, close, stat, mkdir, rmdir, unlink, ...; the kernel VFS layer sits above the file system drivers (ext4, fat, nfs, ...); below them, the disk driver issues read_block/write_block (and the NIC driver sends packets for network file systems); the block device could be a single HDD/SSD or a RAID.

High-level motivation
• Disks are dumb arrays of blocks.
• Need to allocate/deallocate (claim and free blocks).
• Need logical containers for blocks (allow delete, append, etc. on different buckets of data – files).
• Need to organize such containers (how to find data – directories).
• May want access control (restrict data access per user).
• May want to dangle additional features on top.
Result: file systems.

Disk file systems
All have the same goal: fulfill file system calls (open, seek, read, write, close, mkdir, etc.) and store the resulting data on a block device. The big (non-academic) file systems:
• FAT ("File Allocation Table"): primitive Microsoft filesystem for use on floppy disks and later adapted to hard drives. FAT32 (1996) is still in use (default file system for USB sticks, SD cards, etc.). Bad performance, poor recoverability on crash, but near-universal and easy for simple systems to implement.
• ext2, ext3, ext4: popular Linux file systems. Ext2 (1993) has an inode-based on-disk layout – much better scalability than FAT. Ext3 (2001) adds journaling – much better recoverability than FAT. Ext4 (2008) adds various smaller benefits.
• NTFS: current Microsoft filesystem (1993). Like ext3, adds journaling to provide better recoverability than FAT. More expressive metadata (e.g. Access Control Lists (ACLs)).
• HFS+: current Mac filesystem (1998).
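As a rough model of the VFS idea in the layer description above (one common interface dispatching to per-file-system drivers), here is a toy Python sketch; the driver classes and the longest-prefix mount lookup are illustrative, not kernel code.

```python
# Toy VFS: user-level calls go through one interface and are routed to
# whichever (toy) file-system driver backs the mount point.

class FileSystemDriver:
    def read(self, path): ...
    def write(self, path, data): ...

class Ext4Driver(FileSystemDriver):       # stand-in driver, stores data in a dict
    def __init__(self): self.files = {}
    def read(self, path): return self.files.get(path, b"")
    def write(self, path, data): self.files[path] = data

class VFS:
    def __init__(self):
        self.mounts = {}                  # mount point -> driver

    def mount(self, point, driver):
        self.mounts[point] = driver

    def _resolve(self, path):
        # Longest-prefix match picks the driver responsible for this path.
        point = max((m for m in self.mounts if path.startswith(m)), key=len)
        return self.mounts[point]

    def write(self, path, data):
        self._resolve(path).write(path, data)

    def read(self, path):
        return self._resolve(path).read(path)

vfs = VFS()
vfs.mount("/", Ext4Driver())
vfs.write("/home/user/notes.txt", b"hello")
print(vfs.read("/home/user/notes.txt"))   # b'hello'
```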