CRC File Systems Storage Presentation


HPC File Systems and Storage
Irena Johnson, University of Notre Dame Center for Research Computing

HPC (High Performance Computing)
• Aggregating computer power for higher performance than that of a typical desktop computer or workstation, for solving large problems in science, engineering, and business
• Large systems perform calculations
• Data access is critical

An HPC system includes:
• Compute nodes
• A head node
• A file system
• Storage
• Networking

File
A collection of data/information:
• Document
• Picture
• Audio or video stream
• Application
• Other collection of data

Metadata
The information that describes the data contained in files:
• Size
• Date created
• Date modified
• Location on disk
• Permissions (who can view/modify your file)

File System
Definition from TLDP (The Linux Documentation Project): "On a UNIX system, everything is a file; if something is not a file, it is a process*."

Many Types of File Systems
• Not all file systems are equal
• Designed for different uses
• Data is organized in different ways
• Some are faster than others
• Some are more robust/reliable
• Some support large storage drives

*Process - a task (a process is started when a program is initiated)

UNIX/Linux File System
• Hierarchical file structure
• Tree-structured file system (an upside-down tree)
• Everything starts from the root directory / and expands into sub-directories, and so forth
• Unlike Windows, which uses 'drives'
(Slide diagram: / branches into /bin/, /boot/, /dev/, /etc/, /home/, /lib/, /media/, /mnt/, /opt/, /proc/; /home/ holds user directories such as /bea/, /ed/, /jen/)

Serial File System (Traditional)
• A single server controls the users and data
• Can be faster for one user
• No redundancy
• Simple

Distributed / Parallel File System
• Data is spread out across many systems on a network
• Single shared global namespace
• Supports multiple users (can be distributed)
• Supports high bandwidth
• More storage than on a single system
• Fault tolerant
• Reliable
• Scalable
• Complex

Parallel File System
(Slide diagram: clients issue parallel reads/writes to the storage devices, while metadata servers handle metadata and management)
• Breaks up a data set and distributes (stripes) the blocks to multiple storage drives (local and/or remote servers); a toy sketch of striping follows this list
• Users do not need to know the physical location of the data blocks to retrieve a file
• Data access is done via a global namespace
• A metadata server stores the file name, location, owner, and access permissions
• Reads and writes data to distributed storage devices using multiple I/O paths concurrently
• Capacity and bandwidth can be scaled
• Storage: high availability, mirroring, replication, snapshots
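To illustrate the striping idea, here is a minimal sketch. The names (stripe_file, read_file) are hypothetical, and in-memory lists stand in for storage servers; this is not how any particular parallel file system is implemented.

```python
# Toy striping demo: a file's blocks are placed round-robin across several
# storage targets, and a metadata record maps the file back to its blocks.
BLOCK_SIZE = 4  # bytes per block; real systems use far larger blocks

def stripe_file(name, data, targets, metadata):
    """Split data into blocks and spread them across targets round-robin."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    locations = []
    for i, block in enumerate(blocks):
        target = i % len(targets)           # round-robin placement
        offset = len(targets[target])       # position on that target
        targets[target].append(block)
        locations.append((target, offset))  # what a metadata server records
    metadata[name] = locations

def read_file(name, targets, metadata):
    """Reassemble a file from its striped blocks using only the metadata."""
    return b"".join(targets[t][o] for t, o in metadata[name])

targets = [[], [], []]  # three storage servers
metadata = {}           # stands in for the metadata server
stripe_file("demo.txt", b"hello parallel world", targets, metadata)
assert read_file("demo.txt", targets, metadata) == b"hello parallel world"
```

The point of the sketch is the division of labor: a client consults the metadata record only to learn where the blocks live, then reads from all storage targets concurrently.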
File Systems at CRC

AFS (Andrew File System)
• Developed in 1982 as part of the Andrew project at Carnegie Mellon University
• Named after Andrew Carnegie and Andrew Mellon
• Client-server architecture
• Federated file sharing
• Provides location independence
• Scalable
• Secure (Kerberos for authentication, and ACLs - access control lists - on directories for users and groups)
• Available for a wide range of heterogeneous systems: UNIX/Linux, Mac OS X, and Microsoft Windows

Panasas - High Performance Parallel Scratch File System (/scratch365)
• Parallel access to data
• Data is striped across multiple storage nodes, providing increased capacity and/or performance
• Concurrent reading and writing (scalable performance to individual files)
• Global namespace: all compute nodes accessing the storage see the same namespace (same name and pathname); management is done through one system only

Overview of CRC File Systems
• User's Home Directories ($HOME) - globally accessible home and project directories
  File system: AFS (crc.nd.edu), /afs/crc.nd.edu/user/first/netid
  Access: directly, using the open-source OpenAFS client
  Space available: 100GB - 2TB volume
  Bandwidth (approx.): up to 70-85 MB/sec per node; approximately 200 MB/sec aggregated using multiple nodes
• Group Directories
  File system: AFS (crc.nd.edu), /afs/crc.nd.edu/group/
  Access: directly, using the OpenAFS client
  Space available: 100GB - 2TB volume
• Pseudo-temporary scratch file system
  File system: Panasas High Performance Parallel scratch file system, /scratch365/netid
  Access: directly, using the Panasas proprietary PanFS client
  Space available: 500GB - 1TB
  Bandwidth (approx.): 70-90 MB/sec per node with a 1 Gb network
• Node-local temporary scratch file system
  File system: local disks, /scratch (a link to /tmp)
  Access: directly; shared with other users on the node
  Space available: R815 - 500GB; HP DL160 (d6copt) - 100GB; IBM/Lenovo nx360M4 - 400GB; daccssfe - 5TB RAID
  Bandwidth (approx.): R815 (H700 RAID controller) - 250-300 MB/sec; HP DL160 (d6copt) - 50-60 MB/sec; IBM/Lenovo - 90-100 MB/sec; daccssfe - 800-1,000 MB/sec

RAID - Redundant Array(s) of Inexpensive/Independent Disks
• Physical disks bound together with hardware or software
• Used to create larger filesystems out of standard drive technology
• Configurations optimize cost vs. capability

RAID Levels: 0, 1, 3, 4, 5, 6, 0+1, 1+0
• RAID 0 - striped (performance and capacity)
• RAID 1 - mirrored (read performance, fault tolerance (FT))
• RAID 5 - striped with distributed parity (performance, capacity, FT N+1); see the parity sketch after this list
• RAID 6 - striped with distributed parity (performance, capacity, FT N+2)
https://searchstorage.techtarget.com/definition/RAID
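How parity buys fault tolerance is easy to show. Here is a minimal sketch, assuming byte-string "blocks" and a hypothetical xor_blocks helper (not a real RAID implementation): the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
# Toy RAID 5-style parity: parity = XOR of all data blocks, so one
# missing block is recoverable by XOR-ing the remaining blocks and parity.
def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]  # data blocks on three drives
parity = xor_blocks(data)           # parity block on a fourth drive

# The drive holding data[1] fails: rebuild its block from the survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

RAID 6 extends the same idea with a second, independent parity calculation, which is why it survives two simultaneous drive failures (FT N+2).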
Data Storage
• How information is kept in a digital format that may be retrieved later
• Computers, laptops, tablets, smartphones, and other devices all store data
• Hard drive/disk, flash drive, SSD (solid-state drive), cloud
• Not the same as RAM (memory)
  * Hard drive - think long-term memory
  * RAM - think short-term memory

Data Storage Types
• File-based storage
• Block-based storage
• Object-based storage

File Storage
• Also called file-level or file-based storage
• You use file storage when you access documents or pictures saved in files on your computer
• Data is stored as a single piece of information inside a file, inside a directory
• A single path to the data
• Hierarchical in nature (a tree-structured system)
• Oldest type of storage
• Inexpensive
• Simple

Block Storage
• Breaks a file into individual blocks of data
• The blocks are stored as separate pieces of data
• No need for a file-folder structure, because each block of data has a unique address
• The smaller blocks of data are spread out to wherever storage is most efficient
• The storage system software pulls all the blocks back together to assemble the file when it is accessed
• The more data you need to store, the better block storage fits
• Used in storage-area network (SAN) environments, where data is stored in volumes (blocks)
• Data is divided into blocks (which can be of different sizes) and stored separately on hard drive(s)
• Consistent I/O performance, low-latency connectivity
• More expensive, complex
• Good for data that has to be frequently accessed and updated
• Usage examples: database storage; applications such as Java

Object Storage
• Also called object-based storage
• Files are broken into units called objects and spread out among hardware
• The objects are kept in a single repository, instead of being kept as files in directories or as blocks on servers
• The blocks of data that make up a file, together with its metadata, are kept in a storage pool
• A unique identifier is assigned to each object
• Cost efficient: you only pay for what you use
• Usage examples: big data, web applications, backup archives
• Good for data that doesn't need to be modified (just read)

File/Block/Object Storage Comparison
• File-based storage - transaction units: files; protocols: CIFS, NFS; metadata: file-system attributes; recommended for: shared file data; strength: simplified access and management of shared files
• Block-based storage - transaction units: blocks; protocols: SCSI, Fibre Channel, SATA; metadata: file-system attributes; recommended for: transactional data, frequently changing data; strength: high performance
• Object-based storage - transaction units: objects; protocols: web services (XML-based messaging); metadata: custom metadata; recommended for: static file data, cloud storage; strength: scalable, distributed access

SAN (Storage Area Network)
• A dedicated high-speed network that interconnects and shares pools of storage devices with multiple servers
• Each server accesses the shared storage as if it were directly attached to it
• Raw storage is treated as a pool of resources that can be centrally managed and allocated
• Highly scalable: capacity can be added as needed
• Disadvantages: cost and complexity
(Slide diagram: clients reach a metadata server and the storage pool over a dedicated storage network)

NAS (Network Attached Storage)
• A dedicated file storage device that provides file-based storage to nodes within the same network via an Ethernet connection
• A storage appliance, connected to a network switch
• Reliable, flexible
• Highly scalable network storage
• Speed
(Slide diagram: clients reach the NAS storage appliance over the network)

Panasas - Object-Based Storage Cluster
• Performance improves with scale - linear scalability
• Data protection improves with scale
• Scalable storage - easy to access, deploy, and manage

Panasas - ActiveStor
• Parallel scale-out NAS storage appliance
• Complete hardware and software storage solution
• Implements:
  - A parallel, object-based filesystem
  - A global namespace
  - Strict client cache coherency
  - Network Attached Storage (NAS): Panasas DirectFlow (pNFS, CIFS, NFS), an rpm package for Linux (Mac OS also supported)
• Scale-out NAS: serves parallel access to data (data is striped across multiple storage nodes, providing increased capacity and/or performance)
• Parallel file system: concurrent reading and writing (data for a single file is striped across multiple storage nodes to provide scalable performance to individual files)
• Global namespace: all compute nodes accessing the storage see the same namespace (same name and pathname); management is done through one system only
(Slide diagram: one node of the Panasas ActiveStor 16 architecture)
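To tie the object-based model behind both the Object Storage slides and Panasas to something concrete, here is a minimal sketch. The in-memory pool and the put_object/get_object helpers are hypothetical; real object stores layer replication, striping, and access control on top of the same idea.

```python
# Toy object store: a flat pool maps unique object IDs to data plus
# custom metadata; there are no directories and no visible block layout.
import uuid

pool = {}  # the storage pool: object ID -> (data, metadata)

def put_object(data, **metadata):
    """Store data with custom metadata; return the object's unique ID."""
    object_id = str(uuid.uuid4())
    pool[object_id] = (data, metadata)
    return object_id

def get_object(object_id):
    """Retrieve an object (data and metadata) by its unique identifier."""
    return pool[object_id]

oid = put_object(b"backup archive contents", owner="netid", project="demo")
data, meta = get_object(oid)
assert data == b"backup archive contents" and meta["owner"] == "netid"
```

Note the contrast with file storage: retrieval goes through the unique identifier rather than a hierarchical path, which is what lets the pool scale out across many storage nodes.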