Lustre: the Intergalactic File System for the National Labs?


Peter J. Braam, [email protected], http://www.clusterfilesystems.com
Cluster File Systems, Inc., 6/10/2001

Cluster File Systems, Inc.
- Goals
  - Consulting & development
  - Storage and file systems
  - Open source
  - Extreme level of expertise
- Leading
  - InterMezzo: high-availability file system
  - Lustre: next-generation cluster file system
- Important role in Coda, UDF and Ext3 for Linux

Partners
- CMU
- National Labs
- Intel

Talk overview
- Trends
- Next generation data centers
- Key issues in cluster file systems
- Lustre
- Discussion

Trends

Hot animals
- NAS: cheap servers deliver a lot of features
- NFS v4: finally, NFS is getting it right (security, concurrency)
- DAFS: high-level storage protocol over a VI storage network
- Open source OS: best-of-breed file systems (XFS, JFS, Reiser)

DAFS / NFS v4
(Diagram) A DAFS client/server pair provides a high-level, fast and efficient storage protocol; an NFS v4 client/server pair provides concurrency control with notifications to clients.

Key question
- How to use the parts...

Next generation data centers
(Diagram) Clients reach Object Storage Targets across a Storage Area Network (FC, GigE, IB) for data transfers; cluster control nodes hold the security and resource databases and handle access control, coherence and management; storage management sits alongside the targets.

Orders of magnitude
- Clients (aka compute servers): 10,000s
- Storage controllers: 1,000s, controlling PBs of storage (1 PB = 10^15 bytes)
- Cluster control nodes: 10s
- Aggregate bandwidth: 100s of GB/sec

Applications
- Scientific computing
- Bioinformatics
- Rich media
- Entire ISPs

Key issues

Scalability
- I/O throughput: how to avoid bottlenecks
- Metadata scalability: how can 10,000s of nodes work on files in the same folder
- Cluster recovery: if something fails, how can recovery happen transparently
- Management: adding, removing and replacing systems; data migration & backup

Features
- The basics: recovery, management, security
- The desired:
  - Gene computations on storage controllers
  - Data mining for free
  - Content-based security
  - ...
- The obstacle: an almost 30-year-old, pervasive block device protocol

Look back
- Andrew Project at CMU
  - 80s: file servers with 10,000 clients (the CMU campus)
  - Key question: how to reduce the footprint of the client on the server
  - By 1988 the entire campus was on AFS
- Lustre
  - Scalable clusters?
  - How to reduce the cluster footprint of shared resources (scalability)
  - How to subdivide bottlenecked resources (parallelism)

Lustre: Intelligent Object Storage
http://www.lustre.org

What is Object Based Storage?
- Object Based Storage Device
  - More intelligent than a block device
  - Speaks storage at the "inode level": create, unlink, read, write, getattr, setattr
  - Iterators, security, almost arbitrary processing
  - (A minimal sketch of such a method table appears after the project-history slide below.)
- So...
  - The protocol allocates physical blocks; there are no names for files
- Requires
  - Management & security infrastructure

Project history
- Started between CMU, Seagate and Stelias Computing
  - Another road to NASD-style storage
  - NASD, now at Panasas, originated many ideas
- Los Alamos
  - More research
  - Nearly built little object storage controllers
  - Currently looking at genomics applications
- Sandia, Tri-Labs
  - Can Lustre meet the SGS-FS requirements?
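To make the "speak storage at inode level" idea concrete, here is a minimal, self-contained sketch of an object storage device exposing a method table over objects instead of block addresses. All names (struct obd_ops, the in-memory mem_* driver, the table sizes) are assumptions made for this note, not the real Lustre/OBD interfaces:

```c
/* Sketch only: an object storage device as a method table over objects. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define MAX_OBJECTS 16
#define OBJ_SIZE    4096

struct obd_attr { uint64_t size; uint32_t mode; };

struct obdo {                       /* one "object": attributes plus data */
    int             in_use;
    struct obd_attr attr;
    char            data[OBJ_SIZE];
};

/* The method table a driver would export; a real driver would also offer
 * unlink, setattr, iterators, security hooks, and so on. */
struct obd_ops {
    int (*create)(uint64_t *out_id);
    int (*write)(uint64_t id, const void *buf, size_t len, uint64_t off);
    int (*read)(uint64_t id, void *buf, size_t len, uint64_t off);
    int (*getattr)(uint64_t id, struct obd_attr *attr);
};

/* Trivial in-memory "direct driver" standing in for ext2/XFS-backed storage. */
static struct obdo objects[MAX_OBJECTS];

static int mem_create(uint64_t *out_id) {
    for (uint64_t i = 0; i < MAX_OBJECTS; i++)
        if (!objects[i].in_use) {
            objects[i].in_use = 1;
            objects[i].attr.size = 0;
            *out_id = i;
            return 0;
        }
    return -1;                      /* no free objects */
}

static int mem_write(uint64_t id, const void *buf, size_t len, uint64_t off) {
    if (id >= MAX_OBJECTS || !objects[id].in_use || off + len > OBJ_SIZE)
        return -1;
    memcpy(objects[id].data + off, buf, len);
    if (off + len > objects[id].attr.size)
        objects[id].attr.size = off + len;      /* allocation stays inside the OSD */
    return 0;
}

static int mem_read(uint64_t id, void *buf, size_t len, uint64_t off) {
    if (id >= MAX_OBJECTS || !objects[id].in_use || off + len > OBJ_SIZE)
        return -1;
    memcpy(buf, objects[id].data + off, len);
    return 0;
}

static int mem_getattr(uint64_t id, struct obd_attr *attr) {
    if (id >= MAX_OBJECTS || !objects[id].in_use)
        return -1;
    *attr = objects[id].attr;
    return 0;
}

static struct obd_ops mem_ops = { mem_create, mem_write, mem_read, mem_getattr };

int main(void) {
    uint64_t id;
    char buf[6] = {0};
    struct obd_attr a;
    mem_ops.create(&id);
    mem_ops.write(id, "hello", 5, 0);
    mem_ops.read(id, buf, 5, 0);
    mem_ops.getattr(id, &a);
    printf("object %llu: '%s', size %llu\n",
           (unsigned long long)id, buf, (unsigned long long)a.size);
    return 0;
}
```

A real direct driver would sit on ext2 or XFS allocation rather than a fixed in-memory table, and a target would export the same methods over the storage network; the point of the sketch is only that clients never name blocks, only objects.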
Components of OB Storage
- Storage object device drivers
  - Class drivers: attach a driver to an interface
  - Targets, clients: remote access
  - Direct drivers: manage physical storage
  - Logical drivers: intelligence & storage management
- Object storage applications
  - (Cluster) file systems
  - Advanced storage: parallel I/O, snapshots
  - Specialized apps: caches, databases, file servers

Object File System
(Diagram) Contrasts a monolithic file system, in which the buffer cache and file system code do all allocation and all persistence, with an object file system: a page cache plus an object file system that keeps file/directory data, lookup and attribute get/set, and delegates the remainder to the object-based storage device through object device methods.

Accessing objects
- Session: connect to the object storage target and present a security token
- Mention the object id: objects have a unique (group, id)
- Perform the operation
- That is what the object file system does

Objects may be files, or not...
- Common case: an object, like an inode, represents a file
- An object can also:
  - Represent a stripe (RAID)
  - Bind an (MPI) File_View
  - Redirect to other objects

Snapshots as a logical module
(Diagram) The object file system presents multiple views of the file system, e.g. /current/ and /yesterday/. A snapshot/versioning logical driver holds versioned objects and follows redirectors: objectX redirects to preserved copies objY (7am) and objZ (9am). Underneath, a direct object driver gives access to the raw objects. (A toy sketch of this layering appears after this group of slides.)

System Interface
- Modules
  - Load the kernel modules to get drivers of a certain type
  - Name devices to be of a certain type
  - Build stacks of devices with assigned types
- For example:
  - insmod obd_xfs ; obdcontrol dev=obd1,type=xfs
  - insmod obd_snap ; obdcontrol current=obd2,old=obd3,driver=obd1
  - insmod obdfs ; mount -t obdfs -o dev=obd3 /mnt/old

Clustered object-based file systems
(Diagram) Clustered object-based file systems on host A and host B each use an OBD client driver (one of type RPC, one of type VIA) over fast storage networking to shared object storage: OBD targets of type RPC and type VIA layered on a direct OBD.

Storage target implementations
- Classical storage targets
  - Controller: expensive, redundant, proprietary
  - EMC: as sophisticated & feature-rich as block storage can get
  - A bunch of disks
- Lustre target
  - A bunch of disks
  - A powerful (SMP, multiple busses) commodity PC
  - Programmable/secure
  - Could be done on disk drives, but...

Inside the storage controller...
(Diagram) A dual-path FC/SCSI JBOD sits behind two SMP Linux controllers, each with RAID, on-controller logic and networking; the pair provides load balancing and failover, with multipath, load-balanced storage network interfaces.

Objects in clusters...

Lustre clusters
(Diagram) Lustre clients (10,000s) talk to Lustre cluster control nodes (10s) to locate devices and objects and to perform metadata transactions, and to object storage targets over a SAN for low-latency client/storage interactions and third-party file & object I/O. TCP/IP file and security protocols use GSS-API-compatible key/token management. InterMezzo clients provide remote access with a private namespace (extreme security) and persistent client caches (mobility). HSM backends give a Lustre/DMAPI solution.
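The snapshot slide above describes a logical driver that redirects reads of an old view to preserved copies of objects while passing the current view straight through to the direct driver. The following toy, self-contained sketch illustrates that layering; the function names, the preserve-everything-at-snapshot-time policy and the fixed object table are assumptions for illustration, not Lustre's actual snapshot driver:

```c
/* Sketch only: a "snapshot" logical driver stacked on a direct driver. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NOBJ 8

/* "Direct driver": objects are just short strings here. */
static char store[NOBJ][64];

static int direct_read(uint64_t id, char *buf, size_t len) {
    if (id >= NOBJ) return -1;
    strncpy(buf, store[id], len);
    return 0;
}
static int direct_write(uint64_t id, const char *buf) {
    if (id >= NOBJ) return -1;
    strncpy(store[id], buf, sizeof store[id] - 1);
    return 0;
}

/* Snapshot logical driver: a redirect table captured at snapshot time. */
static uint64_t redirect[NOBJ];     /* old-view object -> preserved copy */
static int have_snapshot;

static void snap_take(void) {
    /* Toy policy: preserve every live object now; copies live in the upper half. */
    for (uint64_t i = 0; i < NOBJ / 2; i++) {
        redirect[i] = NOBJ / 2 + i;
        direct_write(redirect[i], store[i]);
    }
    have_snapshot = 1;
}

/* Read either the current view (old_view = 0) or the old view (old_view = 1). */
static int snap_read(int old_view, uint64_t id, char *buf, size_t len) {
    if (old_view && have_snapshot)
        return direct_read(redirect[id], buf, len);   /* follow the redirector */
    return direct_read(id, buf, len);                 /* pass through */
}

int main(void) {
    char buf[64];
    direct_write(0, "7am version");
    snap_take();
    direct_write(0, "9am version");
    snap_read(0, 0, buf, sizeof buf); printf("current:   %s\n", buf);
    snap_read(1, 0, buf, sizeof buf); printf("yesterday: %s\n", buf);
    return 0;
}
```

In the deck's own terms this is the stack built by loading obd_snap above a direct driver and mounting the old device, as in the System Interface example above.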
Cluster control nodes
- Database of references to objects
- E.g. the Lustre file system: directory data points to objects that contain stripes/extents of files (a small worked example of mapping a file offset onto stripe objects appears at the end of this section)
- More generally:
  - Use a database of references to objects
  - Write object applications that access the objects directly
  - LANL asked for gene processing on the controllers

Examples of logical modules
- Tri-Lab/NSA: SGS file system (see next slide)
  - Storage management, security
  - Parallel I/O for scientific computation
- Other requests:
  - Data mining while the target is idle
  - LANL: gene sequencing in object modules
  - Rich media industry: prioritize video streams

SGS File System
(Diagram) On the client node, an MPI-IO interface (MPI file types, collective & shared I/O) and a POSIX interface sit above an ADIO-OBD adaptor and the Lustre client FS (metadata, sync, auditing), which uses object storage client networking. Across the SAN, the storage target node runs object storage target networking and object layering (synchronization, collective I/O, content-based authorization, data migration, aggregation) over XFS-based OSDs.

Other applications...
- Genomics: can we reproduce the previous slide?
- Data mining: can we exploit idle cycles and disk geometry?
- Rich media: what storage networking helps streaming?

Why Lustre...
- It's fun, it's new
- Infrastructure for storage-target-based computing
- Storage management built from components, not monolithic
  - File system snapshots, RAID, backup, hot migration, resizing
  - Much simpler
- File system:
  - A clustering FS that is considerably simpler and more scalable
  - But close to NFS v4 and DAFS in several critical ways

And finally, the reality: what exists...
- At http://www.lustre.org (everything GPL'd)
- Prototypes:
  - Direct driver for ext2 objects, snapshot logical driver
  - Management infrastructure, object file system
- Current happenings:
  - Collaboration with the Intel Enterprise Architecture Lab: they are building Lustre storage networking (DAFS, RDMA, TUX)
  - The grand scheme of things has been planned and is moving
- Also on the WWW:
  - OBD storage specification
  - Lustre SGS file system implementation plan

Linux clusters

Clusters: purpose
Require:
- A scalable, almost single system image
- Fail-over capability
- Load-balanced redundant services
- Smooth administration

Ultimate goal
- Provide generic components, open source
- Inspiration: VMS VAX Clusters
- New:
  - Scalable (100,000s of nodes)
  - Modular
- Need distributed, cluster & parallel FSs: InterMezzo, GFS/Lustre, POBIO-FS

Technology overview
Modularized VAX cluster architecture (Tweedie):

  Core          | Support      | Clients
  Transition    | Cluster db   | Distr. Computing
  Integrity     | Quorum       | Cluster Admin/Apps
  Link Layer    | Barrier Svc  | Cluster FS & LVM
  Channel Layer | Event system | DLM

Events
- Cluster transition:
  - Whenever connectivity changes
  - Start by electing a "cluster controller"
  - Only merge fully connected sub-clusters
  - Cluster id counts "incarnations"
- Barriers: distributed synchronization points
- Partial implementations available: Ensemble, KimberLite, IBM-DLM, Compaq Cluster Mgr

Scalability - e.g.
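Returning to the striping idea from the "Cluster control nodes" and "Objects may be files, or not" slides, the following self-contained sketch works through how a file offset maps onto the object holding the corresponding stripe. Round-robin (RAID-0 style) striping and the struct/function names are assumptions for illustration, not Lustre's actual layout code:

```c
/* Sketch only: mapping a file offset onto striped objects. */
#include <stdint.h>
#include <stdio.h>

struct stripe_layout {
    uint32_t stripe_count;   /* number of objects the file is striped over */
    uint32_t stripe_size;    /* bytes written to one object before moving on */
};

struct stripe_loc {
    uint32_t obj_index;      /* which object (hence which storage target) */
    uint64_t obj_offset;     /* offset within that object */
};

static struct stripe_loc map_offset(struct stripe_layout l, uint64_t off) {
    uint64_t stripe_nr = off / l.stripe_size;          /* global stripe number */
    struct stripe_loc loc;
    loc.obj_index  = (uint32_t)(stripe_nr % l.stripe_count);
    loc.obj_offset = (stripe_nr / l.stripe_count) * l.stripe_size
                   + off % l.stripe_size;
    return loc;
}

int main(void) {
    struct stripe_layout l = { .stripe_count = 4, .stripe_size = 65536 };
    uint64_t offsets[] = { 0, 65536, 262144, 300000 };
    for (int i = 0; i < 4; i++) {
        struct stripe_loc loc = map_offset(l, offsets[i]);
        printf("file offset %8llu -> object %u, object offset %llu\n",
               (unsigned long long)offsets[i], loc.obj_index,
               (unsigned long long)loc.obj_offset);
    }
    return 0;
}
```

With a 64 KB stripe size over four objects, offset 300000 lands in object 0 at object offset 103392: a client can read or write that extent directly on the owning storage target, while the cluster control nodes only hold the references that make up the file's layout.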