The File Systems Evolution
Recommended publications
-
Protocols: 0-9, A
3COM-AMP3 • 3COM-TSMUX • 3PC • 4CHAN • 58-CITY • 914C G • 9PFS • ABC-NEWS • ACAP • ACAS • ACCESSBUILDER • ACCESSNETWORK • ACCUWEATHER • ACP • ACR-NEMA • ACTIVE-DIRECTORY • ACTIVESYNC • ADCASH • ADDTHIS • ADOBE-CONNECT • ADWEEK • AED-512 • AFPOVERTCP • AGENTX • AIRBNB • AIRPLAY • ALIWANGWANG • ALLRECIPES • ALPES • AMANDA • AMAZON • AMEBA • AMAZON-INSTANT-VIDEO • AMAZON-WEB-SERVICES • AMERICAN-EXPRESS • AMINET • AN • ANCESTRY-COM • ANDROID-UPDATES • ANET • ANSANOTIFY • ANSATRADER • ANY-HOST-INTERNAL • AODV • AOL-MESSENGER • AOL-MESSENGER-AUDIO • AOL-MESSENGER-FT • AOL-MESSENGER-VIDEO • AOL-PROTOCOL • APC-POWERCHUTE • APERTUS-LDP • APPLEJUICE • APPLE-APP-STORE • APPLE-IOS-UPDATES • APPLE-REMOTE-DESKTOP • APPLE-SERVICES • APPLE-TV-UPDATES • APPLEQTC • APPLEQTCSRVR • APPLIX • ARCISDMS, -
Oracle® ZFS Storage Appliance Security Guide, Release OS8.6.x
Part No: E76480-01, September 2016. Copyright © 2014, 2016, Oracle and/or its affiliates. All rights reserved. -
The AliEn system, status and perspectives
P. Buncic (Institut für Kernphysik, August-Euler-Str. 6, D-60486 Frankfurt, Germany, and CERN, 1211 Geneva 23, Switzerland), A. J. Peters, P. Saiz (CERN, 1211 Geneva 23, Switzerland). Computing in High Energy and Nuclear Physics, 24-28 March 2003, La Jolla, California. Paper MOAT004; eprint cs.DC/0306067.
In preparation of the experiment at CERN's Large Hadron Collider (LHC), the ALICE collaboration has developed AliEn, a production environment that implements several components of the Grid paradigm needed to simulate, reconstruct and analyse data in a distributed way. Thanks to AliEn, the computing resources of a Virtual Organization can be seen and used as a single entity – any available node can execute jobs and access distributed datasets in a fully transparent way, wherever in the world a file or node might be. The system is built around Open Source components, uses the Web Services model and standard network protocols to implement the computing platform that is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. Several other HEP experiments as well as medical projects (EU MammoGRID, INFN GP-CALMA) have expressed their interest in AliEn or some components of it. As progress is made in the definition of Grid standards and interoperability, our aim is to interface AliEn to emerging products from both Europe and the US. In particular, it is our intention to make AliEn services compatible with the Open Grid Services Architecture (OGSA). The aim of this paper is to present the current AliEn architecture and outline its future developments in the light of emerging standards. -
Persistent 9P Sessions for Plan 9
Gorka Guardiola, [email protected]; Russ Cox, [email protected]; Eric Van Hensbergen, [email protected]

ABSTRACT
Traditionally, Plan 9 [5] runs mainly on local networks, where lost connections are rare. As a result, most programs, including the kernel, do not bother to plan for their file server connections to fail. These programs must be restarted when a connection does fail. If the kernel's connection to the root file server fails, the machine must be rebooted. This approach suffices only because lost connections are rare. Across long distance networks, where connection failures are more common, it becomes woefully inadequate. To address this problem, we wrote a program called recover, which proxies a 9P session on behalf of a client and takes care of redialing the remote server and reestablishing connection state as necessary, hiding network failures from the client. This paper presents the design and implementation of recover, along with performance benchmarks on Plan 9 and on Linux.

1. Introduction
Plan 9 is a distributed system developed at Bell Labs [5]. Resources in Plan 9 are presented as synthetic file systems served to clients via 9P, a simple file protocol. Unlike file protocols such as NFS, 9P is stateful: per-connection state such as which files are opened by which clients is maintained by servers. Maintaining per-connection state allows 9P to be used for resources with sophisticated access control policies, such as exclusive-use lock files and chat session multiplexers. It also makes servers easier to implement, since they can forget about file ids once a connection is lost.
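The idea behind recover travels beyond Plan 9. As a hedged illustration, the Python sketch below proxies a byte-stream session and redials the server on failure, replaying recorded setup traffic so the client never sees the outage; the addresses are invented, and replaying every prior request is a deliberate simplification of how recover handles 9P state.

```python
# Minimal sketch of a recover-style proxy. It is NOT recover itself: real
# 9P messages are length-prefixed and recover replays only the state-
# establishing requests (version/attach/walk/open), while this toy records
# and replays raw request blobs. Addresses are made-up examples.
import socket
import time

LISTEN = ("127.0.0.1", 5640)   # where the local client dials
REMOTE = ("example.org", 564)  # the remote file server (hypothetical)

def dial_with_retry(addr, delay=1.0):
    """Redial the remote server until it answers."""
    while True:
        try:
            s = socket.create_connection(addr, timeout=10)
            s.settimeout(None)
            return s
        except OSError:
            time.sleep(delay)  # back off, then try again

def proxy(client):
    remote = dial_with_retry(REMOTE)
    history = []  # requests to replay after a redial
    while True:
        request = client.recv(65536)
        if not request:
            return  # the local client hung up; nothing left to hide
        history.append(request)
        while True:
            try:
                remote.sendall(request)
                reply = remote.recv(65536)
                if not reply:
                    raise OSError("server closed connection")
                break
            except OSError:
                # Network failure: redial and re-establish connection
                # state, so the client never notices the outage.
                remote = dial_with_retry(REMOTE)
                for old in history[:-1]:
                    remote.sendall(old)
                    remote.recv(65536)  # discard replayed replies
        client.sendall(reply)

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(LISTEN)
srv.listen(1)
conn, _ = srv.accept()
proxy(conn)
```
-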
Andrew File System (AFS) Google File System February 5, 2004
Advanced Topics in Computer Systems, CS262B. Prof. Eric A. Brewer.

I. AFS
Goal: large-scale campus-wide file system (5000 nodes)
o must be scalable, limit work of core servers
o good performance
o meet FS consistency requirements (?)
o manageable system admin (despite scale)
400 users in the “prototype” -- a great reality check (makes the conclusions meaningful)
o most applications work w/o relinking or recompiling
Clients:
o user-level process, Venus, that handles local caching, + FS interposition to catch all requests
o interaction with servers only on file open/close (implies whole-file caching)
o always check cache copy on open() (in prototype)
Vice (servers):
o server core is trusted; called “Vice”
o servers have one process per active client
o shared data among processes only via file system (!)
o lock process serializes and manages all lock/unlock requests
o read-only replication of namespace (centralized updates with slow propagation)
o prototype supported about 20 active clients per server, goal was >50
Revised client cache:
o keep data cache on disk, metadata cache in memory
o still whole-file caching, changes written back only on close
o directory updates are write-through, but cached locally for reads
o instead of a check on open(), assume valid unless you get an invalidation callback (server must invalidate all copies before committing an update); a toy model of this scheme follows these notes
o allows name translation to be local (since you can now avoid a round-trip for each step of the path)
Revised servers:
o move
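The callback scheme in the revised client cache is worth seeing in miniature. The toy Python model below (hypothetical classes, not Andrew's actual code) shows why open() needs no server round trip while a callback is held, and why the server must break callbacks on other clients before committing an update.

```python
# Toy model of AFS-style callback invalidation: a client assumes its cached
# whole-file copy is valid until the server breaks the callback.

class Server:
    def __init__(self):
        self.files = {}       # path -> contents
        self.callbacks = {}   # path -> set of clients holding a callback

    def fetch(self, client, path):
        # Hand out the file and register a callback promise for this client.
        self.callbacks.setdefault(path, set()).add(client)
        return self.files.get(path, "")

    def store(self, client, path, data):
        # Break callbacks *before* committing, so no client trusts stale data.
        for other in self.callbacks.get(path, set()) - {client}:
            other.invalidate(path)
        self.callbacks[path] = {client}
        self.files[path] = data

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}  # path -> contents; valid until invalidated

    def invalidate(self, path):
        self.cache.pop(path, None)

    def open(self, path):
        # No round trip while a callback is held, unlike the prototype,
        # which validated the cached copy on every open().
        if path not in self.cache:
            self.cache[path] = self.server.fetch(self, path)
        return self.cache[path]

    def close(self, path, data):
        # Whole-file semantics: changes are written back only on close.
        self.cache[path] = data
        self.server.store(self, path, data)

srv = Server()
a, b = Client(srv), Client(srv)
a.close("/afs/paper.txt", "draft 1")
assert b.open("/afs/paper.txt") == "draft 1"
a.close("/afs/paper.txt", "draft 2")          # breaks b's callback
assert b.open("/afs/paper.txt") == "draft 2"  # b re-fetches after invalidation
```
-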
Veritas NetBackup™ Enterprise Server and Server 8.0 - 8.x.x OS Software Compatibility List, created on September 08, 2021
An HTML version of this document is available at <https://download.veritas.com/resources/content/live/OSVC/100046000/100046611/en_US/nbu_80_scl.html> Copyright © 2021 Veritas Technologies LLC. All rights reserved. Veritas, the Veritas Logo, and NetBackup are trademarks or registered trademarks of Veritas Technologies LLC in the U.S. and other countries. Other names may be trademarks of their respective owners.

Introduction
This Software Compatibility List (SCL) document contains information for Veritas NetBackup 8.0 through 8.x.x. It covers NetBackup Server (which includes Enterprise Server and Server), Client, Bare Metal Restore (BMR), Clustered Master Server Compatibility and Storage Stacks, Deduplication, File System Compatibility, NetBackup OpsCenter, NetBackup Access Control (NBAC), SAN Media Server/SAN Client/FT Media Server, Virtual System Compatibility, and NetBackup Self Service Support. It is divided into bookmarks on the left that can be expanded. IPv6 and dual-stack environments are supported from NetBackup 8.1.1 onwards with a few limitations; refer to the technote for additional information: <http://www.veritas.com/docs/100041420> For information about certain NetBackup features, functionality, 3rd-party product integration, Veritas product integration, applications, databases, and OS platforms that Veritas intends to replace with newer and improved functionality, or in some cases, discontinue without replacement, please see the widget titled "NetBackup Future Platform and Feature Plans" at <https://sort.veritas.com/netbackup> See the reference article <https://www.veritas.com/docs/100040093> for links to all other NetBackup compatibility lists. -
LLNL Computation Directorate Annual Report (2014)
PRODUCTION TEAM
LLNL Associate Director for Computation: Dona L. Crawford
Deputy Associate Directors: James Brase, Trish Damkroger, John Grosh, and Michel McCoy
Scientific Editors: John Westlund and Ming Jiang
Art Director: Amy Henke
Production Editor: Deanna Willis
Writers: Andrea Baron, Rose Hansen, Caryn Meissner, Linda Null, Michelle Rubin, and Deanna Willis
Proofreader: Rose Hansen
Photographer: Lee Baker
3D Designer: Ryan Chen
Print Production: Charlie Arteago, Jr., and Monarch Print Copy and Design Solutions
LLNL-TR-668095. Prepared by LLNL under Contract DE-AC52-07NA27344.

CONTENTS
Message from the Associate Director
An Award-Winning Organization
CORAL Contract Awarded and Nonrecurring Engineering Begins
Preparing Codes for a Technology Transition
Flux: A Framework for Resource Management -
A Survey of Distributed File Systems
M. Satyanarayanan, Department of Computer Science, Carnegie Mellon University, February 1989

Abstract
This paper is a survey of the current state of the art in the design and implementation of distributed file systems. It consists of four major parts: an overview of background material, case studies of a number of contemporary file systems, identification of key design techniques, and an examination of current research issues. The systems surveyed are Sun NFS, Apollo Domain, Andrew, IBM AIX DS, AT&T RFS, and Sprite. The coverage of background material includes a taxonomy of file system issues, a brief history of distributed file systems, and a summary of empirical research on file properties. A comprehensive bibliography forms an important part of the paper.

Copyright (C) 1988, 1989 M. Satyanarayanan. The author was supported in the writing of this paper by the National Science Foundation (Contract No. CCR-8657907), Defense Advanced Research Projects Agency (Order No. 4976, Contract F33615-84-K-1520) and the IBM Corporation (Faculty Development Award). The views and conclusions in this document are those of the author and do not represent the official policies of the funding agencies or Carnegie Mellon University.

1. Introduction
The sharing of data in distributed systems is already common and will become pervasive as these systems grow in scale and importance. Each user in a distributed system is potentially a creator as well as a consumer of data. A user may wish to make his actions contingent upon information from a remote site, or may wish to update remote information. -
PyFilesystem Documentation, Release 0.5.0
Will McGugan, Aug 09, 2017

Contents
1 Guide
1.1 Introduction
1.2 Getting Started
1.3 Concepts
1.4 Opening Filesystems
1.5 Filesystem Interface
1.6 Filesystems
1.7 3rd-Party Filesystems
1.8 Exposing FS objects
1.9 Utility Modules
1.10 Command Line Applications
1.11 A Guide For Filesystem Implementers
1.12 Release Notes
2 Code Documentation
2.1 fs.appdirfs
2.2 fs.base
2.3 fs.browsewin
2.4 fs.contrib
2.5 fs.errors
2.6 fs.expose
2.7 fs.filelike
2.8 fs.ftpfs
2.9 fs.httpfs
2.10 fs.memoryfs
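To give a flavor of the interface this guide documents, here is a short, hedged example. It assumes the 0.5-era API (fs.memoryfs.MemoryFS with the shared listdir/makedir/open/getcontents methods); exact spellings should be checked against the module documentation listed above.

```python
# A taste of the PyFilesystem idea, assuming the 0.5-era API: every backend
# (memory, FTP, HTTP, ...) exposes the same file-system methods, so code
# written against one FS object works unchanged against another.
from fs.memoryfs import MemoryFS

def describe(a_fs):
    """Works on any FS object, whatever the backing store."""
    for path in a_fs.listdir("/"):
        print(path)

mem = MemoryFS()
mem.makedir("projects")
f = mem.open("projects/notes.txt", "w")
f.write("backend-agnostic I/O")
f.close()
print(mem.getcontents("projects/notes.txt"))  # -> backend-agnostic I/O
describe(mem)                                 # -> projects
```
-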
Virtfs—A Virtualization Aware File System Pass-Through
VirtFS—A virtualization aware File System pass-through Venkateswararao Jujjuri Eric Van Hensbergen Anthony Liguori IBM Linux Technology Center IBM Research Austin IBM Linux Technology Center [email protected] [email protected] [email protected] Badari Pulavarty IBM Linux Technology Center [email protected] Abstract operations into block device operations and then again into host file system operations. This paper describes the design and implementation of In addition to performance improvements over a tradi- a paravirtualized file system interface for Linux in the tional virtual block device, exposing guest file system KVM environment. Today’s solution of sharing host activity to the hypervisor provides greater insight to the files on the guest through generic network file systems hypervisor about the workload the guest is running. This like NFS and CIFS suffer from major performance and allows the hypervisor to make more intelligent decisions feature deficiencies as these protocols are not designed with respect to I/O caching and creates new opportuni- or optimized for virtualization. To address the needs of ties for hypervisor-based services like de-duplification. the virtualization paradigm, in this paper we are intro- ducing a new paravirtualized file system called VirtFS. In Section 2 of this paper, we explore more details about This new file system is currently under development and the motivating factors for paravirtualizing the file sys- is being built using QEMU, KVM, VirtIO technologies tem layer. In Section 3, we introduce the VirtFS design and 9P2000.L protocol. including an overview of the 9P protocol, which VirtFS is based on, along with a set of extensions introduced for greater Linux guest compatibility. -
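To make the pass-through concrete: QEMU, on which VirtFS is built, exposes a host directory to a guest over virtio-9p via its -virtfs shorthand. The Python launcher below is only an illustrative sketch: the image name, share path, and mount tag are invented, but the -virtfs option syntax and the guest-side 9p mount follow QEMU's documented usage.

```python
# Illustrative sketch: launching a KVM guest with a VirtFS (virtio-9p)
# share using QEMU's -virtfs shorthand. guest.img, /srv/share, and the
# mount tag "hostshare" are invented examples.
import subprocess

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "2048",
    "-drive", "file=guest.img,if=virtio",
    # Export host directory /srv/share to the guest over 9P2000.L.
    # security_model=mapped keeps guest ownership/permissions in xattrs,
    # so QEMU need not run privileged (passthrough is the alternative).
    "-virtfs", "local,path=/srv/share,mount_tag=hostshare,security_model=mapped",
]
subprocess.run(cmd, check=True)

# Inside the guest, the share is then mounted by its tag:
#   mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt
```
-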
The Influence of Scale on Distributed File System Design
IEEE Transactions on Software Engineering, Vol. 18, No. 1, January 1992. Mahadev Satyanarayanan, Member, IEEE

Abstract
Scale should be recognized as a primary factor influencing the architecture and implementation of distributed systems. This paper uses Andrew and Coda, distributed file systems built at Carnegie Mellon University, to validate this proposition. Performance, operability, and security are dominant considerations in the design of these systems. Availability is a further consideration in the design of Coda. Client caching, bulk data transfer, token-based mutual authentication, and hierarchical organization of the protection domain have emerged as mechanisms that enhance scalability. The separation of concerns made possible by functional specialization has also proved valuable in scaling. Heterogeneity is an important by-product of growth, but the mechanisms available to cope with it are rudimentary. Physical separation of clients and servers turns out to be a critical requirement for scalability.

…into autonomous or semi-autonomous organizations for management purposes. Hence a distributed system that has many users or nodes will also span many organizations. Regardless of the specific metric of scale, the designs of distributed systems that scale well are fundamentally different from less scalable designs. In this paper we describe the lessons we have learned about scalability from the Andrew File System and the Coda File System. These systems are particularly appropriate for our discussion, because they have been designed for growth to thousands of nodes and users. We focus on distributed file systems, because they are the most widely used kind of -
Comparative Analysis of Distributed and Parallel File Systems' Internal Techniques
Viacheslav Dubeyko

Content
1 TERMINOLOGY AND ABBREVIATIONS
2 INTRODUCTION
3 COMPARATIVE ANALYSIS METHODOLOGY
4 FILE SYSTEM FEATURES CLASSIFICATION
4.1 Distributed File Systems
4.1.1 HDFS
4.1.2 GFS (Google File System)
4.1.3 InterMezzo
4.1.4 CodA
4.1.5 Ceph
4.1.6 DDFS