Distributed Systems: Distributed File Systems Case Studies - Network File …
16 pages, PDF, 1,020 KB
Recommended publications
Cryptomator Documentation Release 1.5.0
Cryptomator Documentation, Release 1.5.0 (Cryptobot, Sep 15, 2021). Contents, Desktop:
1 Setup: 1.1 Windows, 1.2 macOS, 1.3 Linux
2 Getting Started
3 Adding Vaults: 3.1 Create a New Vault, 3.2 Open an Existing Vault
4 Accessing Vaults: 4.1 Unlocking a Vault, 4.2 Working with the Unlocked Vault, 4.3 Locking a vault
5 Password and Recovery Key: 5.1 Change Password, 5.2 Show Recovery Key, 5.3 Reset Password
6 Vault Mounting: 6.1 General Adapter Selection, 6.2 Options applicable to all Systems and Adapters, 6.3 WebDAV-specific options, 6.4 Dokany-specific options, 6.5 FUSE-specific options
7 Vault Management: 7.1 Remove Vaults, 7.2 Reorder Vaults, 7.3 Vault Options
8 Setup: 8.1 Google PlayStore …
A Versatile Persistent Caching Framework for File Systems Gopalan Sivathanu and Erez Zadok Stony Brook University
Technical Report FSL-05-05.
Abstract: We propose and evaluate an approach for decoupling persistent-cache management from general file system design. Several distributed file systems maintain a persistent cache of data to speed up accesses. Most of these file systems retain complete control over various aspects of cache management, such as granularity of caching, and policies for cache placement and eviction. Hard-coding cache management into the file system often results in sub-optimal performance as the clients of the file system are prevented from exploiting information about their workload in order to tune cache management. We introduce xCachefs, a framework that allows clients to transparently augment the cache management […]
[…] IDE RAID array that has an Ext2 file system can cache the recently-accessed data into a smaller but faster SCSI disk to improve performance. The same could be done for a local disk; it can be cached to a faster flash drive. The second problem with the persistent caching mechanisms of present distributed file systems is that they have a separate name space for the cache. Having the persistent cache directory structure as an exact replica of the source file system is useful even when xCachefs is not mounted. For example, if the cache has the same structure as the source, disconnected reads are possible directly from the cache, as in Coda [2]. In this paper we present a decoupled caching mechanism called xCachefs. With xCachefs, data can be cached from any file system to a faster file system.
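The excerpt above describes a cache on a faster file system that mirrors the source file system's directory structure. The sketch below illustrates that idea only; it is not xCachefs, and the mount points and file names are hypothetical.

```python
import shutil
from pathlib import Path

# Hypothetical locations: a slower "source" file system and a faster "cache" file system.
SOURCE_ROOT = Path("/mnt/slow_source")   # assumption: slower backing store (e.g., a RAID array)
CACHE_ROOT = Path("/mnt/fast_cache")     # assumption: faster local store (e.g., SSD/flash)

def cached_read(relative_path: str) -> bytes:
    """Read a file through a read-through cache that mirrors the source namespace.

    Because the cache keeps the same directory structure as the source, a cached
    copy can also be read directly even when the caching layer is not mounted,
    in the spirit of the namespace mirroring argued for in the paper.
    """
    src = SOURCE_ROOT / relative_path
    dst = CACHE_ROOT / relative_path

    # Serve from the cache if present and not older than the source copy.
    if dst.exists() and dst.stat().st_mtime >= src.stat().st_mtime:
        return dst.read_bytes()

    # Miss: copy the whole file into the cache (whole-file granularity is a
    # simplification here), then serve it.
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return dst.read_bytes()

if __name__ == "__main__":
    data = cached_read("reports/2005/fsl-05-05.pdf")  # hypothetical file
    print(len(data), "bytes")
```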
Cheat Sheet – Common Ports (PDF)
COMMON PORTS (packetlife.net) - TCP/UDP Port Numbers
7 Echo | 554 RTSP | 2745 Bagle.H | 6891-6901 Windows Live
19 Chargen | 546-547 DHCPv6 | 2967 Symantec AV | 6970 Quicktime
20-21 FTP | 560 rmonitor | 3050 Interbase DB | 7212 GhostSurf
22 SSH/SCP | 563 NNTP over SSL | 3074 XBOX Live | 7648-7649 CU-SeeMe
23 Telnet | 587 SMTP | 3124 HTTP Proxy | 8000 Internet Radio
25 SMTP | 591 FileMaker | 3127 MyDoom | 8080 HTTP Proxy
42 WINS Replication | 593 Microsoft DCOM | 3128 HTTP Proxy | 8086-8087 Kaspersky AV
43 WHOIS | 631 Internet Printing | 3222 GLBP | 8118 Privoxy
49 TACACS | 636 LDAP over SSL | 3260 iSCSI Target | 8200 VMware Server
53 DNS | 639 MSDP (PIM) | 3306 MySQL | 8500 Adobe ColdFusion
67-68 DHCP/BOOTP | 646 LDP (MPLS) | 3389 Terminal Server | 8767 TeamSpeak
69 TFTP | 691 MS Exchange | 3689 iTunes | 8866 Bagle.B
70 Gopher | 860 iSCSI | 3690 Subversion | 9100 HP JetDirect
79 Finger | 873 rsync | 3724 World of Warcraft | 9101-9103 Bacula
80 HTTP | 902 VMware Server | 3784-3785 Ventrilo | 9119 MXit
88 Kerberos | 989-990 FTP over SSL | 4333 mSQL | 9800 WebDAV
102 MS Exchange | 993 IMAP4 over SSL | 4444 Blaster | 9898 Dabber
110 POP3 | 995 POP3 over SSL | 4664 Google Desktop | 9988 Rbot/Spybot
113 Ident | 1025 Microsoft RPC | 4672 eMule | 9999 Urchin
119 NNTP (Usenet) | 1026-1029 Windows Messenger | 4899 Radmin | 10000 Webmin
123 NTP | 1080 SOCKS Proxy | 5000 UPnP | 10000 BackupExec
135 Microsoft RPC | 1080 MyDoom | 5001 Slingbox | 10113-10116 NetIQ
137-139 NetBIOS | 1194 OpenVPN | 5001 iperf | 11371 OpenPGP
143 IMAP4 | 1214 Kazaa | 5004-5005 RTP | 12035-12036 Second Life
161-162 SNMP | 1241 Nessus | 5050 Yahoo! Messenger | 12345 NetBus
177 XDMCP | 1311 Dell OpenManage | 5060 SIP | 13720-13721 …
Diplomarbeit: Kalenderstandards im Internet (diploma thesis: Calendar Standards on the Internet)
Diploma thesis "Kalenderstandards im Internet" (Calendar Standards on the Internet), submitted by Reinhard Fischer (degree programme code J151, matriculation number 9852961); diploma thesis at the Institute for Information Business, Vienna University of Economics and Business (Wirtschaftsuniversität Wien); field of study: business administration; examiner: Prof. DDr. Arno Scharl; supervising assistant: Dipl.-Ing. Mag. Dr. Albert Weichselbraun. Vienna, 20 August 2007.
Table of contents: List of figures; list of abbreviations.
1 Introduction: 1.1 Problem statement, 1.2 Content and approach
2 Standards for calendars on the Internet: 2.1 iCalendar and standards based on it (2.1.1 iCalendar and vCalendar, 2.1.2 Transport-Independent Interoperability Protocol (iTIP), 2.1.3 iCalendar Message-Based Interoperability Protocol (iMIP), 2.1.4 iCalendar over WebDAV (WebCAL), 2.1.5 Storage of Groupware Objects in WebDAV (GroupDAV), 2.1.6 Calendaring and Scheduling Extensions to WebDAV (CalDAV), 2.1.7 IETF Calendar Access Protocol (CAP)); 2.2 XML-based formats (2.2.1 XML iCalendar (xCal), 2.2.2 RDF Calendar (RDFiCal), 2.2.3 RDFa (RDF/A), 2.2.4 OWL-Time); 2.3 Microformats (hCalendar); 2.4 SyncML; 2.5 Other formats; 2.6 Summary
3 Open calendar applications on the Internet: 3.1 Servers (3.1.1 Citadel/UX, 3.1.2 Open-Xchange, 3.1.3 OpenGroupware.org, 3.1.4 Kolab2, 3.1.5 Other servers); 3.2 Clients (3.2.1 Mozilla Calendar Project, 3.2.2 KDE Kontact, 3.2.3 Novell Evolution, 3.2.4 OSAF Chandler, 3.2.5 Other open-source and closed-source clients …)
Andrew File System (AFS) Google File System February 5, 2004
Advanced Topics in Computer Systems, CS262B (Prof. Eric A. Brewer). Andrew File System (AFS), Google File System. February 5, 2004.
I. AFS
Goal: large-scale, campus-wide file system (5000 nodes)
o must be scalable; limit the work of core servers
o good performance
o meet FS consistency requirements (?)
o manageable system administration (despite scale)
400 users in the "prototype" -- a great reality check (makes the conclusions meaningful)
o most applications work without relinking or recompiling
Clients:
o a user-level process, Venus, handles local caching, plus FS interposition to catch all requests
o interaction with servers only on file open/close (implies whole-file caching)
o always check the cached copy on open() (in the prototype)
Vice (servers):
o the server core is trusted; it is called "Vice"
o servers have one process per active client
o shared data among processes only via the file system (!)
o a lock process serializes and manages all lock/unlock requests
o read-only replication of the namespace (centralized updates with slow propagation)
o the prototype supported about 20 active clients per server; the goal was >50
Revised client cache:
o keep the data cache on disk, the metadata cache in memory
o still whole-file caching; changes are written back only on close
o directory updates are write-through, but cached locally for reads
o instead of a check on open(), assume the cached copy is valid unless you get an invalidation callback (the server must invalidate all copies before committing an update)
o this allows name translation to be local (since you can now avoid a round trip for each step of the path)
Revised servers:
o move …
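The notes above contrast the prototype's check-on-open validation with the revised design's invalidation callbacks. A toy sketch of the callback scheme follows, with in-memory stand-ins for Venus and Vice; it is an illustration of the idea, not the real AFS code.

```python
class Server:
    """Stands in for a Vice server: holds file versions and registered callbacks."""
    def __init__(self):
        self.versions = {}    # path -> version number
        self.data = {}        # path -> contents
        self.callbacks = {}   # path -> set of clients holding a callback promise

    def fetch(self, client, path):
        # Whole-file fetch; the server also records a callback promise for the client.
        self.callbacks.setdefault(path, set()).add(client)
        return self.versions.get(path, 0), self.data.get(path, b"")

    def store(self, path, contents):
        # Before committing an update, break callbacks on every caching client.
        for client in self.callbacks.pop(path, set()):
            client.invalidate(path)
        self.versions[path] = self.versions.get(path, 0) + 1
        self.data[path] = contents


class Client:
    """Stands in for Venus: caches whole files and trusts them until a callback breaks."""
    def __init__(self, server):
        self.server = server
        self.cache = {}       # path -> (version, contents)

    def open_for_read(self, path):
        if path in self.cache:                       # valid unless a callback removed it
            return self.cache[path][1]
        version, contents = self.server.fetch(self, path)   # miss: whole-file fetch
        self.cache[path] = (version, contents)
        return contents

    def invalidate(self, path):                      # callback break from the server
        self.cache.pop(path, None)


if __name__ == "__main__":
    vice = Server()
    vice.store("/afs/paper.txt", b"v1")
    venus = Client(vice)
    print(venus.open_for_read("/afs/paper.txt"))     # fetches and caches
    print(venus.open_for_read("/afs/paper.txt"))     # served from cache, no server contact
    vice.store("/afs/paper.txt", b"v2")              # breaks the callback
    print(venus.open_for_read("/afs/paper.txt"))     # re-fetches the new version
```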
Genesys Interaction Recording Solution Guide
Configuring WebDAV (9/27/2021)
Contents: 1 Configuring WebDAV • 1.1 Deploying the WebDAV Server • 1.2 Configuring TLS for the WebDAV Server • 1.3 Changing storage location • 1.4 Next Step
Interaction Recording Web Services relies on a Web Distributed Authoring and Versioning (WebDAV) server to store and manage the GIR recording files. WebDAV is an extension of the Hypertext Transfer Protocol (HTTP) that facilitates collaboration between users in editing and managing documents and files stored on World Wide Web servers. A working group of the Internet Engineering Task Force (IETF) defined WebDAV in RFC 4918. The following information represents examples of what can be done for WebDAV. Follow these procedures to get a better understanding of what needs to be done when you use a Red Hat Enterprise Linux machine with the Apache HTTP Server.
Important:
• This document provides you with basic guidelines on configuring WebDAV on RHEL. If you wish to configure WebDAV on other operating systems or if you have additional questions regarding WebDAV on RHEL, refer to the official documentation from the operating system provider.
• It is recommended that you do not install WebDAV on the same machine as Interaction Recording Web Services (RWS), since numerous deployments already install Cassandra and Elasticsearch on the same host. These are critical components for the operation of RWS. If an additional process such as WebDAV is run on the same machine as RWS, disk I/O operations will be limited and the stability of RWS may be negatively impacted.
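Because WebDAV is ordinary HTTP plus a few extra methods, a deployed server like the one described above can be sanity-checked with a short script. Below is a minimal sketch using the third-party requests library; the URL, credentials, and file name are hypothetical.

```python
import requests  # third-party: pip install requests

BASE = "https://webdav.example.com/recordings"   # hypothetical WebDAV root
AUTH = ("gir_user", "secret")                    # hypothetical credentials

# Upload a recording file with a plain HTTP PUT.
with open("call-0001.wav", "rb") as f:
    r = requests.put(f"{BASE}/call-0001.wav", data=f, auth=AUTH)
    r.raise_for_status()

# List the collection with WebDAV's PROPFIND method (Depth: 1 = immediate children only).
r = requests.request("PROPFIND", BASE + "/", auth=AUTH, headers={"Depth": "1"})
r.raise_for_status()
print(r.status_code)    # 207 Multi-Status on success
print(r.text[:500])     # XML multistatus body describing the stored resources
```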
A Survey of Distributed File Systems
M. Satyanarayanan, Department of Computer Science, Carnegie Mellon University, February 1989.
Abstract: This paper is a survey of the current state of the art in the design and implementation of distributed file systems. It consists of four major parts: an overview of background material, case studies of a number of contemporary file systems, identification of key design techniques, and an examination of current research issues. The systems surveyed are Sun NFS, Apollo Domain, Andrew, IBM AIX DS, AT&T RFS, and Sprite. The coverage of background material includes a taxonomy of file system issues, a brief history of distributed file systems, and a summary of empirical research on file properties. A comprehensive bibliography forms an important part of the paper.
Copyright (C) 1988, 1989 M. Satyanarayanan. The author was supported in the writing of this paper by the National Science Foundation (Contract No. CCR-8657907), the Defense Advanced Research Projects Agency (Order No. 4976, Contract F33615-84-K-1520), and the IBM Corporation (Faculty Development Award). The views and conclusions in this document are those of the author and do not represent the official policies of the funding agencies or Carnegie Mellon University.
1. Introduction. The sharing of data in distributed systems is already common and will become pervasive as these systems grow in scale and importance. Each user in a distributed system is potentially a creator as well as a consumer of data. A user may wish to make his actions contingent upon information from a remote site, or may wish to update remote information. …
RFC 8144: Use of the Prefer Header Field in Web Distributed Authoring and Versioning (WebDAV)
Internet Engineering Task Force (IETF); Request for Comments: 8144; Updates: 7240; Category: Standards Track; ISSN: 2070-1721. K. Murchison, CMU. April 2017.
Use of the Prefer Header Field in Web Distributed Authoring and Versioning (WebDAV)
Abstract: This document defines how the Prefer header field (RFC 7240) can be used by a Web Distributed Authoring and Versioning (WebDAV) client to request that certain behaviors be employed by a server while constructing a response to a request. Furthermore, it defines the new "depth-noroot" preference. This document updates RFC 7240.
Status of This Memo: This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc8144.
Copyright Notice: Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
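For context, the depth-noroot preference defined in this RFC asks the server to leave the request-URI (root) resource out of a Depth: 1 or Depth: infinity response. A hedged sketch of such a PROPFIND, again using the third-party requests library against a hypothetical server and credentials:

```python
import requests  # third-party: pip install requests

URL = "https://dav.example.com/calendars/alice/"   # hypothetical WebDAV collection

# Minimal PROPFIND body asking only for each member's display name.
body = """<?xml version="1.0" encoding="utf-8"?>
<propfind xmlns="DAV:"><prop><displayname/></prop></propfind>"""

resp = requests.request(
    "PROPFIND",
    URL,
    data=body,
    headers={
        "Depth": "1",              # the collection and its immediate children...
        "Prefer": "depth-noroot",  # ...but please leave the collection itself out
        "Content-Type": "application/xml",
    },
    auth=("alice", "secret"),      # hypothetical credentials
)
print(resp.status_code)                            # expect 207 Multi-Status
# A server that honored the preference should echo it back (RFC 7240):
print(resp.headers.get("Preference-Applied"))
print(resp.text[:500])
```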
Implementing NFSv4 in the Enterprise: Planning and Migration Strategies
IBM Redbook SG24-6657-00, International Technical Support Organization (ibm.com/redbooks), First Edition, December 2005. Cover notes: planning and implementation examples for AFS and DFS migrations; NFSv3 to NFSv4 migration examples; NFSv4 updates in AIX 5L Version 5.3 with the 5300-03 Recommended Maintenance Package. Authors: Gene Curylo, Richard Joltes, Trishali Nayar, Bob Oesterlin, Aniket Patel. This edition applies to Version 5, Release 3, of IBM AIX 5L (product number 5765-G03). © Copyright International Business Machines Corporation 2005. All rights reserved.
Contents: Notices; Trademarks; Preface (The team that wrote this redbook; Acknowledgments; Become a published author; Comments welcome)
Part 1. Introduction
Chapter 1. Introduction: 1.1 Overview of enterprise file systems; 1.2 The migration landscape today; 1.3 Strategic and business context; 1.4 Why NFSv4?; 1.5 The rest of this book
Chapter 2. Shared file system concepts and history: 2.1 Characteristics of enterprise file systems (2.1.1 Replication, 2.1.2 Migration, 2.1.3 Federated namespace, 2.1.4 Caching); 2.2 Enterprise file system technologies (2.2.1 Sun Network File System (NFS), 2.2.2 Andrew File System (AFS) …)
The Influence of Scale on Distributed File System Design
IEEE Transactions on Software Engineering, Vol. 18, No. 1, January 1992. Mahadev Satyanarayanan, Member, IEEE.
Abstract: Scale should be recognized as a primary factor influencing the architecture and implementation of distributed systems. This paper uses Andrew and Coda, distributed file systems built at Carnegie Mellon University, to validate this proposition. Performance, operability, and security are dominant considerations in the design of these systems. Availability is a further consideration in the design of Coda. Client caching, bulk data transfer, token-based mutual authentication, and hierarchical organization of the protection domain have emerged as mechanisms that enhance scalability. The separation of concerns made possible by functional specialization has also proved valuable in scaling. Heterogeneity is an important by-product of growth, but the mechanisms available to cope with it are rudimentary. Physical separation of clients and servers turns out to be a critical requirement for scalability.
[…] into autonomous or semi-autonomous organizations for management purposes. Hence a distributed system that has many users or nodes will also span many organizations. Regardless of the specific metric of scale, the designs of distributed systems that scale well are fundamentally different from less scalable designs. In this paper we describe the lessons we have learned about scalability from the Andrew File System and the Coda File System. These systems are particularly appropriate for our discussion, because they have been designed for growth to thousands of nodes and users. We focus on distributed file systems, because they are the most widely used kind of …
Distributed File Systems
Please note: please start working on your research project (the proposal will be due on Feb. 19 in class). Each group needs to turn in a printed version of its proposal and intermediate report. Also, before class each group needs to email me a DOC version.
Remote Procedure Call (1): at-least-once or at-most-once semantics. Client: a "stub" instead of a "proxy" (same function, different names); it behaves like a local procedure, marshals the arguments, and communicates the request. Server: a dispatcher and a "stub" that unmarshals the arguments and communicates the results back.
Remote Procedure Call (2): [figure: a client process (client program, client stub procedure, communication module) exchanges Request/Reply messages with a server process (communication module, dispatcher, server stub procedure, service procedure)]
Sun RPC (1): designed for client-server communication in the Sun NFS (network file system); supplied as a part of Sun and other UNIX operating systems; runs over either UDP or TCP. Provides an interface definition language (IDL): XDR was initially for data representation and was extended to be an IDL; it is less modern than CORBA IDL and Java. Program numbers (obtained from a central authority) are used instead of interface names, and procedure numbers (used as procedure identifiers) instead of procedure names; only a single input parameter is allowed (then we have to use a ?). Offers an interface compiler (rpcgen) for the C language, which generates the following: the client stub; the server main procedure, dispatcher, and server stub; and XDR marshalling/unmarshalling.
Instructor's Guide for Coulouris, Dollimore and Kindberg, Distributed Systems: Concepts and Design, Edn. 4, © Pearson Education 2005.
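To make the stub/dispatcher split concrete, here is a minimal toy RPC round trip. It is not Sun RPC: it uses JSON over TCP instead of XDR, a procedure name instead of program/procedure numbers, and it ignores retry (at-least-once/at-most-once) semantics.

```python
import json
import socket
import threading

def handle_add(args):                        # the "service procedure"
    return args["a"] + args["b"]

DISPATCH_TABLE = {"add": handle_add}         # dispatcher maps procedure name -> procedure

def server(sock):
    conn, _ = sock.accept()
    with conn:
        # Server stub: unmarshal the request (one recv is enough for this tiny demo;
        # a real implementation would frame messages).
        request = json.loads(conn.recv(4096).decode())
        proc = DISPATCH_TABLE[request["proc"]]                 # dispatcher: route
        result = proc(request["args"])                         # call the service procedure
        conn.sendall(json.dumps({"result": result}).encode())  # marshal and send the reply

def call_remote(address, proc, args):
    """Client stub: looks like a local call, but marshals the request and ships it."""
    with socket.create_connection(address) as conn:
        conn.sendall(json.dumps({"proc": proc, "args": args}).encode())
        reply = json.loads(conn.recv(4096).decode())           # unmarshal the reply
        return reply["result"]

if __name__ == "__main__":
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))          # any free local port
    listener.listen(1)
    addr = listener.getsockname()
    threading.Thread(target=server, args=(listener,), daemon=True).start()
    print(call_remote(addr, "add", {"a": 2, "b": 3}))   # prints 5
```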
Design and Evolution of the Apache Hadoop File System (HDFS)
Dhruba Borthakur, Engineer@Facebook, Committer@Apache HDFS. SDC, Sept 19, 2011. 2011 Storage Developer Conference.
Outline: Introduction; yet another file system, why?; goals of the Hadoop Distributed File System (HDFS); architecture overview; rationale for design decisions.
Who am I? Apache Hadoop FileSystem (HDFS) committer and PMC member; core contributor since Hadoop's infancy; Facebook (Hadoop, Hive, Scribe); Yahoo! (Hadoop in Yahoo Search); Veritas (San Point Direct, Veritas File System); IBM Transarc (Andrew File System); Univ. of Wisconsin Computer Science alumni (Condor Project).
Hadoop, why? Need to process multi-petabyte datasets; data may not have a strict schema; it is expensive to build reliability into each application; failure is expected rather than exceptional; elasticity: the number of nodes in a cluster is never constant. Need common infrastructure that is efficient, reliable, and open source (Apache License).
Goals of HDFS: a very large distributed file system (10K nodes, 1 billion files, 100 PB); assumes commodity hardware, so files are replicated to handle hardware failure and the system detects failures and recovers from them; optimized for batch processing, with data locations exposed so that computations can move to where the data resides, providing very high aggregate bandwidth; runs in user space on heterogeneous OSes.
Commodity hardware: typically a 2-level architecture; nodes are commodity PCs; 20-40 nodes per rack; the uplink from a rack is 4 gigabit; rack-internal is 1 gigabit.
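The replication goal and the 2-level rack/node topology above are what enable rack-aware replica placement. The sketch below assumes the classic default policy (one replica on the writing node, two more on a single remote rack); it is an illustration, not code from HDFS, and the topology is hypothetical.

```python
import random

# Hypothetical 2-level topology: rack id -> list of node names.
TOPOLOGY = {
    "rack1": ["node11", "node12", "node13"],
    "rack2": ["node21", "node22", "node23"],
    "rack3": ["node31", "node32", "node33"],
}

def rack_of(node):
    return next(rack for rack, nodes in TOPOLOGY.items() if node in nodes)

def place_replicas(writer_node, replication=3):
    """Pick nodes for a block's replicas: one local, two on a single remote rack.

    Surviving a whole-rack failure needs replicas on at least two racks, while
    keeping two replicas on one remote rack limits cross-rack write traffic
    (the rack uplink is the scarce 4-gigabit resource in the slide's topology).
    """
    local_rack = rack_of(writer_node)
    remote_rack = random.choice([r for r in TOPOLOGY if r != local_rack])
    remote_nodes = random.sample(TOPOLOGY[remote_rack], k=min(2, replication - 1))
    return ([writer_node] + remote_nodes)[:replication]

if __name__ == "__main__":
    print(place_replicas("node12"))   # e.g. ['node12', 'node31', 'node33']
```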