OpenAFS On Solaris 11 x86

Total pages: 16

File type: PDF, size: 1020 KB

Robert Milkowski, Unix Engineering

Why Solaris?
- ZFS
  - Transparent, in-line data compression and deduplication: big $$ savings
  - Transactional file system (no fsck)
  - End-to-end data and metadata checksumming
  - Encryption
- DTrace
  - Online profiling and debugging of AFS
  - Many improvements to AFS performance and scalability
  - Safe to use in production

ZFS – Estimated Disk Space Savings
[Bar chart: disk space usage in GB (0 to 1200) for a 1TB sample of production data from the AFS plant in 2010. ZFS 128KB GZIP achieves roughly 3.8x savings and ZFS 32KB GZIP roughly 2.9x, with ZFS LZJB (32KB and 128KB records) around 2x, compared with ZFS 64KB without compression and Linux ext3.]
- Currently, the overall average compression ratio for AFS on ZFS/gzip is over 3.2x

Compression – Performance Impact (Read Test)
[Bar chart: read throughput in MB/s (0 to 800) for Linux ext3 and for ZFS at 32KB, 64KB, and 128KB record sizes: uncompressed, LZJB, GZIP, and dedup combined with LZJB or GZIP.]

Compression – Performance Impact (Write Test)
[Bar chart: write throughput in MB/s (0 to 600) for the same configurations: ZFS GZIP (128KB, 64KB, 32KB), Linux ext3, ZFS uncompressed (32KB, 64KB, 128KB), and ZFS LZJB (128KB, 64KB, 32KB).]

Solaris – Cost Perspective
- Linux server
  - x86 hardware
  - Linux support (optional for some organizations)
  - Directly attached storage (10TB+ logical)
- Solaris server
  - The same x86 hardware as on Linux
  - $1,000 per CPU socket per year for Solaris support (list price) on a non-Oracle x86 server
  - Over 3x compression ratio on ZFS/GZIP
- The result: 3x fewer servers and disk arrays, and 3x less rack space, power, cooling, and maintenance
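To make the numbers above concrete, here is a minimal sketch of how such a gzip-compressed dataset can be created and its ratio checked on Solaris 11; the pool and dataset names (afspool, vicepa) are hypothetical:

    # Create a gzip-compressed ZFS dataset for an AFS /vicepa partition
    # ("afspool" and the mountpoint are hypothetical names).
    zfs create -o compression=gzip -o mountpoint=/vicepa afspool/vicepa

    # Deduplication can be enabled as well, at a cost in RAM; measure first.
    zfs set dedup=on afspool/vicepa

    # After data has been loaded, report the achieved compression ratio.
    zfs get compressratio afspool/vicepa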
AFS Unique Disk Space Usage – Last 5 Years
[Bar chart: unique disk space usage in GB (0 to 25,000), yearly from 2007-09 through 2012-08.]

MS AFS High-Level Overview
- AFS RW cells
  - Canonical data, not available in prod
- AFS RO cells
  - Globally distributed
  - Data replicated from RW cells
  - In most cases each volume has 3 copies in each cell
  - ~80 RO cells world-wide, almost 600 file servers
- This means that a single AFS volume in a RW cell, when promoted to prod, is replicated ~240 times (80 x 3)
- Currently, there is over 3PB of storage presented to AFS

Typical AFS RO Cell
- Before: 5-15 x86 Linux servers, each with a directly attached disk array, ~6-9RU per server
- Now: 4-8 x86 Solaris 11 servers, each with a directly attached disk array, ~6-9RU per server
  - Significantly lower TCO
- Soon: 4-8 x86 Solaris 11 servers, internal disks only, 2RU
  - Lower TCA
  - Significantly lower TCO

Migration to ZFS
- Completely transparent migration to clients
- Migrate all data away from a couple of servers in a cell
- Rebuild them with Solaris 11 x86 with ZFS
- Re-enable them and repeat with the others
- Over 300 servers (plus disk arrays) to decommission
- Less rack space, power, cooling, maintenance ... and yet more available disk space
- Fewer servers to buy due to increased capacity
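The client-transparent migration is built on ordinary AFS volume moves; a sketch with hypothetical volume, server, and partition names:

    # Move one volume off a server that is about to be rebuilt; clients
    # keep accessing it during the move (all names below are hypothetical).
    vos move -id sw.tools -fromserver afs101 -frompartition vicepa \
        -toserver afs102 -topartition vicepa

    # Confirm the old server holds no volumes before rebuilding it
    # with Solaris 11 and ZFS, or decommissioning it.
    vos listvol afs101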
q.ny cell migration to Solaris/ZFS
- Cell size reduced from 13 servers down to 3
- Disk space capacity expanded from ~44TB to ~90TB (logical)
- Rack space utilization went down from ~90U to 6U

Solaris Tuning
- ZFS: largest possible record size (128KB on pre-GA Solaris 11, 1MB on 11 GA and onwards)
- Disable SCSI cache flushes: zfs:zfs_nocacheflush = 1
- Increase the DNLC size: ncsize = 4000000
- Disable access time updates on all vicep partitions
- Multiple vicep partitions within a ZFS pool (AFS scalability)
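A sketch of applying these settings, assuming a hypothetical afspool/vicepa dataset; the /etc/system lines require root and take effect only after a reboot:

    # Append the kernel tunables to /etc/system ("*" starts a comment there).
    cat >> /etc/system <<'EOF'
    * AFS file server tuning
    set zfs:zfs_nocacheflush = 1
    set ncsize = 4000000
    EOF

    # Per-dataset settings: largest record size and no access time updates
    # (1M records on Solaris 11 GA and onwards; 128K on earlier builds).
    zfs set recordsize=1M afspool/vicepa
    zfs set atime=off afspool/vicepa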
Summary
- More than 3x disk space savings thanks to ZFS: big $$ savings
- No performance regression compared to ext3
- No modifications required to AFS to take advantage of ZFS
- Several optimizations and bugs already fixed in AFS thanks to DTrace
- Better and easier monitoring and debugging of AFS
- Moving away from disk arrays in AFS RO cells

Why Internal Disks?
- The most expensive parts of AFS are storage and rack space
- AFS on internal disks: 9U -> 2U
- More local/branch AFS cells
- How?
  - ZFS GZIP compression (3x)
  - 256GB RAM for cache (no SSD)
  - 24+ internal disk drives in a 2U x86 server

HW Requirements
- RAID controller
  - Ideally pass-thru mode (JBOD)
  - RAID in ZFS (initially RAID-10)
  - No batteries (fewer FRUs)
  - Well-tested driver
- 2U chassis, 24+ hot-pluggable disks
  - Front disks for data, rear disks for the OS
  - SAS disks, not SATA
- 2x CPU, 144GB+ of memory, 2x GbE (or 2x 10GbE)
- Redundant PSUs, fans, etc.

SW Requirements
- Disk replacement without having to log into the OS
  - Physically remove a failed disk
  - Put a new disk in
  - Resynchronization should kick in automatically
- An easy way to identify physical disks
  - Logical <-> physical disk mapping
  - Locate and Fault LEDs
- RAID monitoring
- Monitoring of disk service times, soft and hard errors, etc.
- Proactive and automatic hot-spare activation

Oracle/Sun X3-2L (x4270 M3)
- 2U
- 2x Intel Xeon E5-2600
- Up to 512GB RAM (16x DIMM)
- 12x 3.5" disks + 2x 2.5" (rear), or 24x 2.5" disks + 2x 2.5" (rear)
- 4x on-board 10GbE
- 6x PCIe 3.0
- SAS/SATA JBOD mode

SSDs?
- ZIL (SLOG)
  - Not really necessary on RO servers
  - MS AFS releases >= 1.4.11-3 do most writes as async
- L2ARC
  - Given 256GB of RAM, it currently doesn't seem necessary
  - Might be an option in the future
- Main storage on SSD
  - Too expensive for AFS RO
  - AFS RW?
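As an illustration of the DTrace-based profiling mentioned in the summary, a one-liner of this shape aggregates live file server activity; "fileserver" is the usual OpenAFS file server binary name and is assumed here:

    # Count read() system calls issued by the AFS file server, keyed by
    # user-land stack trace, on a running production box.
    dtrace -n 'syscall::read:entry /execname == "fileserver"/ { @[ustack()] = count(); }'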