Proving Confidentiality in a File System Using DISKSEC


Atalay İleri, Tej Chajed, Adam Chlipala, M. Frans Kaashoek, and Nickolai Zeldovich, MIT CSAIL
https://www.usenix.org/conference/osdi18/presentation/ileri

This paper is included in the Proceedings of the 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI '18), October 8–10, 2018, Carlsbad, CA, USA. ISBN 978-1-939133-08-3. Open access to the Proceedings is sponsored by USENIX.

Abstract

SFSCQ is the first file system with a machine-checked proof of security. To develop, specify, and prove SFSCQ, this paper introduces DISKSEC, a novel approach for reasoning about confidentiality of storage systems, such as a file system. DISKSEC addresses the challenge of specifying confidentiality using the notion of data noninterference to find a middle ground between strong and precise information-flow-control guarantees and the weaker but more practical discretionary access control. DISKSEC factors out reasoning about confidentiality from other properties (such as functional correctness) using a notion of sealed blocks. Sealed blocks enforce that the file system treats confidential file blocks as opaque in the bulk of the code, greatly reducing the effort of proving data noninterference. An evaluation of SFSCQ shows that its theorems preclude security bugs that have been found in real file systems, that DISKSEC imposes little performance overhead, and that SFSCQ's incremental development effort, on top of DISKSEC and DFSCQ, on which it is based, is moderate.

1 Introduction

Many security problems today stem from bugs in software. Although there has been significant effort in reducing bugs through better testing, fuzzing, model checking, and so on, subtle bugs remain and continue to be exploited. Machine-checked verification is a powerful approach that can eliminate a large class of bugs by proving that an implementation meets a precise specification.

Prominent examples of machine-checked security proofs include verification of strict isolation (with confidentiality) for an OS kernel in CertiKOS [15], seL4 [26], and Komodo [17], as well as security proofs in Ironclad [20] about applications like a password hasher and a notary service. However, proving the security of systems with rich sharing semantics, such as file systems, is an open problem. For example, unlike prior examples that focus on strict isolation without controlled sharing, users in a file system can share files with one another, and the underlying implementation has shared data structures (such as a buffer cache or write-ahead log) that contain data from different users.

Proving security for a file system requires addressing two key challenges. The first challenge lies in specifying security. Integrity can be expressed as simply as a functional correctness property. Confidentiality is more challenging to specify. For example, consider a natural specification for readdir, which allows the file system to return the names in a directory in any order. This nondeterminism could be abused by a buggy or malicious file system to leak confidential file data through careful manipulation of the order of readdir results. Furthermore, nondeterminism is essential to a file system, because file systems must deal with crashes, which can occur nondeterministically at any time.

One approach to specifying confidentiality is to formulate it as a noninterference property, such as in most information-flow-control systems. This means that the execution of one process (a potential victim processing confidential data) cannot influence the execution of another process (an adversary trying to learn that data). Noninterference can be stated concisely, and is easy for applications to use. However, information-flow-control-style guarantees are stronger than what file systems aim for. Instead, file systems aim for weaker notions of confidentiality, along the lines of discretionary access control on files that reveal some metadata, such as file lengths.

A second challenge lies in proving confidentiality. Confidentiality is a "two-safety" property [34], which requires reasoning about pairs of executions to show that an adversary cannot observe any differences correlated with confidential data. However, reasoning about pairs of executions is more complicated than reasoning about a single execution, which is sufficient for proving integrity and functional correctness.

This paper presents DISKSEC, an approach for proving the security, and specifically the confidentiality, of storage systems, such as file systems. The paper demonstrates the benefits of DISKSEC by developing, specifying, and proving the security of a file system in a prototype called SFSCQ, based on the DFSCQ file system [13].

DISKSEC addresses the specification challenge by using a notion of data noninterference that both matches what file systems aim to provide and is concise and easy to use for applications. Data noninterference requires that an adversary's execution be independent of the contents of individual files, but it allows the adversary to observe other metadata, such as file length and directory entries, and allows for discretionary access control (i.e., a user can choose to disclose their data).

To address the challenge of proving security, DISKSEC factors out reasoning about confidentiality from all other properties, such as functional correctness. DISKSEC does so by introducing a notion of sealed blocks. This builds on the intuition that file systems do not look inside the blocks that represent user file contents. As a result, DISKSEC is able to treat confidential file blocks as opaque in much of the file-system code, greatly reducing the need for manual proofs of two-safety that consider pairs of executions. The only manual proofs of two-safety are in the top-level read and write system calls.

We implemented DISKSEC and SFSCQ in the Coq proof assistant [35]. All proofs of security are machine-checked by Coq, eliminating the possibility of bugs that violate SFSCQ's specification. An evaluation of SFSCQ shows that its specifications are complete enough to prove confidentiality of a simple application. The evaluation also shows that DISKSEC's approach allowed us to develop SFSCQ with a modest amount of effort, and that SFSCQ achieves comparable performance to the DFSCQ file system that it is based on.

The contributions of this paper are:

• SFSCQ, the first file system with a machine-checked proof of confidentiality. SFSCQ has a concise specification that captures discretionary access control using data noninterference, and deals with nondeterminism due to crashes.

• DISKSEC, an approach for specifying and proving confidentiality for storage systems that reduces proof effort. DISKSEC uses the idea of sealed blocks to factor out reasoning about confidentiality from most of the file-system code.

• An evaluation that demonstrates that DISKSEC's approach leads to negligible performance overheads in SFSCQ, that it precludes the possibility of confidentiality bugs that have been found in existing file systems, and that SFSCQ's specification allows applications to reason about their confidentiality.

Our SFSCQ prototype has several limitations. Since it relies on Coq's extraction to Haskell, inherited from

2 Related work

Data noninterference can be thought of as a specialization of abstract noninterference [18], relaxed noninterference [24], or observation functions [15]. One difference in our approach is that data noninterference stops at the file-system API boundary; applications are not subject to data-noninterference policies. This matches well the traditional discretionary access-control policies enforced by file systems.

Formalizing data noninterference requires reasoning about two executions, since confidentiality is a two-safety property [34]. In this context, our contribution lies in a specification and a proof style based on sealed blocks that helps us prove a data-noninterference two-safety property about the file system.

Machine-checked security in systems. Several prior projects have proven security (and, specifically, confidentiality) properties about their system implementations: seL4 [23, 26], CertiKOS [15], and Ironclad [20]. For seL4 and CertiKOS, the theorems prove complete isolation: CertiKOS requires disabling IPC to prove its security theorems, and seL4's security theorem requires disjoint sets of capabilities. In the context of a file system, complete isolation is not possible: one of the main goals of a file system is to enable sharing. Furthermore, CertiKOS is limited to proving security with deterministic specifications. Nondeterminism is important in a file system to handle crashes and to abstract away implementation details in specifications.

Ironclad proves that several applications, such as a notary service and a password-hashing application, do not disclose their own secrets (e.g., a private key), formulated as noninterference. Also using noninterference, Komodo [17] reasons about confidential data in an enclave and shows that an adversary cannot learn the confidential data. Ironclad and Komodo's approach cannot specify or prove a file system: both systems have no notion
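The readdir covert channel that the introduction describes can be made concrete with a small sketch. This is not SFSCQ or DISKSEC code; the function names (`honest_readdir`, `malicious_readdir`, `adversary_guess`) are hypothetical, and the point is only that a specification permitting "any order" lets an implementation encode confidential bits in the ordering it chooses:

```python
# Toy model of the readdir nondeterminism leak: a specification that
# allows names in any order permits a malicious implementation to
# encode a confidential bit in the chosen order.

def honest_readdir(entries):
    """Honest implementation: order is arbitrary but data-independent."""
    return sorted(entries)

def malicious_readdir(entries, secret_bit):
    """Malicious implementation: leaks one confidential bit via ordering."""
    ordered = sorted(entries)
    return ordered if secret_bit == 0 else list(reversed(ordered))

def adversary_guess(listing):
    """The adversary recovers the bit purely from the permitted ordering."""
    return 0 if listing == sorted(listing) else 1

names = ["a.txt", "b.txt", "c.txt"]
for bit in (0, 1):
    assert adversary_guess(malicious_readdir(names, bit)) == bit
```

Both listings satisfy a naive functional-correctness spec ("returns the directory's names in some order"), which is why confidentiality needs a stronger, two-execution statement.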
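The flavor of data noninterference as a two-safety property can be sketched as follows. This is an illustrative model, not DISKSEC's Coq formalization: the `File`/`adversary_view` names are invented, and the check compares adversary observations of two states that differ only in confidential file contents, while metadata such as file length remains legitimately observable:

```python
# Sketch of data noninterference as a two-safety check: two states that
# differ only in confidential file *contents* must yield identical
# adversary observations; metadata (e.g., length) may be observed.
from dataclasses import dataclass

@dataclass
class File:
    owner: str
    data: bytes

def adversary_view(state, adversary):
    """What the adversary observes: names, lengths, and the contents of
    only those files the adversary is permitted to read."""
    return {
        name: (len(f.data), f.data if f.owner == adversary else None)
        for name, f in state.items()
    }

def data_noninterference(state1, state2, adversary):
    """Two-safety: the pair of executions is indistinguishable to the
    adversary if their observations agree."""
    return adversary_view(state1, adversary) == adversary_view(state2, adversary)

s1 = {"secret": File("alice", b"attack at dawn"), "pub": File("eve", b"hi")}
s2 = {"secret": File("alice", b"attack at dusk"), "pub": File("eve", b"hi")}
assert data_noninterference(s1, s2, "eve")  # same length, hidden contents
```

Note that two states whose confidential files have different lengths are distinguishable under this definition, which matches the paper's point that file systems deliberately reveal some metadata.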
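The sealed-blocks intuition — internal file-system layers move confidential blocks around without ever inspecting them, and only the top-level read/write boundary may unseal after a permission check — can be sketched like this. The class and function names are hypothetical and this is Python rather than the paper's Coq, so it is a minimal analogy, not DISKSEC's actual mechanism:

```python
# Sketch of sealed blocks: internal code (log, buffer cache) handles
# blocks as opaque handles; only an authorized unseal at the API
# boundary exposes plaintext, so confidentiality reasoning is confined
# to that boundary.

class SealedBlock:
    def __init__(self, owner, data):
        self._owner = owner
        self._data = data  # hidden from ordinary code paths

    def unseal(self, caller):
        """Only the authorized caller recovers the plaintext block."""
        if caller != self._owner:
            raise PermissionError("caller may not unseal this block")
        return self._data

def log_append(log, block):
    """Internal layer: appends the block without touching its contents,
    so its correctness proof needs no two-safety reasoning."""
    log.append(block)
    return log

log = log_append([], SealedBlock("alice", b"top secret"))
assert log[0].unseal("alice") == b"top secret"
```

Because `log_append` cannot branch on `_data`, its behavior is trivially identical across any pair of executions that differ only in block contents, which is the property DISKSEC exploits to avoid manual two-safety proofs in the bulk of the code.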