Why panic()? Improving Reliability with Restartable File Systems

Total Pages: 16

File Type: PDF, Size: 1020 KB

Why panic()? Improving Reliability with Restartable File Systems

Swaminathan Sundararaman, Sriram Subramanian, Abhishek Rajimwale, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau, Michael M. Swift
Computer Sciences Department, University of Wisconsin, Madison

ABSTRACT

The file system is one of the most critical components of the operating system. Almost all applications running in the operating system require file systems to be available for their proper operation. Though file-system availability is critical in many cases, very little work has been done on tolerating file system crashes. In this paper, we propose Membrane, a set of changes to the operating system to support restartable file systems. Membrane allows an operating system to tolerate a broad class of file system failures and does so while remaining transparent to running applications; upon failure, the file system restarts, its state is restored, and pending application requests are serviced as if no failure had occurred. Our initial evaluation of Membrane with ext2 shows that Membrane induces little performance overhead and can tolerate a wide range of file system crashes. More critically, Membrane does so with few changes to ext2, thus improving robustness to crashes without mandating intrusive changes to existing file-system code.

1. INTRODUCTION

Operating systems crash. Whether due to software bugs or hardware bit-flips, the reality is clear: large code bases are brittle and the smallest problem in software implementation or hardware environment can lead the entire monolithic operating system to fail.

Recent research has made great headway in operating-system crash tolerance, particularly in surviving device driver failures [17, 18, 23, 25]. Many of these approaches achieve some level of fault resilience by building a hard wall around OS subsystems using address-space based isolation and microrebooting said drivers upon fault detection [17, 18]. Other approaches are similar, using variants of microkernel-based architectures [2, 23] or virtual machines [5, 9] to isolate drivers from the kernel.

Device drivers are not the only OS subsystem, nor are they necessarily where the most important bugs reside. Many recent studies have shown that file systems contain a large number of bugs [4, 6, 11, 24]. Perhaps this is not surprising, as file systems are one of the largest and most complex code bases in the kernel. Further, file systems are still under active development, and new ones are introduced quite frequently.

Due to the presence of flaws in file system implementation, it is critical to consider how to handle their crashes. Unfortunately, two problems prevent us from directly applying previous work from the device-driver literature to improve file-system fault tolerance. First, most approaches mentioned above are heavyweight due to the high costs of data movement and page-table manipulation across address-space boundaries. Second, file systems, unlike device drivers, are extremely stateful, as they manage vast amounts of both in-memory and persistent data; making matters worse is the fact that file systems spread such state across many parts of the kernel, including the page cache, dynamically-allocated memory, and locks. Thus, when a file system crashes (typically by calling panic()), a great deal of care is required to recover from the crash while keeping the rest of the OS intact.

In this paper, we propose Membrane, an operating system framework to support lightweight, stateful recovery from file system crashes. During normal operation, Membrane logs file system operations and periodically performs lightweight checkpoints of file system state. If a file system crashes, Membrane parks pending requests, cleans up existing state, restarts the file system from the most recent checkpoint, and replays the in-memory operation log to restore the state of the running file system. Once finished with recovery, Membrane begins to service on-going application requests again; applications are kept unaware of the crash and restart except for the small performance blip during recovery.

Membrane does not place an address-space boundary between the file system and the rest of the kernel. Hence, it is possible that some types of crashes (e.g., wild writes) will corrupt kernel data structures and thus prohibit Membrane from properly recovering from a file system crash, an inherent weakness (and strength!) of Membrane's architecture. However, we believe that this approach will have the propaedeutic side-effect of encouraging file system developers to add a higher degree of integrity checking (e.g., via assertions, either by hand or through automated techniques such as SafeDrive [25]) into their code in order to fail quickly rather than run the risk of further corrupting the system. If such faults are transient (as many important classes of bugs are), crashing and quickly restarting is a sensible course of action.

Membrane, being a lightweight and generic operating system framework, requires little or no change to existing Linux file systems. We believe that Membrane can be used to restart most of the existing file systems. We have prototyped Membrane with the ext2 file system. From our initial evaluation, we find that Membrane enables ext2 to recover from a wide range of fault scenarios. We also find that Membrane is relatively lightweight, adding a small amount of overhead across a set of file system benchmarks. Membrane achieves these goals with little or no intrusiveness: only five lines were added to transform ext2 into its restartable counterpart. Finally, Membrane improves robustness with complete application transparency; even though the underlying file system has crashed, applications continue to run.

The rest of this paper is organized as follows. Sections 2, 3, and 4 describe the motivation, the challenges in building restartable file systems, and the design of Membrane, respectively. Sections 5 and 6 discuss the consequences of having Membrane in the operating system and evaluate Membrane's robustness and performance. Section 7 places Membrane in the context of other relevant work; finally, Section 8 concludes the paper.

2. MOTIVATION

We first motivate the need for restartability in file systems. The file system is one of the critical components of the operating system that is responsible for storing and retrieving user, application, and OS data from the disk. File systems are also among the largest and most complex code bases in the operating system; for example, modern file systems such as Sun's Zettabyte File System (ZFS) [1], SGI's XFS [16], and older code bases such as Sun's UFS [10] contain nearly 100,000 lines of code [14]. Further, file systems are still under active development, and new ones are introduced quite frequently. For example, Linux has many established file systems, including ext2 [19], ext3 [20], reiserfs [13], and still there is great interest in next-generation file systems such as Linux ext4 [21] and Btrfs [22]. Thus, file systems are large, complex, and under development: the per- […]

[…] becomes useless, as a majority of the applications running in the operating system depend on the file system for their proper operation. The currently available solution for handling file system crashes is to restart the operating system and run recovery (or a repair utility such as fsck for file systems that do not have any in-built crash consistency mechanism). Moreover, applications that were using the file systems are killed, making their services unavailable during this period. This motivates the need for a restartable framework in the operating system, to tolerate failures in file systems.

3. CHALLENGES

Building restartable file systems is hard. We now discuss the challenges in building such a system.

Fault Tolerant. A gamut of faults can occur in file systems. Failures can be caused by faulty hardware and/or buggy software. These failures can be permanent or transient, and can corrupt data arbitrarily or be fail-stop. The ideal restartable file system should recover from all possible faults.

Transparency. File-system failures must be transparent to applications. Applications should not require any modifications to work with restartable file systems. Moreover, multiple applications (or threads) could be modifying the file system state at the same time and one of them could trigger a crash (through a bug).

Performance. Since file systems are fine-tuned for performance, the infrastructure to restart file systems should have little or no overhead during regular operations. Also, the recovery time should be as small as possible to hide file-system crashes from applications.

Consistency. Care must be taken to ensure that the on-disk state of the file system is consistent after recovery. Hence, infrastructure for creating lightweight recovery points should be built, as most file systems do not have any in-built crash consistency mechanism (e.g., only 8 of the 30 on-disk file systems in Linux 2.6.27 have such a feature). Also, the kernel data structures (locks, memory, lists, etc.) should be kept consistent after recovery.

Generic. A large number of commodity file systems exist and each has its own strengths and weaknesses. The infrastructure should not be designed for a specific file system. Ideally, the infrastructure should enable any file system to be transformed into a restartable file system with little or no changes.
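The log/checkpoint/replay protocol described above is easy to see in miniature. The sketch below is a user-space C toy, not Membrane's kernel code: the state array, fs_write, checkpoint, and recover are all invented names, and a memset stands in for the state lost when a file system calls panic().

```c
/* Minimal user-space sketch of Membrane-style recovery: log operations,
 * checkpoint periodically, and on a (simulated) crash restore the last
 * checkpoint and replay the log. All names are illustrative; the real
 * Membrane does this inside the kernel against true file-system state. */
#include <stdio.h>
#include <string.h>

#define NBLOCKS 8
#define LOGCAP  64

struct op { int block; int value; };            /* one logged FS operation  */

static int fs[NBLOCKS];                         /* live (volatile) FS state */
static int ckpt[NBLOCKS];                       /* last checkpoint of state */
static struct op oplog[LOGCAP];                 /* in-memory operation log  */
static int nlog;                                /* ops logged since ckpt    */

static void fs_write(int block, int value)      /* normal path: log, then apply */
{
    oplog[nlog++] = (struct op){ block, value };
    fs[block] = value;
}

static void checkpoint(void)                    /* snapshot state, truncate log */
{
    memcpy(ckpt, fs, sizeof fs);
    nlog = 0;
}

static void recover(void)                       /* restore ckpt, replay the log */
{
    memcpy(fs, ckpt, sizeof fs);
    for (int i = 0; i < nlog; i++)
        fs[oplog[i].block] = oplog[i].value;
}

int main(void)
{
    fs_write(0, 10);
    fs_write(1, 20);
    checkpoint();                               /* state {10,20} checkpointed       */
    fs_write(2, 30);                            /* logged but not yet checkpointed  */

    memset(fs, 0, sizeof fs);                   /* simulated panic(): state is lost */
    recover();                                  /* restore checkpoint + replay log  */

    printf("%d %d %d\n", fs[0], fs[1], fs[2]);  /* prints "10 20 30"                */
    return 0;
}
```

Note how the operation logged after the checkpoint (the write of 30) is not lost: the replay step re-applies it, which is exactly why applications above Membrane can be kept unaware of the crash.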
Recommended publications
  • The Kernel Report
    The Kernel Report (ELC 2012 edition). Jonathan Corbet, LWN.net, [email protected]

    The plan: look at a year's worth of kernel work, with an eye toward the future.

    Starting off 2011: 2.6.37 released January 4, 2011 (11,446 changes, 1,276 developers), bringing VFS scalability work (inode_lock removal), the block I/O bandwidth controller, PPTP support, basic pNFS support, and wakeup sources.

    What have we done since then? Since 2.6.37, five kernel releases have been made, 59,000 changes have been merged, 3,069 developers have contributed to the kernel, and 416 companies have supported kernel development.

    February: "As you can see in these posts, Ralink is sending patches for the upstream rt2x00 driver for their new chipsets, and not just dumping a huge, stand-alone tarball driver on the community, as they have done in the past. This shows a huge willingness to learn how to deal with the kernel community, and they should be strongly encouraged and praised for this major change in attitude." – Greg Kroah-Hartman, February 9

    Employer contributions, 2.6.38-3.2: Volunteers 13.9%, Red Hat 10.9%, Intel 7.3%, unknown 6.9%, Novell 4.0%, IBM 3.6%, TI 3.4%, Broadcom 3.1%, consultants 2.2%, Nokia 1.8%, Wolfson Micro 1.7%, Samsung 1.6%, Google 1.6%, Oracle 1.5%, Microsoft 1.4%, AMD 1.3%, Freescale 1.3%, Fujitsu 1.1%, Atheros 1.1%, Wind River 1.0%.

    Also in February: Red Hat stops releasing individual kernel patches.

    March: 2.6.38 released March 14, 2011 (9,577 changes from 1,198 developers), bringing per-session group scheduling, the dcache scalability patch set, transmit packet steering, transparent huge pages, and the hierarchical block I/O bandwidth controller. "Somebody needs to get a grip in the ARM community."
  • Rootless Containers with Podman and Fuse-Overlayfs
    CernVM Workshop 2019 (4th June 2019). Rootless containers with Podman and fuse-overlayfs. Giuseppe Scrivano, @gscrivano

    "Rootless containers refers to the ability for an unprivileged user (i.e. non-root user) to create, run and otherwise manage containers." (https://rootlesscontaine.rs/) This is not just about running the container payload as an unprivileged user: the container runtime itself also runs as an unprivileged user.

    Don't confuse it with `sudo podman run --user foo`, which executes the process in the container as non-root while Podman and the OCI runtime still run as root, or with the USER instruction in a Dockerfile (same thing; notably, you can't RUN dnf install ...). Also distinct is `podman run --uidmap`, which executes containers as a non-root user using user namespaces; this is the closest to rootless containers, but it still requires podman and runc to run as root.

    Motivation for rootless containers: to mitigate potential vulnerabilities of container runtimes, to allow users of shared machines (e.g. HPC) to run containers without the risk of breaking other users' environments, and to isolate nested containers. Caveat: rootless containers are not a panacea; in particular they are powerless against kernel (and hardware) vulnerabilities (CVE-2013-1858, CVE-2015-1328, CVE-2018-18955). Castle approach: they should be used in conjunction with other security layers such as seccomp and SELinux. Podman is a daemon-less alternative to Docker: $ alias
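The uid-mapping trick that makes rootless containers possible is a kernel feature, unprivileged user namespaces, on top of which Podman layers its tooling. Here is a minimal C sketch of just that mechanism (it assumes a kernel with unprivileged user namespaces enabled; this is not Podman's code):

```c
/* Minimal sketch of the unprivileged user-namespace mechanism behind
 * rootless containers: an ordinary user maps their own UID to root
 * *inside* a new user namespace. Error handling is abbreviated. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    uid_t outer = geteuid();                 /* our real, unprivileged UID   */

    if (unshare(CLONE_NEWUSER) != 0) {       /* enter a fresh user namespace */
        perror("unshare");
        return 1;
    }

    /* Map UID 0 inside the namespace to our UID outside it. An
     * unprivileged process may write a single line mapping its own UID. */
    char map[64];
    int  n  = snprintf(map, sizeof map, "0 %d 1\n", outer);
    int  fd = open("/proc/self/uid_map", O_WRONLY);
    if (fd < 0 || write(fd, map, n) != n) {
        perror("uid_map");
        return 1;
    }
    close(fd);

    /* We are now "root" in the namespace, but still powerless outside. */
    printf("uid inside namespace: %d\n", getuid());   /* prints 0 */
    return 0;
}
```

This is why the runtime itself needs no privileges: "root" inside the namespace can mount, chown, and install packages in the container image while remaining an ordinary user on the host.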
  • Dm-X: Protecting Volume-Level Integrity for Cloud Volumes and Local
    dm-x: Protecting Volume-level Integrity for Cloud Volumes and Local Block Devices. Anrin Chakraborti (Stony Brook University), Bhushan Jain (Stony Brook University & UNC, Chapel Hill), Jan Kasiak (Stony Brook University), Tao Zhang (Stony Brook University & UNC, Chapel Hill), Donald Porter (Stony Brook University & UNC, Chapel Hill), Radu Sion (Stony Brook University)

    ABSTRACT: The verified boot feature in recent Android devices, which deploys dm-verity, has been overwhelmingly successful in eliminating the extremely popular Android smart phone rooting movement [25]. Unfortunately, dm-verity integrity guarantees are read-only and do not extend to writable volumes. This paper introduces a new device mapper, dm-x, that efficiently (fast) and reliably (metadata journaling) assures volume-level integrity for entire, writable volumes. In a direct disk setup, dm-x overheads are around 6-35% over ext4 on the raw volume while offering additional integrity guarantees. For cloud storage (Amazon EBS), dm-x overheads are negligible.

    […] on Amazon's Virtual Private Cloud (VPC) [2] (which provides strong security guarantees through network isolation) store data on untrusted storage devices residing in the public cloud. For example, Amazon VPC file systems, object stores, and databases reside on virtual block devices in the Amazon Elastic Block Storage (EBS). This allows numerous attack vectors for a malicious cloud provider/untrusted software running on the server. For instance, to ensure SEC-mandated assurances of end-to-end integrity, banks need to guarantee that the external cloud storage service is unable to remove log entries documenting financial transactions. Yet, without integrity of the storage device, a malicious cloud storage service could remove logs of a coordinated attack.
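As a toy illustration of the bookkeeping behind block-level integrity (hash every block, keep the hashes where the adversary can't touch them, verify on every read), consider the sketch below. It uses a non-cryptographic FNV-1a hash purely for brevity; a real design like dm-x needs a cryptographic hash and, as the abstract notes, journaling of the hash metadata to survive crashes. All names here are invented.

```c
/* Toy illustration of volume-level integrity checking in the spirit of
 * dm-x / dm-verity: a checksum for every block lives in trusted storage
 * and each block is verified on read. FNV-1a is NOT tamper-resistant;
 * it only stands in for a cryptographic hash to keep the sketch short. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 64
#define NBLOCKS    4

static uint64_t fnv1a(const uint8_t *p, size_t len)
{
    uint64_t h = 14695981039346656037ULL;     /* FNV-1a offset basis */
    while (len--) { h ^= *p++; h *= 1099511628211ULL; }
    return h;
}

static uint8_t  disk[NBLOCKS][BLOCK_SIZE];    /* untrusted data blocks  */
static uint64_t sums[NBLOCKS];                /* trusted checksum store */

static void block_write(int i, const void *buf)
{
    memcpy(disk[i], buf, BLOCK_SIZE);
    sums[i] = fnv1a(disk[i], BLOCK_SIZE);     /* keep checksum in step with data */
}

static int block_read(int i, void *buf)      /* returns -1 on integrity failure */
{
    if (fnv1a(disk[i], BLOCK_SIZE) != sums[i])
        return -1;
    memcpy(buf, disk[i], BLOCK_SIZE);
    return 0;
}

int main(void)
{
    uint8_t buf[BLOCK_SIZE] = "hello";
    block_write(0, buf);

    disk[0][3] ^= 1;                          /* simulate tampering / corruption */
    printf("read: %s\n", block_read(0, buf) ? "INTEGRITY FAILURE" : "ok");
    return 0;
}
```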
  • ECE 598 – Advanced Operating Systems Lecture 19
    ECE 598 – Advanced Operating Systems, Lecture 19. Vince Weaver, http://web.eece.maine.edu/~vweaver, [email protected], 7 April 2016

    Announcements: Homework #7 was due; Homework #8 will be posted.

    Why use FAT over ext2? FAT is simpler and easy to code, and FAT is supported on all major OSes; ext2 is faster, with more robust filenames and permissions.

    btrfs: B-tree fs (similar to a binary tree, but with pages full of leaves). Overwrite filesystem (overwrite on modify) vs CoW (copy on write): when writing to a file, the old data is not overwritten, so crash recovery is better; eventually the old data is garbage collected. Data is kept in extents. Features: copy-on-write; a forest of trees (sub-volumes, extent-allocation, checksum tree, chunk device, reloc); on-line defragmentation; on-line volume growth; built-in RAID; transparent compression; snapshots; checksums on data and meta-data; de-duplication; cloning (can make an exact snapshot of a file, copy-on-write; different than a link: different inodes but the same blocks).

    Embedded: designed to be small, simple, read-only? romfs: 32-byte header (magic, size, checksum, name); repeating files (pointer to next [0 if none]), info, size, checksum, file name, file data. cramfs.

    ZFS: advanced OS from Sun/Oracle. Similar in idea to btrfs; indirect still, not extent based?

    ReFS: Resilient FS, Microsoft's answer to btrfs and zfs.

    Networked File Systems: allow a centralized file server to export a filesystem to multiple clients; provide file-level access, not just raw blocks (NBD). Clustered filesystems also exist, where multiple servers work in conjunction.
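The overwrite-vs-CoW point in these notes can be shown in a few lines: a copy-on-write update never touches the old block, so a crash before the final pointer flip still leaves the previous version intact. A hypothetical sketch of the idea, not btrfs internals:

```c
/* Illustration of the copy-on-write update the lecture contrasts with
 * overwrite-in-place: new data goes to a fresh block, and a single
 * pointer update "commits" it, so a crash before the commit leaves the
 * old version readable. Names are illustrative, not btrfs code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 32

struct file { char *current; };                 /* pointer to live data block  */

static void cow_update(struct file *f, const char *newdata)
{
    char *fresh = malloc(BLOCK_SIZE);           /* 1. allocate a new block      */
    snprintf(fresh, BLOCK_SIZE, "%s", newdata); /* 2. write new version there   */
    char *old = f->current;
    f->current = fresh;                         /* 3. commit: flip the pointer  */
    free(old);                                  /* 4. old block reclaimed later */
}

int main(void)
{
    struct file f = { .current = strdup("version 1") };
    cow_update(&f, "version 2");                /* old data never overwritten in place */
    printf("%s\n", f.current);                  /* prints "version 2" */
    free(f.current);
    return 0;
}
```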
  • NOVA: A Log-Structured File System for Hybrid Volatile/Non-Volatile Main Memories
    NOVA: A Log-structured File System for Hybrid Volatile/Non-volatile Main Memories. Jian Xu and Steven Swanson, University of California, San Diego. https://www.usenix.org/conference/fast16/technical-sessions/presentation/xu

    This paper is included in the Proceedings of the 14th USENIX Conference on File and Storage Technologies (FAST '16), February 22–25, 2016, Santa Clara, CA, USA. ISBN 978-1-931971-28-7. Open access to the Proceedings is sponsored by USENIX.

    Abstract: Fast non-volatile memories (NVMs) will soon appear on the processor memory bus alongside DRAM. The resulting hybrid memory systems will provide software with sub-microsecond, high-bandwidth access to persistent data, but managing, accessing, and maintaining consistency for data stored in NVM raises a host of challenges. Existing file systems built for spinning or solid-state disks introduce software overheads that would obscure the performance that NVMs should provide, but proposed file systems for NVMs either incur similar overheads or fail to provide the strong consistency guarantees that applications require.

    Hybrid DRAM/NVMM storage systems present a host of opportunities and challenges for system designers. These systems need to minimize software overhead if they are to fully exploit NVMM's high performance and efficiently support more flexible access patterns, and at the same time they must provide the strong consistency guarantees that applications require and respect the limitations of emerging memories (e.g., limited program cycles). Conventional file systems are not suitable for hybrid memory systems because they are built for the performance characteristics of disks (spinning or solid state) and rely on disks' consistency guarantees (e.g., that sector updates are atomic).
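The core commit idea of a log-structured design like NOVA's per-inode logs can be sketched in user space: append an entry past the tail, then publish it with a single atomic tail update. In the sketch below, DRAM stands in for NVMM, and real persistent-memory code would also flush cache lines and fence before moving the tail; the names are illustrative, not NOVA's.

```c
/* Sketch of the log-structured commit idea NOVA applies per inode:
 * write the new entry past the tail, then publish it by advancing the
 * tail with one atomic store. On real NVMM this store must be preceded
 * by cache-line flushes and a fence (e.g. clwb/sfence). */
#include <stdatomic.h>
#include <stdio.h>

#define LOGCAP 128

struct entry { int off; int len; int data; };  /* one file-write record      */

static struct entry log_[LOGCAP];              /* per-inode append-only log  */
static _Atomic int  tail;                      /* entries < tail are committed */

static void log_append(struct entry e)
{
    int t = atomic_load(&tail);
    log_[t] = e;                               /* 1. write entry past the tail */
    /* ...flush + fence here on real NVMM...                                  */
    atomic_store(&tail, t + 1);                /* 2. atomic commit: move tail  */
}

int main(void)
{
    log_append((struct entry){ .off = 0, .len = 4, .data = 42 });
    log_append((struct entry){ .off = 4, .len = 4, .data = 43 });

    /* Recovery after a crash replays exactly the committed prefix. */
    for (int i = 0; i < atomic_load(&tail); i++)
        printf("entry %d: off=%d data=%d\n", i, log_[i].off, log_[i].data);
    return 0;
}
```

Because a torn append leaves the tail untouched, a crash can lose at most the uncommitted entry, never corrupt the committed prefix; this is the atomicity property the comparison tables in the next entry revolve around.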
  • Oracle® Linux 7 Managing File Systems
    Oracle® Linux 7 Managing File Systems F32760-07 August 2021 Oracle Legal Notices Copyright © 2020, 2021, Oracle and/or its affiliates. This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs) and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial computer software" or "commercial computer software documentation" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the license contained in the applicable contract.
  • Filesystem Considerations for Embedded Devices ELC2015 03/25/15
    Filesystem considerations for embedded devices, ELC2015, 03/25/15. Tristan Lelong, Senior embedded software engineer.

    ABSTRACT: The goal of this presentation is to answer a question asked by several customers: which filesystem should you use within your embedded design's eMMC/SDCard? These storage devices use a standard block interface, compatible with traditional filesystems, but constraints are not those of desktop PC environments. EXT2/3/4, BTRFS, F2FS are the first of many solutions which come to mind, but how do they all compare? Typical queries include performance, longevity, tools availability, support, and power loss robustness. This presentation will not dive into implementation details but will instead summarize provided answers with the help of various figures and meaningful test results.

    TABLE OF CONTENTS: 1. Introduction; 2. Block devices; 3. Available filesystems; 4. Performances; 5. Tools; 6. Reliability; 7. Conclusion

    ABOUT THE AUTHOR: Tristan Lelong, embedded software engineer @ Adeneo Embedded. French, living in the Pacific northwest. Embedded software, free software, and Linux kernel enthusiast.

    INTRODUCTION: More and more embedded designs rely on smart memory chips rather than bare NAND or NOR. This presentation will start by describing some context to help understand the differences between NAND and MMC, some typical requirements found in embedded device designs, and potential filesystems to use on MMC devices. Focus will then move to block filesystems: how they are supported and what features they advertise. To help understand how they compare, we will present some benchmarks and comparisons regarding tools, reliability, and performances.

    BLOCK DEVICES. MMC, eMMC, SD card vocabulary: MMC (MultiMediaCard) is a memory card unveiled in 1997 by SanDisk and Siemens based on NAND flash memory.
  • NOVA: The Fastest File System for NVDIMMs
    NOVA: The Fastest File System for NVDIMMs. Steven Swanson, UC San Diego. (© 2017 SNIA Persistent Memory Summit. All Rights Reserved.)

    Disk-based file systems (XFS, F2FS, NILFS, EXT4, BTRFS) are inadequate for NVMM: they cannot exploit NVMM performance, and performance optimization compromises consistency on system failure [1]. [Slide table: atomicity guarantees for 1-sector, 1-block, and N-block overwrites and appends across ext4 in writeback, ordered, and data-journal modes, btrfs, xfs, and reiserfs.] [1] Pillai et al., All File Systems Are Not Created Equal: On the Complexity of Crafting Crash-Consistent Applications, OSDI '14.

    Previous prototype NVMM file systems (BPFS, SCMFS, PMFS, Aerie, EXT4-DAX, XFS-DAX, M1FS) are not strongly consistent, and DAX does not provide a data atomicity guarantee, so programming is more difficult. [Slide table: metadata atomicity, data atomicity, atomic mmap, and snapshot support for BPFS, PMFS, Ext4-DAX, Xfs-DAX, SCMFS, and Aerie.]

    Ext4-DAX and xfs-DAX shortcomings: no data atomicity support; a single journal shared by all transactions (JBD2-based); poor performance; development teams are (rightly) "disk first".

    NOVA provides strong atomicity guarantees. [Slide table: NOVA supports atomic overwrites and appends at all granularities, plus metadata, data, and mmap atomicity, where the file systems above fall short.]
  • Ted Ts'o on Linux File Systems
    Ted Ts'o on Linux File Systems: An Interview. Rik Farrow (Rik Farrow is the Editor of ;login:. [email protected])

    [Bio] Theodore Ts'o is the first North American Linux Kernel Developer, having started working with Linux in September 1991. He also served as the tech lead for the MIT Kerberos V5 development team, and was the architect at IBM in charge of bringing real-time Linux in support of real-time Java to the US Navy.

    I ran into Ted Ts'o during a tutorial luncheon at LISA '12, and that later sparked an email discussion. I started by asking Ted questions that had puzzled me about the early history of ext2, having to do with the performance of ext2 compared to the BSD Fast File System (FFS). I had met Rob Kolstad, then president of BSDi, because of my interest in the AT&T lawsuit against the University of California and BSDi. BSDi was being sued for, among other things, having a phone number that could be spelled 800-ITS-UNIX. I thought that it was important for the future of open source operating systems that AT&T lose that lawsuit.

    That said, when I compared the performance of early versions of Linux to the current version of BSDi, I found that they were closely matched, with one glaring exception. Unpacking tar archives using Linux (likely .9) was blazingly fast compared to BSDi. I asked Rob, and he explained that the issue had to do with synchronous writes, finally clearing up a mystery for me. Now I had a chance to ask Ted about the story from the Linux side, as well as other questions.
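The synchronous-write effect behind that anecdote is easy to reproduce today: create a batch of small files with an fsync() after each one (roughly the cost profile of synchronous metadata updates) and again without, and compare wall-clock times. A rough sketch:

```c
/* Small experiment behind the tar-unpacking anecdote: creating many
 * files with a synchronous flush after each (roughly what synchronous
 * metadata writes cost BSD FFS) versus letting the kernel write back
 * lazily (as early ext2 did). Time both runs and compare. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int sync_each = (argc > 1);               /* any argument => fsync per file */
    char name[64];

    for (int i = 0; i < 1000; i++) {
        snprintf(name, sizeof name, "f%04d", i);
        int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "x", 1);
        if (sync_each)
            fsync(fd);                        /* force data+metadata out now */
        close(fd);
        unlink(name);
    }
    printf("done (%s)\n", sync_each ? "synchronous" : "asynchronous");
    return 0;
}
```

Running `time ./a.out` against `time ./a.out sync` on a spinning disk shows a gap of one to two orders of magnitude, which is the mystery Rob Kolstad cleared up.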
  • OSD-Btrfs, a Novel Lustre OSD Based on Btrfs
    2015/04/15. OSD-Btrfs, A Novel Lustre OSD based on Btrfs. Shuichi Ihara, Li Xi (DataDirect Networks, Inc). ©2015 DataDirect Networks. All Rights Reserved. ddn.com

    Why Btrfs for Lustre? Lustre today uses ldiskfs (ext4) or, more recently, ZFS as its backend file system. ldiskfs (ext4) is used widely and exhibits well-tuned performance and proven reliability, but its technical limitations are increasingly apparent: it is missing important novel features and lacks scalability, while offline fsck is extremely time-consuming. ZFS is an extremely feature-rich file system, but for use as a Lustre OSD further performance optimization is required, and licensing and potential legal issues have prevented a broader distribution. Btrfs shares many features and goals with ZFS, is becoming sufficiently stable for production usage, and is available through the major mainstream Linux distributions. Patchless server support; simple Lustre setup and easy maintenance.

    Attractive Btrfs features for Lustre (1) - features that improve the OSD internally: Integrated multiple device support (RAID 0/1/10/5/6): RAID 0 results in fewer OSTs per OSS and simplifies management, while improving the performance characteristics of a single OSD; RAID 1/10/5/6 improves reliability of OSDs. Dynamic inode allocation: no need to worry about formatting the MDT with the wrong option and ending up with an insufficient inode count. crc32c checksums on data and metadata: improves OSD reliability. Online filesystem check and very fast offline filesystem check (planned): drives are getting bigger, but not faster, which leads to increasing fsck times.
  • Storage Administration Guide, SUSE Linux Enterprise Server 15
    SUSE Linux Enterprise Server 15 Storage Administration Guide. Provides information about how to manage storage devices on a SUSE Linux Enterprise Server. Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com

    Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see https://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

    Contents: About This Guide (Available Documentation; Giving Feedback; Documentation Conventions; Product Life Cycle and Support: Support Statement for SUSE Linux Enterprise Server, Technology Previews); Part I: File Systems and Mounting; 1. Overview of File Systems
  • PetaLinux Tools Documentation: Reference Guide
    See all versions of this document. PetaLinux Tools Documentation: Reference Guide. UG1144 (v2021.1), June 16, 2021.

    Revision History. 06/16/2021, Version 2021.1: Chapter 7 (Customizing the Project): added a new section, Configuring UBIFS Boot. Chapter 5 (Booting and Packaging): updated Steps to Boot a PetaLinux Image on Hardware with SD Card. Appendix A (Migration): added FPGA Manager Changes, Yocto Recipe Name Changes, Host GCC Version Upgrade. Chapter 10 (Advanced Configurations): updated U-Boot Configuration and Image Packaging Configuration.

    Table of Contents: Revision History; Chapter 1: Overview (Introduction; Navigating Content by Design Process); Chapter 2: Setting Up Your Environment (Installation Steps; PetaLinux Working Environment Setup)