File Systems Unfit as Distributed Storage Back Ends: Lessons from 10 Years of Ceph Evolution

Abutalib Aghayev, Sage Weil, Michael Kuchnik, Mark Nelson, Greg Ganger, and George Amvrosiadis

About the Authors

Abutalib Aghayev is a PhD student in the Computer Science Department at Carnegie Mellon University. He has broad research interests in computer systems, including storage and file systems, distributed systems, and operating systems.

Sage Weil is the Lead Architect and co-creator of the Ceph open source distributed storage system. Ceph was created to provide a stable, next-generation distributed storage system for Linux. Inktank was co-founded by Sage in 2012 to support enterprise Ceph users and was then acquired by Red Hat in 2014. Today Sage continues to lead the Ceph developer community and to help shape Red Hat's overall storage strategy.

Michael Kuchnik is a PhD student in the Computer Science Department at Carnegie Mellon University and a member of the Parallel Data Lab. His research interests are in the design and analysis of computer systems, specifically those involving storage, high-performance computing, or machine learning. Before coming to CMU, he earned his BS in computer engineering from the Georgia Institute of Technology.

Mark Nelson joined the Ceph team in January 2012 and has 12 years of experience in distributed systems, HPC, and bioinformatics. Mark works on Ceph performance analysis and is the primary author of the Ceph Benchmarking Toolkit. He runs the weekly Ceph performance meeting and is currently focused on research and development of Ceph's next-generation object store.

Greg Ganger is the Jatras Professor of Electrical and Computer Engineering at Carnegie Mellon University and Director of the Parallel Data Lab (www.pdl.cmu.edu). He has broad research interests, with current projects exploring system support for large-scale ML (Big Learning), resource management in cloud computing, and software systems for heterogeneous storage clusters, HPC storage, and NVM. His PhD in CS&E is from the University of Michigan.

George Amvrosiadis is an Assistant Research Professor of Electrical and Computer Engineering at Carnegie Mellon University and a member of the Parallel Data Lab. His current research focuses on distributed and cloud storage, new storage technologies, high-performance computing, and storage for machine learning. His team's research has received an R&D 100 Award and was featured on WIRED, The Morning Paper, and Hacker News. He co-teaches two graduate courses, Storage Systems and Advanced Cloud Computing, attended by 100+ graduate students each.

For a decade, the Ceph distributed file system followed the conventional wisdom of building its storage back end on top of local file systems. Experience with several file systems showed that this approach always leaves significant performance on the table while incurring substantial accidental complexity [2]. Therefore, the Ceph team embarked on an ambitious project to build BlueStore, a new back end designed to run directly on raw storage devices. Somewhat surprisingly, BlueStore matured in less than two years: it outperformed back ends built atop file systems and was adopted by 70% of users in production.

Figure 1 shows the high-level architecture of Ceph. At the core of Ceph is the Reliable Autonomic Distributed Object Store (RADOS) service. RADOS scales to thousands of Object Storage Devices (OSDs), providing self-healing, self-managing, replicated object storage with strong consistency. Ceph's librados library provides a transactional interface for manipulating objects and object collections in RADOS. Out of the box, Ceph provides three services implemented using librados: the RADOS Gateway (RGW), an object store similar to Amazon S3; the RADOS Block Device (RBD), a virtual block device similar to Amazon EBS; and CephFS, a distributed file system with POSIX semantics.

Figure 1: High-level depiction of Ceph's architecture. A single pool with 3× replication is shown; therefore, each placement group (PG) is replicated on three OSDs.

Objects in RADOS are stored in logical partitions called pools. Pools can be configured to provide redundancy for the contained objects either through replication or erasure coding. Within a pool, objects are sharded among aggregation units called placement groups (PGs). Depending on the replication factor, PGs are mapped to multiple OSDs using CRUSH, a pseudo-random data distribution algorithm. Clients also use CRUSH to determine which OSD should contain a given object, obviating the need for a centralized metadata service. PGs and CRUSH form an indirection layer between clients and OSDs that allows objects to be migrated between OSDs in response to cluster or workload changes.

On every node of a RADOS cluster there is a separate Ceph OSD daemon per local storage device. Each OSD processes I/O requests from librados clients and cooperates with peer OSDs to replicate or erasure-code updates, migrate data, or recover from failures. Data is persisted to the local device via the internal ObjectStore interface, which is the storage back-end interface in Ceph. ObjectStore provides abstractions for objects, object collections, a set of primitives to inspect data, and transactions to update data. A transaction combines an arbitrary number of primitives operating on objects and object collections into an atomic operation.
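To make the transaction abstraction concrete, the sketch below models an ObjectStore-style transaction in C++. It is a deliberately simplified, self-contained illustration: the types and method names (Store, Collection, Transaction, create_collection, and so on) are invented for this example and are not Ceph's actual ObjectStore classes, and a real back end persists the result durably rather than mutating an in-memory map.

    // Simplified model of an ObjectStore-style transaction: primitive
    // operations on objects and collections are queued and then applied
    // as a single atomic unit. Toy types; not Ceph's actual classes.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct Object {
        std::string data;                          // object payload
        std::map<std::string, std::string> attrs;  // object attributes
    };

    // A collection maps object names to objects; the store maps
    // collection names to collections.
    using Collection = std::map<std::string, Object>;
    using Store = std::map<std::string, Collection>;

    class Transaction {
        std::vector<std::function<void(Store&)>> ops_;  // queued primitives
    public:
        void create_collection(const std::string& cid) {
            ops_.push_back([cid](Store& s) { s.emplace(cid, Collection{}); });
        }
        void write(const std::string& cid, const std::string& oid,
                   const std::string& bytes) {
            ops_.push_back([cid, oid, bytes](Store& s) {
                s[cid][oid].data += bytes;
            });
        }
        void setattr(const std::string& cid, const std::string& oid,
                     const std::string& key, const std::string& val) {
            ops_.push_back([cid, oid, key, val](Store& s) {
                s[cid][oid].attrs[key] = val;
            });
        }
        // Apply all queued primitives to a scratch copy, then install the
        // result in one step, so readers never see a partial transaction.
        void commit(Store& store) const {
            Store scratch = store;
            for (const auto& op : ops_) op(scratch);
            store = std::move(scratch);
        }
    };

    int main() {
        Store store;
        Transaction t;
        t.create_collection("pg_1.0");
        t.write("pg_1.0", "obj_A", "hello");
        t.setattr("pg_1.0", "obj_A", "version", "1");
        t.commit(store);  // all three primitives become visible atomically
        std::cout << store["pg_1.0"]["obj_A"].data << "\n";
    }

The point is only the shape of the interface: callers queue primitives against objects and collections and then hand the whole batch to the back end, which must apply it atomically.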
The FileStore storage back end is an ObjectStore implementation on top of a local file system. In FileStore, an object collection is mapped to a directory and object data is stored in a file. Over the years, FileStore was ported to run on top of Btrfs, XFS, ext4, and ZFS, with FileStore on XFS becoming the de facto back end because it scaled better and had faster metadata performance [7].

BlueStore: A Clean-Slate Approach

The BlueStore storage back end is a new implementation of ObjectStore, designed from scratch to run on raw block devices and to solve the challenges [2] faced by FileStore. The main goals of BlueStore were:

1. Fast metadata operations
2. No consistency overhead for object writes
3. Copy-on-write clone operation
4. No journaling double-writes
5. Optimized I/O patterns for HDD and SSD

BlueStore achieved all of these goals within just two years and became the default storage back end in Ceph. Two factors explain why BlueStore matured so quickly when general-purpose POSIX file systems typically take a decade to mature. First, BlueStore implements a small, special-purpose interface rather than a complete POSIX I/O specification. Second, BlueStore is implemented in userspace, which allows it to leverage well-tested and high-performance third-party libraries. In addition, BlueStore's control of the I/O stack enables features that were previously out of reach (see "Features Enabled by BlueStore," below).

The high-level architecture of BlueStore is shown in Figure 2. A space allocator within BlueStore determines the location of new data, which is asynchronously written to the raw disk using direct I/O. Internal metadata and user object metadata are stored in RocksDB. The BlueStore space allocator and BlueFS share the disk and periodically communicate to balance free space. The remainder of this section describes metadata and data management in BlueStore.

Figure 2: The high-level architecture of BlueStore. Data is written to the raw storage device using direct I/O. Metadata is written to RocksDB running on top of BlueFS, a userspace library file system designed for RocksDB that also runs on top of the raw storage device.

BlueFS and RocksDB

BlueStore achieves its first goal, fast metadata operations, by storing metadata in RocksDB.
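As a rough illustration of what storing metadata in a key-value store buys, the sketch below uses RocksDB's standard C++ API: several related metadata mutations go into one WriteBatch and are committed atomically with a single write-ahead-log commit, and because keys are sorted, enumerating a collection's objects becomes a prefix scan. The key layout ("O/...", "A/...") and record contents are invented for the example; BlueStore's real key encoding is more elaborate.

    // Sketch: committing several metadata updates atomically with RocksDB.
    #include <cassert>
    #include <iostream>
    #include <memory>
    #include <rocksdb/db.h>
    #include <rocksdb/write_batch.h>

    int main() {
        rocksdb::Options options;
        options.create_if_missing = true;
        rocksdb::DB* db = nullptr;
        rocksdb::Status s =
            rocksdb::DB::Open(options, "/tmp/metadata-sketch", &db);
        assert(s.ok());

        // A batch groups multiple metadata mutations (here, a per-object
        // metadata record and an attribute) into one atomic, durable commit.
        rocksdb::WriteBatch batch;
        batch.Put("O/pg_1.0/obj_A", "size=4096, extents=[0x10000+0x1000]");
        batch.Put("A/pg_1.0/obj_A/version", "1");

        rocksdb::WriteOptions wopts;
        wopts.sync = true;  // one log commit covers the whole batch
        s = db->Write(wopts, &batch);
        assert(s.ok());

        // Keys are sorted, so listing a collection's objects is a prefix scan.
        std::unique_ptr<rocksdb::Iterator> it(
            db->NewIterator(rocksdb::ReadOptions()));
        for (it->Seek("O/pg_1.0/");
             it->Valid() && it->key().starts_with("O/pg_1.0/"); it->Next()) {
            std::cout << it->key().ToString() << "\n";
        }
        delete db;
    }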
BlueStore achieves its second goal, no consistency overhead for object writes, with two changes. First, it writes data directly to the raw disk, resulting in one cache flush [10] per data write, as opposed to the two cache flushes needed when writing data to a file on top of a journaling file system. Second, it changes RocksDB to reuse write-ahead log files as a circular buffer, resulting in one cache flush per metadata write, a change that was upstreamed to mainline RocksDB.

RocksDB itself runs on BlueFS, a minimal file system designed specifically for RocksDB that runs on a raw storage device. RocksDB abstracts out its requirements from the underlying file system in the Env interface. BlueFS is an implementation of this interface in the form of a userspace, extent-based, journaling file system. It implements the basic calls required by RocksDB, such as open, mkdir, and pwrite. BlueFS maintains an inode for each file that includes the list of extents allocated to the file. The superblock is stored at a fixed offset and contains an inode for the journal. The journal holds the only copy of all file-system metadata, which is loaded into memory at mount time. On every metadata operation, such as directory creation, file creation, and extent allocation, the journal and the in-memory metadata are updated.

For writes smaller than the minimum allocation size, both data and metadata are first inserted into RocksDB as promises of future I/O and then asynchronously written to disk after the transaction commits. This deferred write mechanism has two purposes. First, it batches small writes to increase efficiency, because new data writes require two I/O operations whereas an insert to RocksDB requires one. Second, it optimizes I/O
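The following sketch shows the shape of that deferred-write decision: writes below a minimum allocation size are stashed in the key-value store alongside the metadata update and replayed to disk later, while larger writes go straight to freshly allocated space and only their metadata is committed. The threshold value, names, and structures are invented for illustration and are not BlueStore's actual code.

    // Sketch of the deferred-write decision for small writes.
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    constexpr uint64_t kMinAllocSize = 64 * 1024;  // e.g., 64 KiB on HDD

    struct KvRecord { std::string key, value; };

    // Stand-in for the metadata and deferred-data records that would be
    // committed to RocksDB in one batch.
    struct Transaction {
        std::vector<KvRecord> kv;   // committed in one KV write
        bool has_deferred = false;  // data to replay to disk later
    };

    void queue_write(Transaction& t, const std::string& object,
                     uint64_t offset, const std::string& data) {
        if (data.size() < kMinAllocSize) {
            // Small write: stash the bytes in the KV store alongside the
            // metadata update (one I/O now); replay to disk asynchronously
            // after the transaction commits.
            t.kv.push_back({"D/" + object + "/" + std::to_string(offset), data});
            t.has_deferred = true;
        } else {
            // Large write: write the data to freshly allocated space with
            // direct I/O, then commit only the metadata to the KV store.
            // (Device I/O omitted in this sketch.)
            t.kv.push_back({"M/" + object, "extent@" + std::to_string(offset)});
        }
    }

    int main() {
        Transaction t;
        queue_write(t, "obj_A", 0, std::string(4096, 'x'));     // deferred
        queue_write(t, "obj_B", 0, std::string(1 << 20, 'y'));  // direct
        std::cout << "kv records: " << t.kv.size()
                  << ", deferred: " << std::boolalpha << t.has_deferred << "\n";
    }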

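Returning to the Env interface described above: RocksDB routes its file-system access through rocksdb::Env, and that is the hook BlueFS slots into. The sketch below only demonstrates the plug-in point, using a wrapper around the default POSIX Env that logs directory creation; it is an assumption-laden toy, whereas BlueFS implements the interface fully on top of a raw block device.

    // Sketch: plugging a custom Env underneath RocksDB, the hook BlueFS
    // uses. This toy Env merely wraps the default POSIX Env and logs
    // directory creation; it is not BlueFS.
    #include <cassert>
    #include <iostream>
    #include <rocksdb/db.h>
    #include <rocksdb/env.h>

    class LoggingEnv : public rocksdb::EnvWrapper {
    public:
        explicit LoggingEnv(rocksdb::Env* base) : rocksdb::EnvWrapper(base) {}
        rocksdb::Status CreateDir(const std::string& dirname) override {
            std::cout << "Env::CreateDir(" << dirname << ")\n";
            return rocksdb::EnvWrapper::CreateDir(dirname);
        }
    };

    int main() {
        LoggingEnv env(rocksdb::Env::Default());
        rocksdb::Options options;
        options.create_if_missing = true;
        options.env = &env;  // all file I/O now goes through our Env

        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/env-sketch", &db);
        assert(s.ok());
        delete db;
    }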