Reducing File System Tail Latencies with Chopper

Jun He, Duy Nguyen†, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau
Department of Computer Sciences, †Department of Statistics
University of Wisconsin–Madison

Abstract

We present Chopper, a tool that efficiently explores the vast input space of file system policies to find behaviors that lead to costly performance problems. We focus specifically on block allocation, as unexpected poor layouts can lead to high tail latencies. Our approach utilizes sophisticated statistical methodologies, based on Latin Hypercube Sampling (LHS) and sensitivity analysis, to explore the search space efficiently and diagnose intricate design problems. We apply Chopper to study the overall behavior of two file systems, and to study Linux ext4 in depth. We identify four internal design issues in the block allocator of ext4 which form a large tail in the distribution of layout quality. By removing the underlying problems in the code, we cut the size of the tail by an order of magnitude, producing consistent and satisfactory file layouts that reduce data access latencies.

1 Introduction

As the distributed systems that power the cloud have matured, a new performance focus has come into play: tail latency. As Dean and Barroso describe, long tails can dramatically harm interactive performance and thus limit the applications that can be effectively deployed at scale in modern cloud-based services [17]. As a result, a great deal of recent research effort has attacked tail latency directly [6, 49, 51]; for example, Alizadeh et al. show how to reduce the network latency of the 99th percentile by a factor of ten through a combination of novel techniques [6].

The fundamental reason that reducing such tail latency is challenging is that rare, corner-case behaviors, which have little impact on a single system, can dominate when running a system at scale [17]. Thus, while the well-tested and frequently-exercised portions of a system perform well, the unusual behaviors that are readily ignored on one machine become the common case upon one thousand (or more) machines.

To build the next generation of robust, predictably performing systems, we need an approach that can readily discover corner-case behaviors, thus enabling a developer to find and fix intrinsic tail-latency problems before deployment. Unfortunately, finding unusual behavior is hard: just like exploring an infinite state space for correctness bugs remains an issue for today's model checkers [10, 19], discovering the poorly-performing, tail-influencing behaviors presents a significant challenge.

One critical contributor to tail latency is the local file system [8]. Found at the heart of most distributed file systems [20, 47], local file systems such as Linux ext4, XFS, and btrfs serve as the building block for modern scalable storage. Thus, if the rare-case performance of the local file system is poor, the performance of the distributed file system built on top of it will suffer.

In this paper, we present Chopper, a tool that enables developers to discover (and subsequently repair) high-latency operations within local file systems. Chopper currently focuses on a critical contributor to unusual behavior in modern systems: block allocation, which can reduce file system performance by one or more orders of magnitude on both hard disk and solid state drives [1, 11, 13, 30, 36]. With Chopper, we show how to find such poor behaviors, and then how to fix them (usually through simple file-system repairs).

The key and most novel aspect of Chopper is its use of advanced statistical techniques to search and investigate an infinite performance space systematically. Specifically, we use Latin hypercube sampling [29] and sensitivity analysis [40], which have been proven efficient in the investigation of many-factor systems in other applications [24, 31, 39]. We show how to apply such advanced techniques to the domain of file-system performance analysis, and in doing so make finding tail behavior tractable.
We use Chopper to analyze the allocation performance of Linux ext4 and XFS, and then delve into a detailed analysis of ext4, as its behavior is more complex and varied. We find four subtle flaws in ext4, including behaviors that spread sequentially-written files over the entire disk volume, greatly increasing fragmentation and inducing large latency when the data is later accessed. We also show how simple fixes can remedy these problems, resulting in an order-of-magnitude improvement in the tail layout quality of the block allocator. Chopper and the ext4 patches are publicly available at: research.cs.wisc.edu/adsl/Software/chopper

The rest of the paper is organized as follows. Section 2 introduces the experimental methodology and implementation of Chopper. In Section 3, we evaluate ext4 and XFS as black boxes, and then go further to explore ext4 as a white box, since ext4 has a much larger tail than XFS. We present detailed analysis and fixes for internal allocator design issues of ext4. Section 4 introduces related work. Section 5 concludes this paper.

2 Diagnosis Methodology

We now describe our methodology for discovering interesting tail behaviors in file system performance, particularly as related to block allocation. The file system input space is vast, and thus cannot be explored exhaustively; we thus treat each file system experiment as a simulation, and apply a sophisticated sampling technique to ensure that the large input space is explored carefully.

In this section, we first describe our general experimental approach, the inputs we use, and the output metric of choice. We conclude by presenting our implementation.

2.1 Experimental Framework

The Monte Carlo method is a process of exploring simulation by obtaining numeric results through repeated random sampling of inputs [38, 40, 43]. Here, we treat the file system itself as a simulator, thus placing it into the Monte Carlo framework. Each run of the file system, given a set of inputs, produces a single output, and we use this framework to explore the file system as a black box.

Each input factor $X_i$ ($i = 1, 2, \ldots, K$) (described further in Section 2.2) is estimated to follow a distribution. For example, if small files are of particular interest, one can utilize a distribution that skews toward small file sizes. In the experiments of this paper, we use a uniform distribution for fair searching. For each factor $X_i$, we draw a sample from its distribution and get a vector of values $(X_i^1, X_i^2, X_i^3, \ldots, X_i^N)$. Collecting the samples of all the factors, we obtain a matrix $M$:

$$
M = \begin{bmatrix}
X_1^1 & X_2^1 & \cdots & X_K^1 \\
X_1^2 & X_2^2 & \cdots & X_K^2 \\
\vdots & \vdots & \ddots & \vdots \\
X_1^N & X_2^N & \cdots & X_K^N
\end{bmatrix}
\qquad
Y = \begin{bmatrix} Y^1 \\ Y^2 \\ \vdots \\ Y^N \end{bmatrix}
$$

Each row in $M$, i.e., a treatment, is a vector to be used as the input of one run, which produces one row in the vector $Y$. In our experiment, $M$ consists of columns such as the size of the file system and how much of it is currently in use. $Y$ is a vector of the output metric; as described below, we use a metric, called d-span, that captures how much a file is spread out over the disk. $M$ and $Y$ are used for exploratory data analysis.
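The following is a minimal sketch of this black-box framework in Python. The factor names, their ranges, and the run_filesystem() driver are hypothetical placeholders rather than Chopper's actual implementation, and it assumes d-span is the distance between the lowest and highest physical block allocated to a file, consistent with the description above.

```python
# A minimal sketch of the Monte Carlo framework described above.
# FACTORS, their ranges, and run_filesystem() are hypothetical
# placeholders; Chopper's real factors and driver differ.
import random

FACTORS = {                       # uniform range for each input factor X_i
    "fs_size_bytes": (1 << 30, 16 << 30),   # size of the file system
    "used_ratio":    (0.0, 0.9),            # fraction currently in use
    "file_size":     (4096, 1 << 28),       # size of the file to write
}
N = 16384                         # number of runs (treatments)

def d_span(extents):
    """Distance between a file's first and last physical blocks.

    `extents` is a list of (start_block, n_blocks) pairs; a larger
    d-span means the file is spread more widely over the disk.
    """
    starts = [s for s, _ in extents]
    ends   = [s + n for s, n in extents]
    return max(ends) - min(starts)

M, Y = [], []
for _ in range(N):
    # one row of M: draw each factor from its (uniform) distribution
    treatment = {f: random.uniform(lo, hi) for f, (lo, hi) in FACTORS.items()}
    extents = run_filesystem(treatment)   # hypothetical driver: format,
                                          # mount, run workload, read layout
    M.append(treatment)                   # one treatment in M ...
    Y.append(d_span(extents))             # ... one response in Y
```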
Exhaustively exploring this input space is infeasible: with the factors in Table 1 (introduced in detail later), there are about $8 \times 10^9$ combinations to explore. With an overly optimistic speed of one treatment per second, it would still take 250 compute-years to finish just one such exploration.

Latin Hypercube Sampling (LHS) is a sampling method that efficiently explores many-factor systems with a large input space and helps discover surprising behaviors [25, 29, 40]. A Latin hypercube is a generalization of a Latin square, which is a square grid with only one sample point in each row and each column, to an arbitrary number of dimensions [12]. LHS is very effective in examining the influence of each factor when the number of runs in the experiment is much larger than the number of factors. It aids visual analysis as it exercises the system over the entire range of each input factor and ensures that all levels of it are explored evenly [38]. LHS can effectively discover which factors and which combinations of factors have a large influence on the response. A poor sampling method, such as a completely random one, could leave input points clustered in the input space, leaving large unexplored gaps in between [38]. Our experimental plan, based on LHS, contains 16384 runs, large enough to discover subtle behaviors but not so large as to require an impractical amount of time; a small sketch of the LHS construction appears at the end of this section.

2.2 Factors to Explore

File systems are complex. It is virtually impossible to study all possible factors influencing performance. For example, the various file system formatting and mounting options alone yield a large number of combinations. In addition, the run-time environment is complex; for example, file system data is often buffered in OS page caches in memory, and differences in memory size can dramatically change file system behavior.

In this study, we choose to focus on a subset of factors that we believe are most relevant to allocation behavior. As we will see, these factors are broad enough to discover interesting performance oddities; they are also not so broad as to make a thorough exploration intractable. There are three categories of input factors in Chopper.
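To make the LHS construction referenced above concrete, here is a minimal sketch of the textbook algorithm on the unit hypercube (values would then be scaled to each factor's actual range); it is not Chopper's exact implementation. Each factor's range is divided into N equal strata, exactly one point is drawn per stratum, and the strata are shuffled independently per factor.

```python
# A minimal sketch of Latin hypercube sampling on the unit hypercube;
# this is the textbook construction, not Chopper's exact implementation.
import random

def latin_hypercube(n_runs, n_factors):
    columns = []
    for _ in range(n_factors):
        # exactly one sample in each of the n_runs equal strata of [0, 1)
        col = [(i + random.random()) / n_runs for i in range(n_runs)]
        random.shuffle(col)       # assign strata to runs in random order
        columns.append(col)
    # transpose: row i is treatment i, with one value per factor
    return list(zip(*columns))

plan = latin_hypercube(16384, 3)  # e.g., a 16384-run plan over 3 factors
```

Because every stratum of every factor is sampled exactly once, no region of any factor's range is left unexplored, in contrast to purely random sampling, which can cluster points and leave gaps.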