ParaTrac: A Fine-Grained Profiler for Data-Intensive Workflows


Nan Dun (Department of Computer Science), Kenjiro Taura (Department of Information and Communication Engineering), Akinori Yonezawa (Department of Computer Science)
The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-5686 Japan
[email protected]

ABSTRACT

The realistic characteristics of data-intensive workflows are critical to optimal workflow orchestration, and profiling is an effective approach to investigating the behavior of such complex applications. ParaTrac is a fine-grained profiler for data-intensive workflows built on user-level file system and process tracing techniques. First, ParaTrac enables users to quickly understand I/O characteristics, from the entire application down to specific processes or files, by examining low-level I/O profiles. Second, ParaTrac automatically exploits fine-grained data-process interactions in the workflow to help users intuitively and quantitatively investigate the realistic execution of data-intensive workflows. Experiments on thoroughly profiling the Montage workflow demonstrate both the scalability and the effectiveness of ParaTrac: the overhead of tracing thousands of processes is around 16%. We use low-level I/O profiles and informative workflow DAGs to illustrate the advantage of fine-grained profiling in helping users comprehensively understand application behavior and refine the scheduling of complex workflows. Our study also suggests that current workflow management systems may use fine-grained profiles to provide more flexible control for optimal workflow execution.

Categories and Subject Descriptors: D.2.5 [Software]: Testing and Debugging (tracing); D.4.8 [Software]: Performance (modeling and prediction)

General Terms: Performance

Keywords: Profiling, workflow, tracing

1. INTRODUCTION

With the advance of high-performance distributed computing, users are able to execute various data-intensive applications by harnessing massive computing resources [1]. Though workflow management systems have been developed to alleviate the difficulties of planning, scheduling, and executing complex workflows in distributed environments [2-5], optimal workflow management still remains a challenge because of the complexity of applications. Therefore, one important and practical demand is to understand and characterize data-intensive applications so that workflow management systems (WMS) can refine their orchestration of workflow execution. Research has been conducted to elaborate the characterization of a wide variety of scientific workflows using synthetic approaches [6, 7]. However, these methods require application knowledge from domain researchers and do not include runtime information extracted from realistic executions. Thus, it would be helpful for workflow researchers if they could acquire the workflow essentials simply by running their own original applications.

Profiling is an effective dynamic-analysis approach for investigating complex applications in practice. Using user-level file system and process tracing techniques, we designed and implemented ParaTrac, a fine-grained profiler for data-intensive workflows. ParaTrac allows users to trace unmodified applications and to investigate them from different aspects through automatically generated workflow profiles.
First, ParaTrac produces valuable low-level I/O profiles that allow users to quickly understand I/O characteristics from the entire application down to specific processes or files. In addition, ParaTrac is novel in that it can exploit fine-grained data-process interactions in realistic executions to help users intuitively and quantitatively investigate data-intensive workflows. ParaTrac automatically generates workflow DAGs annotated with runtime information, and the characteristics of a workflow can be explored by applying graph manipulations and graph-theoretic algorithms to the corresponding DAGs. Besides, users are allowed to customize those profiles, or create new ones, by querying the raw trace logs stored in an SQL database, as the sketch below illustrates.
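As an illustration only, the following minimal sketch shows what such a query against the trace logs might look like, assuming they live in an SQLite file with a hypothetical syscall(pid, op, file, bytes) table; this schema is an assumption for the example, not ParaTrac's actual one. It totals the bytes read per file, largest first:

/* Sketch: aggregate per-file read volume from a hypothetical trace table.
 * Assumes an SQLite trace log with a table
 *   syscall(pid INTEGER, op TEXT, file TEXT, bytes INTEGER)
 * -- an illustrative schema, not ParaTrac's actual one.
 * Build with: cc query_trace.c -lsqlite3 */
#include <sqlite3.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    const char *sql =
        "SELECT file, SUM(bytes) FROM syscall "
        "WHERE op = 'read' GROUP BY file ORDER BY SUM(bytes) DESC;";

    if (sqlite3_open(argc > 1 ? argv[1] : "trace.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) {
        fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }
    /* Print one line per file: path and total bytes read from it. */
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%s\t%lld\n",
               (const char *)sqlite3_column_text(stmt, 0),
               (long long)sqlite3_column_int64(stmt, 1));

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

A one-line SQL change (e.g., grouping by pid instead of file) would turn the same trace data into a per-process profile, which is the kind of flexibility the raw logs are meant to offer.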
The significance of this work is twofold. First, ParaTrac provides an effortless way to extract and generate informative and comprehensive workflow profiles from fine-grained profiling data, which enables users to intuitively and quantitatively analyze, debug, and refine their own workflows. Second, fine-grained profiling implies potential support for fine-grained and realistic scheduling of workflows. Therefore, fine-grained profiling with ParaTrac suggests not only the advantage of a more accurate study of workflows, but also the feasibility of enabling more flexible workflow controls in future workflow management systems.

2. RELATED WORK

File system tracing, at either kernel level or user level, is widely used for file system performance and application I/O analysis [8-11]. Though kernel-level tracing provides a deeper probe of file system status with less overhead, user-level tracing demonstrates more advantages in practice. First, user-level approaches can be used by non-privileged users. Second, user-level interfaces (e.g., the FUSE APIs [12]) are more portable than kernel-level interfaces on most architectures. For example, DFSTrace [8] and Tracefs [10] use kernel trace modules that have to be modified when the kernel changes. Third, kernel approaches target the activities of the whole file system, and thus introduce overhead to the whole file system. Different from DFSTrace and Tracefs, ParaTrac is designed for tracing individual user applications and is non-obtrusive to other users of a time-sharing system. Additionally, ParaTrac logs both file system calls and process runtimes to generate various workflow profiles, which distinguishes it from other user-level file system tracers, such as LoggedFS [11].

Understanding the I/O characteristics of workloads can benefit many areas of data-intensive computing practice [13-16]. Static analysis [14] can be used if knowledge of the application (e.g., source code, documentation, data structures and algorithms) is available. However, this approach requires non-trivial effort, and users may not even be able to acquire enough knowledge of the application (e.g., no source code, only binaries). Instead, the dynamic approach, generally trace-based, is more practical and effective, even for parallel applications [13, 15].

Data-job interactions in workflow execution are critical to the scheduling and management of data-intensive workflows [17-19]. Besides the problem of acquiring application knowledge, static analysis is unable to capture the runtime behavior of a workflow, especially under constraints encountered in dynamic environments, such as data congestion due to unexpected use of shared network links by other applications. In response to this problem, recent [...] processing power, or by striping data for transfer instead of passing the whole data. Besides, fine-grained scheduling has been demonstrated to improve execution accuracy in highly volatile environments [22].

As an abstraction of workflow execution, profiles can be used for head-to-head comparisons between different workflows for many purposes. For example, a public repository of scientific workflow benchmarks would contribute to the evaluation of workflow systems [6]. For this reason, the profile data generated by ParaTrac is designed to be portable and extensible so that it can be reused by other researchers.

3. PROFILING APPROACHES

3.1 User-Level Tracing Techniques

ParaTrac traces all file system calls invoked during workflow execution, as well as the files and processes that these file system calls refer to. Table 1 summarizes the runtime information logged by ParaTrac.

3.1.1 File System Call Tracing

FUSE (Filesystem in Userspace) [12] is a framework that lets users develop their own user-level file systems without touching the kernel. By implementing the FUSE APIs, users can map file system calls to specific userspace interfaces, and unmodified binaries can run on top of the corresponding FUSE file systems. For example, the read file system call in ParaTrac is implemented as shown in Figure 1, where the logging procedures appear in the lines marked with "+".

However, there are two limitations when using FUSE to trace file system calls. First, only those file system calls that access files or directories under the directory mounted by the FUSE program (i.e., the FUSE mount point) can be traced. Files outside the traced directory (e.g., shared libraries and header files) are outside the tracing coverage. Using chroot [23] is a straightforward workaround, but it usually requires superuser privileges; for non-privileged users, the fakechroot [24] and fakeroot [25] utilities can be used instead. Second, old FUSE implementations (versions before 2.7.x) [12] do not allow access to character devices (e.g., /dev/null) from user space. CUSE (Character Device in Userspace) [26], which comes with the latest FUSE release (2.8.0) [12], enables control of character devices in user space and thus solves this problem.
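Figure 1 itself is not reproduced in this copy. To make the mechanism concrete, the following is a minimal sketch of a pass-through read callback with logging hooks of the kind described above; the log_event helper, its record format, and the "+" comment markers are illustrative assumptions, not ParaTrac's actual code:

/* A minimal sketch, in the spirit of the paper's Figure 1, of tracing the
 * read file system call in a FUSE pass-through file system.  log_event()
 * is a hypothetical helper; logging lines carry a "+" comment, echoing
 * the figure's convention.  Build with: cc trace_fs.c `pkg-config fuse --cflags --libs` */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

/* + hypothetical logging helper: emit one record per traced call */
static void log_event(const char *op, const char *path, size_t size,
                      off_t offset, int res, pid_t pid,
                      const struct timeval *t0, const struct timeval *t1)
{
    double elapsed = (t1->tv_sec - t0->tv_sec)
                   + (t1->tv_usec - t0->tv_usec) / 1e6;
    fprintf(stderr, "%s pid=%d path=%s size=%zu off=%lld res=%d time=%.6f\n",
            op, (int)pid, path, size, (long long)offset, res, elapsed);
}

static int trace_read(const char *path, char *buf, size_t size,
                      off_t offset, struct fuse_file_info *fi)
{
    int res;
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);                       /* + timestamp: start */
    /* Forward the request to the real file; fi->fh is the descriptor
     * that the corresponding open callback stored earlier. */
    res = pread(fi->fh, buf, size, offset);
    if (res == -1)
        res = -errno;
    gettimeofday(&t1, NULL);                       /* + timestamp: end   */

    log_event("read", path, size, offset, res,     /* + record the call  */
              fuse_get_context()->pid, &t0, &t1);
    return res;
}

static struct fuse_operations trace_ops = {
    .read = trace_read,
    /* getattr, open, write, etc. omitted; a working tracer implements
     * them in the same pass-through style. */
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &trace_ops, NULL);
}

In a real tracer the records would go to a buffer or database rather than stderr, but the structure is the same: each callback forwards the operation to the underlying file system and logs who did what, to which file, and for how long.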