CARS: A Contention-Aware Scheduler for Efficient Resource Management of HPC Storage Systems

Parallel Computing 87 (2019) 25–34
Journal homepage: www.elsevier.com/locate/parco
https://doi.org/10.1016/j.parco.2019.04.010

Weihao Liang (a), Yong Chen (b, *), Jialin Liu (c), Hong An (a)

(a) Department of Computer Science, University of Science and Technology of China, Hefei, China
(b) Department of Computer Science, Texas Tech University, Lubbock, TX, USA
(c) Lawrence Berkeley National Laboratory, Berkeley, CA, USA

* Corresponding author.

Conflict of interest: the authors declared that they have no conflicts of interest related to this work.

Article history: Received 11 December 2018; Revised 10 March 2019; Accepted 24 April 2019; Available online 30 April 2019.

Keywords: Burst buffer; Resource management; Scheduling algorithm; I/O system; Modeling; I/O contention

Abstract

Many scientific applications are becoming more and more data intensive. As data volumes continue to grow, the data movement between storage and compute nodes has turned into a crucial performance bottleneck for many data-intensive applications. The burst buffer provides a promising solution for these applications by absorbing bursty I/O traffic. However, resource allocation and management strategies for burst buffers are not well studied, and the existing bandwidth-based strategies may cause severe I/O contention when a large number of I/O-intensive jobs access the burst buffer system concurrently. In this study, we present a contention-aware resource scheduling (CARS) strategy to manage burst buffer resources and coordinate concurrent data-intensive jobs. The experimental results show that the proposed CARS framework outperforms the existing allocation strategies and improves both job performance and system utilization. © 2019 Elsevier B.V. All rights reserved.

1. Introduction

More and more scientific applications in critical areas, such as climate science, astrophysics, combustion science, computational biology, and high-energy physics, tend to be highly data intensive and present significant data challenges to the research and development community [1,2]. Large amounts of data are generated from scientific experiments, observations, and simulations, and the size of these datasets can range from hundreds of gigabytes to hundreds of petabytes or even beyond. For instance, the FAST project (Five-hundred-meter Aperture Spherical Telescope) [3] uses the Next Generation Archive System (NGAS), developed in Australia, to store and maintain the large amount of data it collects. NGAS is expected to handle about 3 petabytes of data from FAST every year, enough to fill 12,000 single-layer, 250-gigabyte Blu-ray disks.
There are a number of reasons for this big data revolution: a rapid growth in computing capability (especially when compared with a much slower increase in I/O system bandwidth) has made data acquisition and generation much easier; high-resolution, multi-model scientific discovery requires and produces much more data; and the need to mine insights out of large amounts of low-entropy data has substantially increased over the years.

Meanwhile, with the increasing performance gap between computing and I/O capability, data access has become the performance bottleneck of many data-intensive applications, which usually contain a large number of I/O operations and write and read large amounts of data to and from the storage system. The newly emerged burst buffer [4] is a promising solution to this bottleneck. Given that the I/O patterns of many scientific applications are bursty [5], the burst buffer is designed as a layer of hierarchical storage, comprising high-speed storage media such as solid-state drives (SSDs), that lets applications write or read data quickly and return to the computation phase as soon as possible. The data temporarily stored in the burst buffer can then be transferred to the remote storage system asynchronously without interrupting the applications. In recent years, several peta-scale HPC systems have been deployed with a burst buffer sub-system, such as the Cori supercomputer [6] at the National Energy Research Scientific Computing Center (NERSC) and Trinity [7] at Los Alamos National Laboratory (LANL). A number of large-scale data-intensive scientific applications have harnessed significant performance improvements from these burst buffer systems [8].

These burst buffer systems are designed as shared resources for hundreds or thousands of users and applications. Previous research efforts have mainly focused on studying how to leverage the burst buffer to improve application performance by reducing I/O time [9–11] and by overlapping the computation and I/O phases of applications [12,13]. However, resource management for the burst buffer is still understudied, and the existing scheduling strategies consider only the capacity requests of users, which may cause I/O contention among many concurrent I/O-intensive applications.

In this paper, we propose a contention-aware resource scheduling (CARS) design to reduce the I/O contention among multiple concurrent I/O-intensive jobs and improve the performance of the burst buffer system. Following a study of different resource scheduling policies for shared burst buffer systems and of I/O modeling, CARS is designed as a new resource allocation and scheduling strategy for the burst buffer. CARS characterizes the I/O activities of applications running on the burst buffer system and allocates burst buffer resources to new jobs based on an analysis of the I/O load on the system.
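To make the allocation idea concrete, the following is a minimal sketch of a contention-aware placement step, assuming per-node I/O load statistics are available to the scheduler. It illustrates the strategy described above rather than the authors' implementation; the names (`BBNode`, `contention_aware_allocate`, `io_load`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BBNode:
    """One burst buffer node: remaining capacity plus its observed I/O load."""
    node_id: int
    free_chunks: int      # remaining capacity, counted in allocation chunks
    io_load: float = 0.0  # e.g., aggregate MB/s of the jobs currently striped here

def contention_aware_allocate(nodes, chunks_needed):
    """Place one chunk per node on the least-loaded nodes with free capacity.

    Returns the chosen node ids, or None if the job should wait for resources.
    """
    candidates = sorted(
        (n for n in nodes if n.free_chunks > 0),
        key=lambda n: n.io_load,  # least-contended nodes first
    )
    if len(candidates) < chunks_needed:
        return None
    chosen = candidates[:chunks_needed]
    for n in chosen:
        n.free_chunks -= 1
    return [n.node_id for n in chosen]
```

Under such a policy, a job requesting four chunks lands on the four least-loaded nodes rather than on the next four nodes in round-robin order, which is what distinguishes it from the capacity- and bandwidth-only policies reviewed next.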
Compared to the existing allocation strategies, which consider only the capacity factor, CARS can significantly mitigate the I/O contention between data-intensive jobs, largely speeding up I/O while improving burst buffer system utilization.

The rest of the paper is organized as follows. Section 2 discusses the background and motivation of this research. Sections 3 and 4 present the design of the contention-aware resource scheduling (CARS) and the I/O modeling, respectively. The experimental and analytic results are presented in Section 5. Section 6 discusses related work and compares this study with it, and Section 7 concludes this study and discusses future work.

2. Background and motivation

In this section, we first introduce the background on the burst buffer architecture and the existing resource allocation policies of shared burst buffer systems. We then illustrate the motivation of this research and discuss the challenges in our work.

2.1. Overview of burst buffer

Fig. 1 illustrates an architecture overview of a shared burst buffer system. The burst buffer resides on dedicated nodes that are usually deployed with SSDs or storage-class memory (SCM) as a high-speed storage tier. As a shared resource, the burst buffer nodes are available for all compute nodes to access directly. When applications on compute nodes issue bursty I/O operations, the burst buffer nodes quickly absorb the I/O requests in their local SSDs/SCM and let the applications continue to the next computing phase as soon as possible. Afterward, the burst buffer nodes transfer the data stored in their local SSDs/SCM to a parallel file system (PFS) asynchronously. Existing large-scale HPC systems such as Cori [6] and Trinity [7] have deployed this shared burst buffer architecture in production. An alternative burst buffer architecture places an NVMe device in each compute node as high-speed local storage, as implemented on Summit [14]. Such an architecture can effectively reduce I/O time by allowing checkpoint files to be written locally and then flushed to the external PFS asynchronously with specific hardware support [15]. This design is particularly suitable for applications that perform N-N (i.e., file-per-process) I/O operations. In this study, we focus on the shared burst buffer architecture and the optimization of data movement between compute nodes and the shared burst buffer nodes.

2.2. Resource management for burst buffer

The burst buffer serves the entire compute system as a shared storage layer. Thus, it is crucial for the resource scheduler to manage the burst buffer nodes efficiently so that they can be accessed by different users and applications. In the burst buffer systems of the Cori and Trinity supercomputers, the DataWarp [16] software, integrated with the SLURM [17] workload manager, is responsible for resource management. When a user submits a job and indicates the burst buffer requirements in the SLURM batch script, the resource scheduler allocates burst buffer nodes to meet the user's capacity need. The capacity request is split into multiple chunks, where one chunk is the minimum allocation size on one burst buffer node. For example, there are two choices of chunk size on the Cori system, 20 GB and 80 GB.

Two allocation policies exist for assigning burst buffer nodes to jobs on Cori, and users can specify the optimization_strategy option manually in the SLURM batch script to choose between them [16]. The first is the bandwidth policy, which instructs the resource scheduler to select as many burst buffer nodes as possible in a round-robin fashion to maximize the bandwidth for a job. In this case, the user's data will be striped across several burst buffer nodes, and one node contains one chunk of the requested capacity size (i.e., the number of allocated burst buffer nodes equals the number of chunks of the request).
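The chunk arithmetic and the round-robin striping of the bandwidth policy can be sketched as follows. This is a hypothetical illustration of the behavior described above, not DataWarp code; the function names and the node-list interface are assumptions.

```python
import math
from itertools import cycle

def chunks_for_request(capacity_gb, chunk_gb=80):
    """Round a capacity request up to whole chunks (Cori offers 20 GB or 80 GB)."""
    return math.ceil(capacity_gb / chunk_gb)

def bandwidth_policy(node_ids, capacity_gb, chunk_gb=80):
    """Round-robin striping: one chunk per node, wrapping if chunks > nodes."""
    rr = cycle(node_ids)
    return [next(rr) for _ in range(chunks_for_request(capacity_gb, chunk_gb))]

# A 1000 GB request with 80 GB chunks needs ceil(1000/80) = 13 chunks; on an
# 8-node burst buffer the stripe wraps around the node list:
print(bandwidth_policy(list(range(8)), 1000))
# [0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4]
```

Because each capacity request implicitly pins many nodes under this policy, concurrent bandwidth-hungry jobs end up sharing, and contending for, the same burst buffer nodes, which is the contention problem CARS targets.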
