Applying Machine Learning to Understand Write Performance of Large-scale Parallel Filesystems

Presented by Bing Xie

Bing Xie, Zilong Tan, Philip Carns, Jeff Chase, Kevin Harms, Jay Lofstead, Sarp Oral, Sudharshan S. Vazhkudai, Feiyi Wang


• Problem
  – Understand the write performance of HPC applications running on large-scale systems
• Contribution
  – Built accurate ML models for predicting the I/O write performance
  – Interpreted multi-stage write behaviors of large-scale I/O subsystems
• Impact
  – Demonstrated that ML can be applied to predict the write performance of large-scale I/O subsystems
  – Delivered a generic solution applicable to various large-scale I/O subsystems and technologies

Motivation: Reduce the Write Cost

• Configure write burst size/rate tradeoffs
• Guide I/O middleware (e.g., ROMIO) to adapt write patterns
• Inform system job schedulers to yield tighter/better estimates of I/O cost and application runtime

Related Work and Our Solution

• I/O performance studies
  – Profiling supercomputer I/O subsystems under production loads
  – Darshan toolkit
  – Statistical benchmarking
• I/O middleware systems
  – ROMIO, ADIOS
• ML in I/O performance prediction
  – Tune I/O parameters at the application level
  – Learn I/O patterns from job logs and system monitoring data
• Our Solution
  – First ML work to predict write performance of large-scale parallel filesystems based on application write patterns, system architecture, and configurations

Typical Scientific Applications

• HPC codes compute for a long time at large scales
• HPC codes produce write bursts that stall application executions and impact application runtime (a minimal sketch of this pattern follows the list)
• A generic example: XGC
  – Evaluates physical equations iteratively over space: compute cost is predictable
  – Produces 4 types of bursts with different write frequencies and burst sizes:
    • state snapshots: 500MB to 1.2GB
    • diagnostic analysis bursts: 1MB to 400MB
  – Bursts are stored as independent files
  – Write stalls comprise 7-20% of run time
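
A minimal sketch of the compute/write-burst cycle described above, in Python. It is illustrative only, not the XGC code; the file names are hypothetical and the burst sizes are scaled down from the real 500MB-1.2GB snapshots and 1MB-400MB diagnostics so the example runs quickly.

```python
import os
import time

SNAPSHOT_BYTES = 8 * 1024**2   # scaled down; real snapshots are 500 MB - 1.2 GB
DIAG_BYTES = 1 * 1024**2       # scaled down; real diagnostics are 1 MB - 400 MB

def compute_step():
    time.sleep(0.01)           # stand-in for one iteration of the physics solver

def write_burst(path, nbytes, chunk=1024**2):
    """Write one burst as an independent file; the application stalls here."""
    with open(path, "wb") as f:
        for _ in range(nbytes // chunk):
            f.write(b"\0" * chunk)
        f.flush()
        os.fsync(f.fileno())   # the stall includes pushing data to the filesystem

for step in range(1, 101):
    compute_step()
    if step % 50 == 0:         # infrequent, large state snapshots
        write_burst(f"snapshot_{step:04d}.bin", SNAPSHOT_BYTES)
    elif step % 10 == 0:       # more frequent, smaller diagnostic bursts
        write_burst(f"diag_{step:04d}.bin", DIAG_BYTES)
```

The fraction of wall time spent inside `write_burst` corresponds to the 7-20% write-stall share quoted above.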

Target I/O Systems

• Titan and Spider 2 at OLCF/ORNL
  – Cray XK7
  – Lustre filesystem
• Cetus and Mira-FS1 at ALCF/ANL
  – IBM Blue Gene/Q
  – GPFS filesystem

[Figure: the supercomputer connects over a SAN to the storage system; the write path runs Client → Server → Target, with a separate Metadata Server.]

Challenges

• High performance variability
• Limited filesystem visibility for end-users

High Performance Variability

[Figure: CDFs of write performance variation on Titan and Cetus, measured with IOR benchmarks. The x-axis is the relative spread (max/min) of the write bandwidths per experiment, spanning 5 to 30; the curves show that write performance on both systems is highly variable.]

Our Approach

• Performance is highly variable, but reverts to the mean over time
  – Model the mean performance
  – Effectively addresses repeated I/O writes and their aggregate impact
• Limited visibility for end users
  – Extract features from write patterns and from system architecture and configurations
• Interference
  – Address noise as features
• ML solution
  – Convergence-guaranteed sampling method (see the sketch after this list)
  – Lasso models
  – Systematic ML methodology
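
A minimal sketch of convergence-guaranteed sampling, assuming a simple stopping rule (the slide does not spell one out): repeat a write benchmark until the running mean of the measured bandwidth stabilizes. The function names and the tolerance are hypothetical.

```python
import statistics

def sample_until_converged(run_benchmark, tol=0.01, min_runs=5, max_runs=100):
    """run_benchmark() returns one write-bandwidth measurement (MB/s).
    Stop when the running mean changes by less than tol relative to the
    previous mean, so the returned value estimates the mean performance."""
    samples, prev_mean = [], None
    for _ in range(max_runs):
        samples.append(run_benchmark())
        mean = statistics.mean(samples)
        if prev_mean and len(samples) >= min_runs and abs(mean - prev_mean) / prev_mean < tol:
            return mean, samples              # converged
        prev_mean = mean
    return statistics.mean(samples), samples  # give up after max_runs

# Usage with a stand-in benchmark (in practice, one IOR run per call):
import random
mean_bw, runs = sample_until_converged(lambda: random.gauss(1000, 200))
print(f"converged mean: {mean_bw:.1f} MB/s after {len(runs)} runs")
```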

End-to-end I/O Write Path

[Figure: the end-to-end write path from Titan through the SAN to Spider 2 (Atlas 1 and 2): Client → Server → Target, with a Metadata Server on the side; each Target is a RAID array.]

[Figure: striping example. Burst 0 is split into stripe_size chunks b0-b7; with Stripe_Count=4 and Starting_OST=23, the chunks are striped round-robin across Server23-Server26 and stored on Target23-Target26.]
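
A minimal sketch of the round-robin striping in the example above. It is illustrative, not Lustre's actual implementation; the function name is hypothetical.

```python
def stripe_placement(burst_bytes, stripe_size, stripe_count, starting_ost):
    """Map each stripe_size chunk of a burst to an OST index, starting at
    starting_ost and cycling over stripe_count OSTs."""
    n_chunks = -(-burst_bytes // stripe_size)  # ceiling division
    return [(i, starting_ost + i % stripe_count) for i in range(n_chunks)]

# The slide's example: 8 chunks b0-b7, Stripe_Count=4, Starting_OST=23.
for chunk, ost in stripe_placement(8 * 2**20, 2**20, 4, 23):
    print(f"b{chunk} -> Server{ost} -> Target{ost}")
# b0-b3 land on Server23-Server26; b4-b7 wrap around to the same four servers.
```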

Extract Features

• Insight: infer the end-to-end burst absorption time from performance-related parameters (write load, load skew, resources in use) at each stage
• Collectable performance-related parameters on Titan and Cetus
• Predictable performance-related parameters on Spider 2 and Mira-FS1
• Positive and inverse forms of the performance-related parameters on separate stages, adjacent stages, and noise (see the sketch after this list)
• Titan/Spider 2: 41 features; Cetus/Mira-FS1: 30 features
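
A minimal sketch of the feature construction, with hypothetical parameter names (the slide lists categories, not exact formulas): each performance-related parameter contributes a positive form and an inverse form to the feature vector.

```python
def build_features(params):
    """params: positive performance-related parameters for one burst,
    e.g. per-stage write load, load skew, and resources in use."""
    feats = {}
    for name, value in params.items():
        feats[name] = value                 # positive form
        feats[f"inv_{name}"] = 1.0 / value  # inverse form
    return feats

# Hypothetical parameters for a single write burst:
print(build_features({"write_load_MB": 512.0, "load_skew": 1.8, "osts_in_use": 4.0}))
```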

Systematic Machine Learning Approach

1. Build the candidate features and candidate training sets.
2. For each training set, fit a Lasso model:
   – Train the model with 10-fold cross validation.
   – Evaluate the model by Mean Square Error (MSE).
3. Search the 255 Lasso models (one per training set) for the one with minimum MSE: the BEST MODEL (see the sketch after this list).
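
A minimal sketch of this search using scikit-learn. It assumes the 255 candidate training sets are the non-empty subsets of 8 base sets (the slide only says "255 Lasso models, each for 1 training set", so the subset construction and the held-out evaluation split are assumptions).

```python
from itertools import combinations
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error

def search_best_model(base_sets, X_val, y_val):
    """base_sets: list of (X, y) pairs; returns the Lasso model with the
    minimum MSE over all 2**len(base_sets) - 1 training-set combinations."""
    best_model, best_mse = None, float("inf")
    for r in range(1, len(base_sets) + 1):
        for subset in combinations(base_sets, r):
            X = np.vstack([X_i for X_i, _ in subset])
            y = np.concatenate([y_i for _, y_i in subset])
            model = LassoCV(cv=10).fit(X, y)  # 10-fold CV picks the penalty
            mse = mean_squared_error(y_val, model.predict(X_val))
            if mse < best_mse:
                best_model, best_mse = model, mse
    return best_model, best_mse
```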

Experiments

• Train models on a small-scale data set
  – 3,465 (Titan) and 4,715 (Cetus) converged samples collected with multiple IOR benchmarks at the scale of 1-128 compute nodes
• Evaluate models at medium scale
  – 668 (Titan) and 874 (Cetus) converged samples produced by 200-512 compute nodes
• Evaluation criteria
  – Accuracy of the best model (reported as relative true error; a sketch of the metric follows this list)
  – Effectiveness of features
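
The result plots below report per-sample relative true error. The slides label only the axis, so the definition here is an assumption, using the conventional form: (predicted − measured) / measured, where 0 is a perfect prediction and the sign indicates over- or under-prediction.

```python
import numpy as np

def relative_true_error(y_pred, y_true):
    """Signed relative error of each prediction against the measurement."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return (y_pred - y_true) / y_true

# Hypothetical burst write times t in seconds (measured vs. predicted):
print(relative_true_error([5.2, 14.0, 48.0], [5.0, 14.34, 48.4]))
# ≈ [ 0.040, -0.024, -0.008 ]
```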

Four Reported Models

• Lasso_best
  – The Lasso model with minimum Mean Square Error among the 255 Lasso models across the training-set candidates
• Lasso_base
  – The Lasso model trained on the write scales of 1-128 compute nodes
• Linear_best
  – The linear model with minimum Mean Square Error among the 255 linear models across the training-set candidates
• Linear_base
  – The linear model trained on the write scales of 1-128 compute nodes

Results on Titan and Cetus

[Figure: relative true error per test sample for Lasso_best, Lasso_base, Linear_best, and Linear_base. Top row: Titan/Spider 2 (Lustre); bottom row: Cetus/Mira-FS1 (GPFS). Left column: test sets with 200 and 256 nodes; right column: test sets with 400 and 512 nodes. Samples are sorted by write time t (seconds) on the x-axis; the error axis spans -1 to 1.]

Results on Titan and Cetus: Lasso_best is highly accurate and the best model

[Figure: the same four panels as above, highlighting that Lasso_best stays closest to zero error on both systems.]

Conclusions

• Problem
  – Understand the I/O write performance of large-scale supercomputers
• Our Solution
  – Systematic ML approach with Lasso
  – Model the mean performance; extract features from application write patterns, system architecture, and configurations; sample with convergence guarantees
• Findings
  – Lasso_best is the most accurate model for both Titan and Cetus
  – The most effective features are load skew in supercomputers and resources in use on the system side
• Applicability
  – Lasso models and features: Lustre and GPFS deployments
  – Systematic modeling method: generic supercomputer I/O subsystems

Acknowledgement
