
Serving Data to the Lunatic Fringe: The Evolution of HPC Storage

JOHN BENT, BRAD SETTLEMYER, AND GARY GRIDER

John Bent. Making supercomputers superer for over a decade. At Seagate Government Solutions. [email protected]

Brad Settlemyer is a Storage Systems Researcher and Systems Programmer specializing in high performance computing. He works as a research scientist in Los Alamos National Laboratory's Systems Integration group. He has published papers on emerging storage systems, long distance data movement, network modeling, and storage system algorithms. [email protected]

As Division Leader of the High Performance Computing (HPC) Division at Los Alamos National Laboratory, Gary Grider is responsible for all aspects of high performance computing technologies and deployment at Los Alamos. Gary is also the US Department of Energy Exascale Storage, I/O, and Data Management National Co-Coordinator. Gary has 30 active patents/applications in the data storage area and has been working in HPC and HPC-related storage since 1984. [email protected]

Before the advent of Big Data, the largest storage systems in the world were found almost exclusively within high performance computing centers such as those found at US Department of Energy national laboratories. However, these systems are now dwarfed by large datacenters such as those run by Google and Amazon. Although HPC storage systems are no longer the largest in terms of total capacity, they do exhibit the largest degree of concurrent write access to shared data. In this article, we will explain why HPC applications must necessarily exhibit this degree of concurrency and the unique HPC storage architectures required to support them.

Computing for Scientific Discovery
High performance computing (HPC) has radically altered how the scientific method is used to aid in scientific discovery and has enabled the development of scientific theories that were previously unimaginable. Difficult to observe phenomena, such as galaxy collisions and quantum particle interactions, are now routinely simulated on the world's largest supercomputers, and large-scale scientific simulation has dramatically decreased the time between hypothesis and experimental analysis. As scientists increasingly use simulation for discovery in emerging fields such as climatology and nuclear fusion, demand is driving the growth of HPC platforms capable of supporting ever-increasing levels of fidelity and accuracy.

Extreme-scale HPC platforms (i.e., supercomputers), such as Oak Ridge National Laboratory's Titan or Los Alamos National Laboratory's Trinity, incorporate tens of thousands of processors, memory modules, and storage devices into a single system to better support simulation science. Researchers at universities and national laboratories are continuously striving to develop algorithms to fully utilize these increasingly powerful and complex supercomputers.

Figure 1: Adaptive Mesh Refinement. An example of adaptive mesh refinement for a two-dimensional grid in which the most turbulent areas of the mesh have been highly refined (from https://convergecfd.com/applications/gas-turbines/).

A simulation is typically performed by decomposing a physical region of interest into a collection of cells called a mesh and then calculating how the properties of the elements within each cell change over time. The mesh cells are distributed across a set of processes running across the many compute nodes within the supercomputer. Contiguous regions of cells are assigned to each process, and processes must frequently communicate to exchange boundary conditions between neighboring cells split across processes. Although replicated cells, called ghost cells, are sometimes used to reduce the frequency of communication, processes still typically exchange messages dozens of times per second.

Additional communication occurs during a load-leveling phase. Complex areas of the mesh that contain a large number of different types of elements are more difficult to simulate with high fidelity. Accordingly, simulations will often subdivide these areas into smaller cells as shown in Figure 1. This process of adaptive mesh refinement causes work imbalance as some processes within the parallel application will suddenly be responsible for a larger number of cells than their siblings. Therefore the simulation will rebalance the assignment of cells to processes following these refinements.

Due to the frequency of communication, the processes must run in parallel and avoid performance deviations to minimize the time spent waiting on messages. This method, tightly coupled bulk synchronous computation, is a primary differentiator between HPC and Big Data analytics.
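To make this communication pattern concrete, the following sketch shows a minimal one-dimensional ghost cell exchange written in C with MPI. It is an illustration of the bulk synchronous pattern described above rather than code from any production simulation: the cell count, the trivial stencil update, and the choice of MPI_Sendrecv plus a barrier are simplifications made here for brevity.

/* Illustrative sketch: a 1-D bulk synchronous ghost cell exchange.
 * Each rank owns N interior cells plus one ghost cell on each side and,
 * every timestep, swaps boundary values with its neighbors before
 * updating its own cells in place. */
#include <mpi.h>
#include <stdlib.h>

#define N     1024   /* interior cells per rank (arbitrary for the example) */
#define STEPS 100    /* number of timesteps                                 */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* cells[0] and cells[N+1] are ghost cells replicated from neighbors. */
    double *cells = calloc(N + 2, sizeof(double));
    int left  = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int right = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    for (int step = 0; step < STEPS; step++) {
        /* Exchange boundary conditions with both neighbors. */
        MPI_Sendrecv(&cells[1], 1, MPI_DOUBLE, left,  0,
                     &cells[N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&cells[N], 1, MPI_DOUBLE, right, 0,
                     &cells[0], 1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Toy stencil update: the process's memory is overwritten every step. */
        for (int i = 1; i <= N; i++)
            cells[i] = 0.5 * (cells[i - 1] + cells[i + 1]);

        /* Bulk synchrony: no rank starts the next step until all have finished. */
        MPI_Barrier(MPI_COMM_WORLD);
    }

    free(cells);
    MPI_Finalize();
    return 0;
}

Because every rank must complete the same exchange before any rank can proceed, the slowest process sets the pace for the entire machine, which is why performance deviations are so costly at this scale.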
Another primary differentiator is that the memory of each process is constantly overwritten as the properties of the mesh are updated. The total amount of this distributed memory has grown rapidly. For example, an astrophysics simulation on LANL's Trinity system may require up to 1.5 PB of RAM to represent regions of space with sufficient detail. Memory is one of the most precious resources in a supercomputer; most large-scale simulations expand to use all available memory. The large memory requirements, coupled with the long time required to simulate complex physical interactions, lead to a problem for the users of large-scale computing systems.

The Need for Checkpointing
How can one ensure the successful completion of a simulation that takes days of calculation using tens of thousands of tightly coupled computers with petabytes of constantly overwritten memory? The answer to that question has been checkpoint-restart. Storing the program state into a reliable storage system allows a failed simulation to restart from the most recently stored state.

Seemingly a trivial problem in the abstract, checkpoint-restart in practice is highly challenging because the actual writes in a checkpoint are extremely chaotic. One, the amount of data stored by each process is unlikely to match any meaningful block size in the storage system and is thus unaligned. Two, the writes are to shared data sets and thus incur either metadata or data bottlenecks [3, 8]. Three, the writes are bursty in that they all occur concurrently during the application checkpoint phase, following a long period of storage idleness during the application compute phase. Four, the required bandwidth is very high; supercomputer designers face pressure to ensure 90% efficiency, meaning that checkpointing massive amounts of memory must complete quickly enough that no more than 10% of supercomputer lifetime is consumed by checkpoint-restart.
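These four properties fall directly out of the classic N-to-1 "shared file" checkpoint pattern, sketched below in C with MPI-IO. The file name, the per-rank sizes, and the use of independent rather than collective writes are hypothetical choices made for brevity, not a description of any particular application or file system.

/* Illustrative sketch: an N-to-1 shared-file checkpoint.  Every rank
 * dumps its in-memory state into one shared file at an offset determined
 * by the sizes of all lower-ranked processes. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Per-rank state size: uneven after refinement and rebalancing, so it
     * almost never matches a storage block size (chaos point one). */
    long mybytes = 1000000L + 12345L * (rank % 7);
    char *state  = malloc(mybytes);           /* stand-in for mesh state */
    memset(state, rank & 0xFF, mybytes);

    /* Prefix sum of sizes gives each rank its offset in the shared file,
     * so every rank writes into the same data set (chaos point two). */
    long offset = 0;
    MPI_Exscan(&mybytes, &offset, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) offset = 0;   /* MPI_Exscan leaves rank 0's result undefined */

    /* All ranks open the same file and write at once after a long compute
     * phase: the burst (chaos points three and four). */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "checkpoint.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at(fh, (MPI_Offset)offset, state, (int)mybytes,
                      MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(state);
    MPI_Finalize();
    return 0;
}

Writing one file per process instead trades the data contention inside the shared file for metadata contention from creating many files simultaneously, which is why the second point above speaks of either metadata or data bottlenecks. Collective MPI-IO writes, typically implemented with the two-phase I/O technique mentioned next, can reorganize this pattern when they are applicable.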
Although many techniques have been developed to reduce this chaos [4], they are typically not available in practice. Incremental checkpointing reduces the size of the checkpoint but does not help when the memories are constantly overwritten. Uncoordinated checkpointing reduces burstiness but is not amenable to bulk synchronous computation. Two-phase I/O improves performance by reorganizing chaotic writes into larger aligned writes. Checkpointing into neighbor memory improves performance by eliminating media latencies. However, neither of the latter two is possible when all available memory is used by the application.

Thus, HPC storage workloads, lacking common ground with read-intensive cloud workloads or IOPS-intensive enterprise workloads, have led to the creation of parallel file systems, such as BeeGFS, Ceph, GPFS, Lustre, OrangeFS, and PanFS, designed to handle bursty and chaotic checkpointing.

Storage for Scientific Discovery
From the teraflop to the petaflop era, the basic supercomputer architecture was remarkably consistent, and parallel file systems were the primary building block of its storage architecture. Successive supercomputers were largely copied from the same blueprint because the speed of processors and the capacities of memory and disk all grew proportionally to each other following Moore's Law. Horizontal arrows in Table 1 show how these basic architectural elements scaled proportionally. However, recent inflection points, shown with the vertical arrows, require a redesign of both the supercomputer and its storage architecture.

FLOPS / RAM                        →
RAM / core                         ↓
MTTF per component                 →
MTTI per application               ↓
Impact of performance deviations   ↑
Drive spindles for capacity        →
Drive spindles for bandwidth       ↑
Tape cassettes for capacity        →
Tape drives for bandwidth          ↑
Storage clients / servers          ↑

Table 1: Supercomputer Trends Affecting Storage. Horizontal arrows do not necessarily indicate no growth in absolute numbers but rather that the trend follows the relative overall growth in the machine.

Figure 2: From 2 to 4 and back again. Static for over a decade, the HPC storage stack has now entered a period of rapid change. (a) 2002–2015 (b) 2015–2016 (c) 2016–2020 (d) 2020–

Era: 2002–2015. The storage architecture for the teraflop and petaflop eras is shown in Figure 2a. Large tightly coupled

During this era, total transistor counts continued to double approximately every 24 months. However, in the second half of this era, they did so horizontally by adding more compute nodes with larger core counts as opposed to merely increasing transistors within cores. This has had important implications which affect the storage