Big Data Meets HPC Log Analytics: Scalable Approach to Understanding Systems at Extreme Scale

Byung H. Park∗, Saurabh Hukerikar∗, Ryan Adamson†, and Christian Engelmann∗
∗Computer Science and Mathematics Division
†National Center for Computational Sciences
Oak Ridge National Laboratory, Oak Ridge, TN, USA
Email: {parkbh, hukerikarsr, adamsonrm, [email protected]
Abstract—Today’s high-performance computing (HPC) systems are heavily instrumented, generating logs containing information about abnormal events, such as critical conditions, faults, errors and failures, system resource utilization, and the resource usage of user applications. These logs, once fully analyzed and correlated, can produce detailed information about system health and the root causes of failures, and can characterize an application’s interactions with the system, providing valuable insights to domain scientists and system administrators. However, processing HPC logs requires a deep understanding of hardware and software components at multiple layers of the system stack. Moreover, most log data is unstructured and voluminous, making it more difficult for system users and administrators to manually inspect the data.

This system activity and event information is logged for monitoring and analysis. Large-scale HPC installations produce various types of log data. For example, job logs maintain a history of application runs, the allocated resources and their sizes, user information, and exit statuses, i.e., successful vs. failed. Reliability, availability and serviceability (RAS) system logs derive data from various hardware and software sensors, such as temperature sensors, memory errors, and processor utilization. Network systems collect data about network link bandwidth, congestion, and routing and link faults. Input/output (I/O) and storage systems produce logs that record perfor-