Memory Hierarchies for Future HPC Architectures
MEMORY HIERARCHIES FOR FUTURE HPC ARCHITECTURES

Víctor García Flores

A dissertation submitted in fulfillment of the requirements for the degree of
Doctor of Philosophy / Doctor per la UPC
to the Department of Computer Architecture
Universitat Politècnica de Catalunya

Advisor: Nacho Navarro
Co-advisors: Antonio J. Peña, Eduard Ayguadé

Barcelona, September 2017

Abstract

Efficiently managing the memory subsystem of modern multi/manycore architectures is increasingly becoming a challenge as systems grow in complexity and heterogeneity. From multicore architectures with several levels of on-die cache to heterogeneous systems combining graphics processing units (GPUs) and traditional general-purpose processors, the once simple Von Neumann machines have transformed into complex systems with a high reliance on an efficient memory subsystem. In the field of high performance computing (HPC) in particular, where massively parallel architectures are used and input sets of several terabytes are common, careful management of the memory hierarchy is crucial to exploit the full computing power of these systems.

The goal of this thesis is to provide computer architects with valuable information to guide the design of future systems, and in particular of those more widely used in the field of HPC, i.e., symmetric multicore processors (SMPs) and GPUs. With that aim, we present an analysis of some of the inefficiencies and shortcomings of current memory management techniques and propose two novel schemes leveraging the opportunities that arise from the use of new and emerging programming models and computing paradigms.

The first contribution of this thesis is a block prefetching mechanism for task-based programming models. Using a task-based programming model simplifies parallel programming and allows for better resource utilization in the large-scale supercomputers used in the field of HPC, while enabling sophisticated memory management techniques. The proposed scheme relies on a memory-aware runtime system to guide prefetching while avoiding the main drawbacks of traditional prefetching mechanisms, i.e., cache pollution, thrashing and lack of timeliness. It leverages the information provided by the user about tasks' input and output data to prefetch contiguous blocks of memory that are certain to be useful. The proposed scheme targets SMPs with large cache hierarchies and uses heuristics to dynamically decide the best cache level to prefetch into without evicting useful data.

The focus of this thesis then turns to heterogeneous architectures combining GPUs and traditional multicore processors. The current trend towards tighter coupling of GPU and CPU enables new collaborative computations that tax the memory subsystem in a different manner than previous heterogeneous computations did, and requires careful analysis to understand the trade-offs that are to be expected when designing future memory organizations. The second contribution is an in-depth analysis of the impact of sharing the last-level cache between GPU and CPU cores on a system where the GPU is integrated on the same die as the CPU. The analysis focuses on the effect that a shared cache can have on collaborative computations where GPU and CPU threads concurrently work on a problem and share data at fine granularities. The results presented here show that sharing the last-level cache is largely beneficial as it allows for better resource utilization.
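To make the notion of a collaborative heterogeneous computation concrete, the sketch below is a minimal CUDA program, offered as an illustration rather than one of the workloads evaluated in this thesis, in which the CPU and the GPU operate on the same managed allocation, each updating a different region of a shared array. The kernel name, the half/half work split, and the use of cudaMallocManaged are illustrative assumptions; the computations studied here share data at finer granularities than this simple partitioning.

#include <cstdio>
#include <cuda_runtime.h>

// GPU side of the collaborative computation: scale the upper half of the array.
__global__ void scale_upper_half(float *data, int n, float factor) {
    int i = n / 2 + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // A managed (unified) allocation is visible to both CPU and GPU,
    // so both can operate on it without explicit copies.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i)
        data[i] = 1.0f;

    // Launch the GPU on the upper half of the array.
    const int threads = 256;
    const int blocks = (n / 2 + threads - 1) / threads;
    scale_upper_half<<<blocks, threads>>>(data, n, 2.0f);

    // CPU access to managed memory while a kernel is running is only legal on
    // devices that support concurrent managed access; otherwise wait first.
    int concurrent = 0;
    cudaDeviceGetAttribute(&concurrent, cudaDevAttrConcurrentManagedAccess, 0);
    if (!concurrent)
        cudaDeviceSynchronize();

    // The CPU works on the lower half, sharing the allocation with the GPU.
    for (int i = 0; i < n / 2; ++i)
        data[i] *= 2.0f;

    // Ensure the GPU has finished before reading its half of the results.
    cudaDeviceSynchronize();
    printf("data[0] = %.1f, data[n-1] = %.1f\n", data[0], data[n - 1]);

    cudaFree(data);
    return 0;
}

On an integrated architecture, data touched by both sides of such a computation can be exchanged through a shared last-level cache rather than through off-chip memory or a PCI-Express transfer, which is the kind of effect the shared-cache analysis in this thesis examines.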
In addition, the experimental evaluation shows that collaborative computations benefit significantly from the faster CPU-GPU communication and higher cache hit rates that a shared cache level provides.

The final contribution of this thesis analyzes the inefficiencies and drawbacks of demand paging as currently implemented in discrete GPUs by NVIDIA. It then proposes a novel memory organization and dynamic migration scheme that allows for efficient data sharing between GPU and CPU, especially when executing collaborative computations where data is migrated back and forth between the two separate memories. This scheme migrates data at cache-line granularity, transparently to the user and the operating system, avoiding the false sharing and unnecessary data transfers that occur with the current demand paging mechanism. The results show that the proposed scheme is able to outperform the baseline system by reducing the migration latency of data that is copied multiple times between the two memories. In addition, an analysis of different interconnect latencies shows that fine-grained data sharing between GPU and CPU is feasible as long as future interconnect technologies achieve four to five times lower round-trip times than PCI-Express 3.0.

Contents

Abstract
Contents
List of Figures
List of Tables
Glossary

1 Introduction
  1.1 Thesis Objectives and Contributions
    1.1.1 Adaptive Runtime-Assisted Block Prefetching
    1.1.2 Last-Level Cache Sharing on Integrated Heterogeneous Architectures
    1.1.3 Efficient Data Sharing on Heterogeneous Architectures
  1.2 Thesis Organization

2 State of the Art
  2.1 Task-Based Programming Models
  2.2 Prefetching
    2.2.1 Traditional Prefetching
    2.2.2 Block Prefetching
  2.3 Heterogeneous GPU-CPU Architectures
    2.3.1 Resource Sharing on Integrated Architectures
    2.3.2 Heterogeneous Computing
    2.3.3 Heterogeneous Memory Management
      2.3.3.1 Memory Management on Heterogeneous Architectures
      2.3.3.2 Management of Hybrid Memory Designs
    2.3.4 Memory Consistency and Cache Coherence

3 Methodology
  3.1 Simulation Infrastructure
  3.2 Workloads
    3.2.1 Adaptive Runtime-Assisted Block Prefetching
    3.2.2 Heterogeneous Architectures
  3.3 Metrics

4 Adaptive Runtime-Assisted Block Prefetching
  4.1 Motivation
  4.2 Target Architecture
  4.3 Block Prefetching
  4.4 Multicore Data Transfer Engine
  4.5 Runtime-Assisted Prefetching
    4.5.1 Adaptive Destination
    4.5.2 Coordinating Hardware and Software Prefetch with Demand Loads
  4.6 Methodology
    4.6.1 Hardware Prefetching
    4.6.2 Compiler-Based Software Prefetching
  4.7 Experimental Evaluation
    4.7.1 Average Memory Access Time
    4.7.2 Cache Hit Rates
    4.7.3 Execution Time
    4.7.4 Energy Consumption
  4.8 Summary and Concluding Remarks

5 Last-Level Cache Sharing on Integrated Heterogeneous Architectures
  5.1 Motivation
  5.2 Methodology
  5.3 Experimental Evaluation
    5.3.1 Rodinia
    5.3.2 Collaborative Heterogeneous Benchmarks
    5.3.3 Energy
  5.4 Summary and Concluding Remarks

6 Efficient Data Sharing on Heterogeneous Architectures
  6.1 Motivation
  6.2 Demand Paging on GPUs
    6.2.1 Page Faulting on Known-Pages
    6.2.2 Unused Data and False Sharing
  6.3 Efficient Data Sharing in Heterogeneous Architectures
    6.3.1 Heterogeneous Memory Organization
    6.3.2 Avoiding Host Intervention
    6.3.3 Efficient Fine-Grained Migration
  6.4 Methodology
  6.5 Experimental Evaluation
    6.5.1 Migration Granularity
    6.5.2 Impact of Block Sizes in Data Migrations
    6.5.3 Link Latency Analysis
  6.6 Summary and Concluding Remarks

7 Conclusion and Future Work
  7.1 Conclusions
  7.2 Future Work
    7.2.1 Runtime-Assisted Prefetching
    7.2.2 Resource Sharing on Integrated Systems
    7.2.3 Efficient Data Sharing on Heterogeneous Architectures

A Publications
  A.1 Thesis Related Publications
  A.2 Other Publications

Bibliography

List of Figures

1.1 Processor-memory performance gap. From "Computer Architecture: A Quantitative Approach" by John L. Hennessy and David A. Patterson.
2.1 Example code of a Cholesky decomposition in the OmpSs programming model.
2.2 High-level overview of a heterogeneous system composed of a multicore processor and a discrete GPU connected to the host through PCI-Express.
2.3 High-level overview of two integrated heterogeneous architectures with different cache hierarchy designs.
3.1 Simulation workflow for OmpSs applications and TaskSim.
3.2 Simulation architecture of the gem5-gpu simulator.
4.1