Efficiently and Transparently Maintaining High SIMD Occupancy in the Presence of Wavefront Irregularity
Stephen V. Cole
Washington University in St. Louis
Washington University Open Scholarship
McKelvey School of Engineering
Engineering and Applied Science Theses & Dissertations

Summer 8-15-2017

Efficiently and Transparently Maintaining High SIMD Occupancy in the Presence of Wavefront Irregularity
Stephen V. Cole, Washington University in St. Louis

Follow this and additional works at: https://openscholarship.wustl.edu/eng_etds
Part of the Computer Engineering Commons

Recommended Citation
Cole, Stephen V., "Efficiently and Transparently Maintaining High SIMD Occupancy in the Presence of Wavefront Irregularity" (2017). Engineering and Applied Science Theses & Dissertations. 301. https://openscholarship.wustl.edu/eng_etds/301

This Dissertation is brought to you for free and open access by the McKelvey School of Engineering at Washington University Open Scholarship. It has been accepted for inclusion in Engineering and Applied Science Theses & Dissertations by an authorized administrator of Washington University Open Scholarship. For more information, please contact [email protected].

WASHINGTON UNIVERSITY IN ST. LOUIS
School of Engineering and Applied Science
Department of Computer Science and Engineering

Dissertation Examination Committee:
Jeremy Buhler, Chair
James Buckley
Roger Chamberlain
Chris Gill
I-Ting Angelina Lee

Efficiently and Transparently Maintaining High SIMD Occupancy in the Presence of Wavefront Irregularity
by
Stephen V. Cole

A dissertation presented to The Graduate School of Washington University in partial fulfillment of the requirements for the degree of Doctor of Philosophy

August 2017
Saint Louis, Missouri

© 2017, Stephen V. Cole

Table of Contents

List of Figures
List of Tables
Acknowledgments
Abstract

Chapter 1: Background and Context
  1.1 Motivation
  1.2 Research Questions and Direction
    1.2.1 The Streaming Paradigm
  1.3 Related Work
    1.3.1 Non-streaming Approaches to Parallelism
    1.3.2 Context Within the Streaming Paradigm
    1.3.3 Level of Developer Abstraction
    1.3.4 Road Map and Contributions

Chapter 2: Wavefront Irregularity Case Study: Woodstocc
  2.1 Overview
  2.2 Background
    2.2.1 Problem Definition
    2.2.2 Related Work
  2.3 Alignment Strategy
    2.3.1 Core Algorithm
    2.3.2 Read Batching
  2.4 GPU Implementation
    2.4.1 Application Mapping to GPU
    2.4.2 Kernel Organization to Maximize Occupancy
    2.4.3 Decoupling Traversal from DP
    2.4.4 Boosting Block Occupancy via Kernel Slicing
  2.5 Results
    2.5.1 Impact of Worklist and Dynamic Mapping
    2.5.2 Kernel Stage Decoupling
    2.5.3 Overall Performance and Limitations
  2.6 Summary and Relevance

Chapter 3: Mercator Overview and Results
  3.1 Mercator Application Model
    3.1.1 Topologies and Deadlock Avoidance
  3.2 Parallelization on a SIMD Platform
    3.2.1 Parallel Semantics and Remapping Support
    3.2.2 Efficient Support for Runtime Remapping
  3.3 System Implementation
    3.3.1 Developer Interface
    3.3.2 Infrastructure
  3.4 Experimental Evaluation
    3.4.1 Design of Performance Experiments
    3.4.2 Results and Discussion
  3.5 Conclusion

Chapter 4: Mercator Dataflow Model and Scheduler
  4.1 Model Introduction / Background
    4.1.1 Example Target Platform: GPUs
    4.1.2 Related Work
  4.2 Function-Centric (FC) Model
    4.2.1 Structure
    4.2.2 Execution
    4.2.3 Example Applications
  4.3 Mercator and the FC Model
    4.3.1 Merged DFGs and Deadlock Potential in the FC Model
    4.3.2 Deadlock Avoidance with Mercator Queues
  4.4 Mercator Scheduler: Efficient Internal Execution
    4.4.1 Feasible Fireability Requirements
    4.4.2 Feasible Fireability Calculation
  4.5 Mercator Scheduler: Firing Strategy
    4.5.1 Mercator Scheduler: Results
  4.6 Conclusion

Chapter 5: Conclusion and Future Work
  5.1 Optimization Opportunities
    5.1.1 Variable Thread-to-Work Module Mappings
    5.1.2 Module Fusion/Fission
  5.2 Extensions to New Application Classes
    5.2.1 Applications with "join" Semantics
    5.2.2 Iterative Applications
    5.2.3 Bulk Synchronous Parallel (BSP) Applications
    5.2.4 Case Study: Loopy Belief Propagation for Stereo Image Depth Inference
    5.2.5 Graph Processing Applications
    5.2.6 Applications with Dynamic Dataflow
    5.2.7 Applications with Multiple Granularities of Operation
    5.2.8 Footnote: Infrastructure Module Novelty

Appendix A: Mercator User Manual
  A.1 Introduction
    A.1.1 Intended Use Case: Modular Irregular Streaming Applications
    A.1.2 Mercator Application Terminology
    A.1.3 System Overview and Basic Workflow
  A.2 Mercator System Documentation
    A.2.1 Initial Input
    A.2.2 System Feedback
    A.2.3 Full Program
    A.2.4 Example Application Code

References

List of Figures

Figure 1.1: NCBI SRA growth.
Figure 1.2: Processor categories from Flynn's taxonomy, including SIMD.
Figure 1.3: Irregular-wavefront work assignment challenge.
Figure 1.4: Example of streaming pipeline: VaR simulation.
Figure 1.5: Comparison between monolithic application structure and modularized streaming structure for rectification of wavefront irregularity.
Figure 1.6: Spectrum of parallelism granularity targeted by various systems.
Figure 1.7: Spectrum of developer abstraction from implementation of parallelism.
Figure 2.1: Example alignment of a short read to a longer reference sequence.
Figure 2.2: Example suffix trie for a string.
Figure 2.3: Snapshot of suffix tree alignment using dynamic programming.
Figure 2.4: Snapshot of batched traversal method for dynamic programming suffix tree alignment.
Figure 2.5: Multiple-granularity SIMD work assignment challenge for batched traversal.
Figure 2.6: Idle reads SIMD work assignment challenge for batched traversal.
Figure 2.7: Stages of the Woodstocc GPU kernel main loop.
Figure 2.8: CDF of loop iterations by block for Woodstocc.
Figure 2.9: Major Woodstocc performance result: Woodstocc vs. BWA.
Figure 3.1: Example application topologies, including pipelines and trees with or without back edges.
Figure 3.2: Schematic of steps involved in firing a Mercator module.
Figure 3.3: Dataflow graph for an example Mercator application.
Figure 3.4: Snapshot of an ensemble gather calculation on a set of three node queues.
Figure 3.5: Snapshot of a scatter calculation for an output channel of a module with four nodes.
Figure 3.6: Major components of the Mercator framework.
Figure 3.7: Workflow for writing a program incorporating a Mercator application.
Figure 3.8: Pseudocode of application graph traversal to mark cycle components.
Figure 3.9: Pseudocode of postorder graph traversal to verify cycle validity.
Figure 3.10: Topologies of single-pipeline synthetic benchmarks.
Figure 3.11: Topologies of multi-pipeline