Efficiently and Transparently Maintaining High SIMD Occupancy in the Presence of Wavefront Irregularity
Stephen V. Cole

Total pages: 16
File type: PDF, size: 1020 KB

Washington University in St. Louis, Washington University Open Scholarship: Engineering and Applied Science Theses & Dissertations, McKelvey School of Engineering, Summer 8-15-2017. Available at: https://openscholarship.wustl.edu/eng_etds (Part of the Computer Engineering Commons).

Recommended Citation: Cole, Stephen V., "Efficiently and Transparently Maintaining High SIMD Occupancy in the Presence of Wavefront Irregularity" (2017). Engineering and Applied Science Theses & Dissertations. 301. https://openscholarship.wustl.edu/eng_etds/301

WASHINGTON UNIVERSITY IN ST. LOUIS
School of Engineering and Applied Science
Department of Computer Science and Engineering

Dissertation Examination Committee:
Jeremy Buhler, Chair
James Buckley
Roger Chamberlain
Chris Gill
I-Ting Angelina Lee

Efficiently and Transparently Maintaining High SIMD Occupancy in the Presence of Wavefront Irregularity
by Stephen V. Cole

A dissertation presented to The Graduate School of Washington University in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
August 2017, Saint Louis, Missouri
© 2017, Stephen V. Cole

Table of Contents

List of Figures
List of Tables
Acknowledgments
Abstract

Chapter 1: Background and Context
  1.1 Motivation
  1.2 Research Questions and Direction
    1.2.1 The Streaming Paradigm
  1.3 Related Work
    1.3.1 Non-streaming Approaches to Parallelism
    1.3.2 Context Within the Streaming Paradigm
    1.3.3 Level of Developer Abstraction
    1.3.4 Road Map and Contributions

Chapter 2: Wavefront Irregularity Case Study: Woodstocc
  2.1 Overview
  2.2 Background
    2.2.1 Problem Definition
    2.2.2 Related Work
  2.3 Alignment Strategy
    2.3.1 Core Algorithm
    2.3.2 Read Batching
  2.4 GPU Implementation
    2.4.1 Application Mapping to GPU
    2.4.2 Kernel Organization to Maximize Occupancy
    2.4.3 Decoupling Traversal from DP
    2.4.4 Boosting Block Occupancy via Kernel Slicing
  2.5 Results
    2.5.1 Impact of Worklist and Dynamic Mapping
    2.5.2 Kernel Stage Decoupling
    2.5.3 Overall Performance and Limitations
  2.6 Summary and Relevance

Chapter 3: Mercator Overview and Results
  3.1 Mercator Application Model
    3.1.1 Topologies and Deadlock Avoidance
  3.2 Parallelization on a SIMD Platform
    3.2.1 Parallel Semantics and Remapping Support
    3.2.2 Efficient Support for Runtime Remapping
  3.3 System Implementation
    3.3.1 Developer Interface
    3.3.2 Infrastructure
  3.4 Experimental Evaluation
    3.4.1 Design of Performance Experiments
    3.4.2 Results and Discussion
  3.5 Conclusion

Chapter 4: Mercator Dataflow Model and Scheduler
  4.1 Model Introduction / Background
    4.1.1 Example Target Platform: GPUs
    4.1.2 Related Work
  4.2 Function-Centric (FC) Model
    4.2.1 Structure
    4.2.2 Execution
    4.2.3 Example Applications
  4.3 Mercator and the FC Model
    4.3.1 Merged DFGs and Deadlock Potential in the FC Model
    4.3.2 Deadlock Avoidance with Mercator Queues
  4.4 Mercator Scheduler: Efficient Internal Execution
    4.4.1 Feasible Fireability Requirements
    4.4.2 Feasible Fireability Calculation
  4.5 Mercator Scheduler: Firing Strategy
    4.5.1 Mercator Scheduler: Results
  4.6 Conclusion

Chapter 5: Conclusion and Future Work
  5.1 Optimization Opportunities
    5.1.1 Variable Thread-to-Work Module Mappings
    5.1.2 Module Fusion/Fission
  5.2 Extensions to New Application Classes
    5.2.1 Applications with “join” Semantics
    5.2.2 Iterative Applications
    5.2.3 Bulk Synchronous Parallel (BSP) Applications
    5.2.4 Case study: Loopy Belief Propagation for Stereo Image Depth Inference
    5.2.5 Graph Processing Applications
    5.2.6 Applications with Dynamic Dataflow
    5.2.7 Applications with Multiple Granularities of Operation
    5.2.8 Footnote: Infrastructure Module Novelty

Appendix A: Mercator User Manual
  A.1 Introduction
    A.1.1 Intended Use Case: Modular Irregular Streaming Applications
    A.1.2 Mercator Application Terminology
    A.1.3 System Overview and Basic Workflow
  A.2 Mercator System Documentation
    A.2.1 Initial Input
    A.2.2 System Feedback
    A.2.3 Full Program
    A.2.4 Example Application Code

References

List of Figures

Figure 1.1: NCBI SRA growth.
Figure 1.2: Processor categories from Flynn’s taxonomy, including SIMD.
Figure 1.3: Irregular-wavefront work assignment challenge.
Figure 1.4: Example of a streaming pipeline: VaR simulation.
Figure 1.5: Comparison between monolithic application structure and modularized streaming structure for rectification of wavefront irregularity.
Figure 1.6: Spectrum of parallelism granularity targeted by various systems.
Figure 1.7: Spectrum of developer abstraction from implementation of parallelism.
Figure 2.1: Example alignment of a short read to a longer reference sequence.
Figure 2.2: Example suffix trie for a string.
Figure 2.3: Snapshot of suffix tree alignment using dynamic programming.
Figure 2.4: Snapshot of batched traversal method for dynamic programming suffix tree alignment.
Figure 2.5: Multiple-granularity SIMD work assignment challenge for batched traversal.
Figure 2.6: Idle reads SIMD work assignment challenge for batched traversal.
Figure 2.7: Stages of the Woodstocc GPU kernel main loop.
Figure 2.8: CDF of loop iterations by block for Woodstocc.
Figure 2.9: Major Woodstocc performance result: Woodstocc vs. BWA.
Figure 3.1: Example application topologies, including pipelines and trees with or without back edges.
Figure 3.2: Schematic of steps involved in firing a Mercator module.
Figure 3.3: Dataflow graph for an example Mercator application.
Figure 3.4: Snapshot of an ensemble gather calculation on a set of three node queues.
Figure 3.5: Snapshot of a scatter calculation for an output channel of a module with four nodes.
Figure 3.6: Major components of the Mercator framework.
Figure 3.7: Workflow for writing a program incorporating a Mercator application.
Figure 3.8: Pseudocode of application graph traversal to mark cycle components.
Figure 3.9: Pseudocode of postorder graph traversal to verify cycle validity.
Figure 3.10: Topologies of single-pipeline synthetic benchmarks.
Figure 3.11: Topologies of multi-pipeline
Recommended publications
  • High-Level and Efficient Stream Parallelism on Multi-Core Systems
    WSCAD 2017 - XVIII Simpósio em Sistemas Computacionais de Alto Desempenho. High-Level and Efficient Stream Parallelism on Multi-core Systems with SPar for Data Compression Applications. Dalvan Griebler (1), Renato B. Hoffmann (1), Junior Loff (1), Marco Danelutto (2), Luiz Gustavo Fernandes (1). (1) Faculty of Informatics (FACIN), Pontifical Catholic University of Rio Grande do Sul (PUCRS), GMAP Research Group, Porto Alegre, Brazil. (2) Department of Computer Science, University of Pisa (UNIPI), Pisa, Italy.
    Abstract: The stream processing domain is present in several real-world applications that run on multi-core systems. In this paper, we focus on data compression applications, an important subset of this domain. Our main goal is to assess the programmability and efficiency of the domain-specific language SPar, which was specially designed for expressing stream parallelism and promises higher-level parallelism abstractions without significant performance losses. We therefore parallelized the Lzip and Bzip2 compressors with SPar and compared them with state-of-the-art frameworks. The results revealed that SPar is able to efficiently exploit stream parallelism as well as provide suitable abstractions with less code intrusion and code refactoring.
    1. Introduction: Over the past decade, vendors realized that increasing clock frequency to gain performance was no longer possible. Companies were then forced to slow the clock frequency and start adding multiple processors to their chips. Since then, software has had to rely on parallel programming to increase performance [Sutter 2005]. However, exploiting parallelism in such multi-core architectures is a challenging task that is still too low-level and complex for application programmers.
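    SPar expresses stream parallelism through C++ annotations placed on an otherwise sequential loop. Below is a minimal sketch in that spirit, compressing line-sized blocks read from stdin. The attribute spellings ([[spar::ToStream]], [[spar::Stage]], [[spar::Input]], [[spar::Output]], [[spar::Replicate]]) follow the SPar literature but, together with the toy compress_block routine, should be read as illustrative assumptions rather than the authors' code; a standard compiler simply ignores the unknown attributes, so the file also builds and runs sequentially without the SPar source-to-source compiler.

      // Sketch of SPar-style annotated stream parallelism (assumed attribute names).
      #include <cstdio>
      #include <string>
      #include <utility>

      // Stand-in for a real compressor kernel (e.g., an Lzip/Bzip2 block encode).
      static std::string compress_block(std::string b) {
          return "<" + std::to_string(b.size()) + ">" + std::move(b);
      }

      int main() {
          char buf[4096];
          [[spar::ToStream]]                       // stream region: one item per input block
          while (std::fgets(buf, sizeof(buf), stdin)) {
              std::string block(buf);
              [[spar::Stage, spar::Input(block), spar::Output(block), spar::Replicate(4)]]
              {   // compute-heavy stage, replicated across 4 workers by the SPar compiler
                  block = compress_block(std::move(block));
              }
              [[spar::Stage, spar::Input(block)]]
              {   // ordered sink stage: emit compressed blocks in arrival order
                  std::fputs(block.c_str(), stdout);
              }
          }
          return 0;
      }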
  • Queue Streaming Model Theory, Algorithms, and Implementation
    Mississippi State University, Scholars Junction: Theses and Dissertations, 1-1-2019. Follow this and additional works at: https://scholarsjunction.msstate.edu/td
    Recommended Citation: Zope, Anup D., "Queue Streaming Model Theory, Algorithms, and Implementation" (2019). Theses and Dissertations. 3708. https://scholarsjunction.msstate.edu/td/3708
    Queue streaming model: Theory, algorithms, and implementation. A dissertation submitted by Anup D. Zope to the Faculty of Mississippi State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computational Engineering in the Center for Advanced Vehicular Systems. Mississippi State, Mississippi, May 2019. Copyright by Anup D. Zope, 2019.
    Approved: Edward A. Luke (Major Professor), Shanti Bhushan, Maxwell Young, J. Mark Janus, Seongjai Kim (Committee Members), Manav Bhatia (Graduate Coordinator), Jason M. Keith (Dean, Bagley College of Engineering).
    Date of Degree: May 3, 2019. Major Field: Computational Engineering. Pages of Study: 195.
    In this work, a model of computation for shared memory parallelism is presented.
  • Pash: Light-Touch Data-Parallel Shell Processing
    PaSh: Light-touch Data-Parallel Shell Processing. Nikos Vasilakis (MIT), Konstantinos Kallas (University of Pennsylvania), Konstantinos Mamouras (Rice University), Achilles Benetopoulos (Unaffiliated), Lazar Cvetković (University of Belgrade).
    Abstract: This paper presents PaSh, a system for parallelizing POSIX shell scripts. Given a script, PaSh converts it to a dataflow graph, performs a series of semantics-preserving program transformations that expose parallelism, and then converts the dataflow graph back into a script—one that adds POSIX constructs to explicitly guide parallelism, coupled with PaSh-provided Unix-aware runtime primitives for addressing performance- and correctness-related issues. A lightweight annotation language allows command developers to express key parallelizability properties about their commands. An accompanying parallelizability study of POSIX and GNU commands—two large and commonly used groups—guides the annotation language and optimized aggregator library that PaSh uses. PaSh's extensive evaluation over 44 unmodified Unix scripts shows significant speedups (0.89–61.1×).
    [Fig. 1. PaSh overview. PaSh identifies dataflow regions (§4.1), converts them to dataflow graphs (§4.2), applies transformations (§4.3) based on the parallelizability properties of the commands in these regions (§3.1, §3.2), and emits a parallel script (§4.4) that uses custom primitives (§5).]
    Command developers are responsible for implementing individual commands such as sort, uniq, and jq. These developers usually work in a single programming language, leveraging its abstractions to provide parallelism whenever possible.
  • A Personal Analytics Platform for the Internet of Things Implementing Kappa Architecture with Microservice-Based Stream Processing
    A Personal Analytics Platform for the Internet of Things: Implementing Kappa Architecture with Microservice-based Stream Processing. Theo Zschörnig (Institute for Applied Informatics (InfAI), Leipzig University), Robert Wehlitz (InfAI, Leipzig University), Bogdan Franczyk (Information Systems Institute, Leipzig University; Business Informatics Institute, Wrocław University of Economics).
    Keywords: Personal Analytics, Internet of Things, Kappa Architecture, Microservices, Stream Processing.
    Abstract: The foundation of the Internet of Things (IoT) consists of different devices, equipped with sensors, actuators and tags. With the emergence of IoT devices and home automation, the advantages of data analysis are no longer limited to businesses and industry. Personal analytics focuses on data created by individuals and used by them. Current IoT analytics architectures are not designed to respond to the needs of personal analytics. In this paper, we propose a lightweight, flexible analytics architecture based on the concept of the Kappa Architecture and microservices. It aims to provide an analytics platform for huge numbers of different scenarios with limited data volume and different rates of data velocity. Furthermore, the motivation for and challenges of personal analytics in the IoT are laid out and explained, as well as the technological approaches we use to overcome the shortcomings of current IoT analytics architectures.
    1 INTRODUCTION: It is estimated that the number of Internet of Things (IoT) devices will grow in huge quantities, to around 24 billion in the year 2020 (Greenough, 2016). [...] also the opportunities they provide (Section 2). We give an overview of the state of the art in IoT analytics regarding technologies and architectures and show that these are not fully suitable for personal analytics [...]
  • Paving the Way Towards High-Level Parallel Pattern Interfaces for Data Stream Processing
    This is a postprint version of the following published document: Del Río Astorga, D., Dolz, M. F., Fernández, J., García, J. D. (2018). Paving the way towards high-level parallel pattern interfaces for data stream processing. Future Generation Computer Systems, 87, pp. 228-241. DOI: https://doi.org/10.1016/j.future.2018.05.011. © 2018 Elsevier B.V. All rights reserved. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
    Paving the Way Towards High-Level Parallel Pattern Interfaces for Data Stream Processing. David del Rio Astorga, Manuel F. Dolz, Javier Fernández and J. Daniel García (Universidad Carlos III de Madrid, 28911 Leganés, Spain).
    The emergence of data stream applications has posed a number of new challenges to existing infrastructures, processing engines and programming models. In this sense, high-level interfaces, encapsulating algorithmic aspects in pattern-based constructions, have considerably reduced the development and parallelization efforts of these types of applications. An example of a parallel pattern interface is GrPPI, a C++ generic high-level library that acts as a layer between developers and existing parallel programming frameworks, such as C++ threads, OpenMP and Intel TBB. In this paper, we complement the basic patterns supported by GrPPI with the new stream operators Split-Join and Window, and the advanced parallel patterns Stream-Pool, Windowed-Farm and Stream-Iterator for the aforementioned back ends. Thanks to these new stream operators, complex compositions among streaming patterns can be expressed. On the other hand, the collection of advanced patterns allows users to tackle domain-specific applications, ranging from the evolutionary to the real-time computing areas, where compositions of basic patterns are not capable of fully mimicking the algorithmic behavior of their original sequential codes.
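    The pattern-based interfaces described above let a developer compose a streaming computation from a generator, worker stages and a sink without writing the threading code themselves. The sketch below is a hand-rolled, sequential illustration of that interface style; it is not the GrPPI API, and the pipeline() helper and stage lambdas are assumptions made purely for exposition. As the abstract notes, GrPPI itself layers such patterns over C++ threads, OpenMP and Intel TBB, so the stage code stays back-end agnostic.

      // Minimal, sequential illustration of a pattern-based streaming interface
      // (not the GrPPI API): the iteration/termination logic lives in pipeline(),
      // and the application supplies only the stage callables.
      #include <iostream>
      #include <optional>
      #include <utility>

      // Run the generator until it returns an empty optional; push each item
      // through the transformer stage and then into the sink.
      template <typename Gen, typename Stage, typename Sink>
      void pipeline(Gen gen, Stage stage, Sink sink) {
          while (auto item = gen()) {
              sink(stage(std::move(*item)));
          }
      }

      int main() {
          int n = 0;
          pipeline(
              // generator stage: produce the integers 1..10 as a stream
              [&]() -> std::optional<int> {
                  return (n < 10) ? std::optional<int>(++n) : std::nullopt;
              },
              // worker stage: in a pattern library this could be wrapped in a farm
              [](int x) { return x * x; },
              // sink stage: consume results in order
              [](int y) { std::cout << y << '\n'; });
          return 0;
      }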
  • A Complete Bibliography of Publications in the International
    A Complete Bibliography of Publications in The International Journal of Supercomputer Applications, The International Journal of Supercomputer Applications and High-Performance Computing, and The International Journal of High Performance Computing Applications. Nelson H. F. Beebe, University of Utah, Department of Mathematics, 110 LCB, 155 S 1400 E RM 233, Salt Lake City, UT 84112-0090, USA. Tel: +1 801 581 5254, FAX: +1 801 581 4148. WWW URL: http://www.math.utah.edu/~beebe/. 18 May 2021, Version 2.03.
    Title word cross-reference: an index of title words mapped to citation keys.
  • Packet-Level Analytics in Software Without Compromises
    Packet-Level Analytics in Software without Compromises. Oliver Michel (University of Colorado Boulder), John Sonchack (University of Pennsylvania), Eric Keller (University of Colorado Boulder), Jonathan M. Smith (University of Pennsylvania).
    Abstract: Traditionally, network monitoring and analytics systems rely on aggregation (e.g., flow records) or sampling to cope with high packet rates. This has the downside that, in doing so, we lose data granularity and accuracy and, in general, limit the possible network analytics we can perform. Recent proposals leveraging software-defined networking or programmable hardware provide more fine-grained, per-packet monitoring but are still based on the fundamental principle of data reduction in the network, before analytics. In this paper, we provide a first step towards a cloud-scale, packet-level monitoring and analytics system based on stream processing entirely in software. Software provides virtually unlimited programmability and makes modern (e.g., machine- [...]
    [...] just could not process the information fast enough. These approaches, of course, reduce information: aggregation reduces the load of the analytics system at the cost of granularity, as per-packet data is reduced to groups of packets in the form of sums or counts [3, 16]. Sampling and filtering reduce the number of packets or flows to be analyzed. Reducing information reduces load, but it also increases the chance of missing critical information and restricts the set of possible applications [30, 28]. Recent advances in software-defined networking (SDN) and more programmable hardware have provided opportunities for more fine-grained monitoring, towards packet-level network analytics. Packet-level analytics systems provide the benefit of complete insight into the network and open up opportunities for applications that require per-packet data in the network [37].
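    The contrast the abstract draws between aggregation and packet-level analysis can be made concrete with a small example: folding packets into per-flow byte counters discards exactly the per-packet detail (here, inter-arrival gaps) that a packet-level stream operator can still observe. The PacketRecord type and both functions below are illustrative assumptions, not code from the system the paper describes.

      // Sketch: flow-level aggregation vs. packet-level analysis over the same records.
      #include <cstdint>
      #include <iostream>
      #include <unordered_map>
      #include <vector>

      struct PacketRecord {                  // simplified per-packet metadata record
          std::uint32_t src_ip, dst_ip;
          std::uint16_t src_port, dst_port;
          std::uint16_t length;              // bytes on the wire
          std::uint64_t ts_ns;               // capture timestamp, nanoseconds
      };

      // Flow-record style aggregation: per-packet detail collapses into counters.
      // (Key simplified to the IP pair; a real collector would include ports/protocol.)
      std::unordered_map<std::uint64_t, std::uint64_t>
      aggregate(const std::vector<PacketRecord>& pkts) {
          std::unordered_map<std::uint64_t, std::uint64_t> bytes_per_flow;
          for (const auto& p : pkts)
              bytes_per_flow[(std::uint64_t(p.src_ip) << 32) | p.dst_ip] += p.length;
          return bytes_per_flow;
      }

      // Packet-level analytics: every record flows through the operator, so
      // per-packet properties such as inter-arrival gaps remain observable.
      void per_packet_analysis(const std::vector<PacketRecord>& pkts) {
          std::uint64_t prev_ts = 0;
          for (const auto& p : pkts) {
              if (prev_ts != 0 && p.ts_ns - prev_ts > 1'000'000)   // gap > 1 ms
                  std::cout << "burst boundary at t=" << p.ts_ns << " ns\n";
              prev_ts = p.ts_ns;
          }
      }

      int main() {
          std::vector<PacketRecord> pkts = {
              {0x0a000001, 0x0a000002, 1234, 80, 1500, 1'000'000},
              {0x0a000001, 0x0a000002, 1234, 80, 1500, 1'200'000},
              {0x0a000001, 0x0a000002, 1234, 80,  400, 5'000'000},
          };
          std::cout << "flows after aggregation: " << aggregate(pkts).size() << "\n";
          per_packet_analysis(pkts);
          return 0;
      }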
  • Balancing Performance and Reliability in Software Components
    Balancing Performance and Reliability in Software Components, by Javier López-Gómez. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science and Technology, Universidad Carlos III de Madrid. Advisors: José Daniel García Sánchez and Javier Fernández Muñoz. October 2020. This thesis is distributed under the license "Creative Commons Attribution - Non Commercial - Non Derivatives".
    Acknowledgements: To my parents. I want to dedicate this page to remembering the people who accompanied me during this stage, and those who, sadly, could not. First of all, my parents. I owe you everything I am and will be. Thank you for the care and education I received, for the good (and bad) moments, for those pool days at Playa de Madrid, for the barbecues at the Fundación del Lesionado Medular/ASPAYM Madrid, for the meals at "La Bodeguita" and at the Centro Municipal de Mayores Ascao. Thanks to you I learned to be strong. To Mom, because you taught me the most important thing, to smile in the face of adversity, and other less important things, like keeping a home. To Dad, for teaching me much of what I know, balance, and for lighting my way; even if it was by accident, without you my vocation would be another. I will never be able to thank you as much as you deserved. R.I.P. Secondly, my colleagues: Manu, Javi Doc, and David. For all the moments shared in the lab and outside it, and for helping me and pushing me to work. Without you, I doubt I would be writing these lines now.
  • Raftlib: a C++ Template Library for High Performance Stream Parallel Processing
    RaftLib: A C++ Template Library for High Performance Stream Parallel Processing. Jonathan C. Beard, Peng Li and Roger D. Chamberlain, Dept. of Computer Science and Engineering, Washington University in St. Louis. In Proc. of 6th Int'l Workshop on Programming Models and Applications for Multicores and Manycores (PMAM), February 2015, pp. 96-105.
    Abstract: Stream processing or data-flow programming is a compute paradigm that has been around for decades in many forms yet has failed to garner the same attention as other mainstream languages and libraries (e.g., C++ or OpenMP [15]). Stream processing has great promise: the ability to safely exploit extreme levels of parallelism. There have been many implementations, both libraries and full languages. The full languages implicitly assume that the streaming paradigm cannot be fully exploited in legacy languages, while library approaches are often preferred for being integrable with the vast expanse of legacy code that exists in the wild. Libraries, however, are often criticized for yielding to the shape of their respective languages. RaftLib aims to fully exploit the stream processing paradigm, enabling a full spectrum of [...]
    [Figure 1: Simple streaming application example with four compute kernels of three distinct types. From left to right: the two source kernels each provide a number stream, the "sum" kernel adds pairs of numbers and the last kernel prints the result.]
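    The topology described in Figure 1 (two number sources feeding a "sum" kernel whose output a print kernel consumes) can be sketched with plain standard C++ containers to show what each kernel computes. The queue-based wiring below is an illustrative stand-in only, not the actual RaftLib kernel/port API, and the function names are assumptions.

      // Sketch of the Figure 1 topology with explicit queues instead of RaftLib ports.
      #include <iostream>
      #include <queue>

      // Each "kernel" is modeled as a function over explicit streams (queues);
      // RaftLib instead wires kernel objects together through named ports.
      using Stream = std::queue<int>;

      void source(Stream& out, int start, int count) {   // produce a number stream
          for (int i = 0; i < count; ++i) out.push(start + i);
      }

      void sum(Stream& a, Stream& b, Stream& out) {      // add pairs of numbers
          while (!a.empty() && !b.empty()) {
              out.push(a.front() + b.front());
              a.pop();
              b.pop();
          }
      }

      void print(Stream& in) {                           // sink: print each result
          while (!in.empty()) {
              std::cout << in.front() << '\n';
              in.pop();
          }
      }

      int main() {
          Stream s1, s2, sums;
          source(s1, 0, 5);     // first source kernel
          source(s2, 100, 5);   // second source kernel
          sum(s1, s2, sums);    // "sum" kernel joins the two streams pairwise
          print(sums);          // print kernel consumes the result stream
          return 0;
      }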
  • User-Defined Execution Relaxations for Enhanced Programmability in High-Performance Parallel Computing
    User-defined Execution Relaxations for Enhanced Programmability in High-Performance Parallel Computing (Relajaciones de ejecución definidas por el usuario para la mejora de la programabilidad en computación paralela de altas prestaciones). Doctoral thesis presented by Andrés Antón Rey Villaverde for the degree of Doctor in Computer Science, Facultad de Informática, Universidad Complutense de Madrid, Madrid, 2019. Advisors: Francisco Daniel Igual Peña and Manuel Prieto Matías. © Andrés Antón Rey Villaverde, 2019.
    This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
    I hereby declare that all the content presented in this thesis entitled "User-defined Execution Relaxations for Enhanced Programmability in High-Performance Parallel Computing" has been developed by me, and all other content has been appropriately referenced.
  • Online Modeling and Tuning of Parallel Stream Processing Systems Jonathan Curtis Beard Washington University in St
    Washington University in St. Louis, Washington University Open Scholarship: Engineering and Applied Science Theses & Dissertations, McKelvey School of Engineering, Summer 8-15-2015. Available at: https://openscholarship.wustl.edu/eng_etds (Part of the Engineering Commons).
    Recommended Citation: Beard, Jonathan Curtis, "Online Modeling and Tuning of Parallel Stream Processing Systems" (2015). Engineering and Applied Science Theses & Dissertations. 125. https://openscholarship.wustl.edu/eng_etds/125
    Washington University in St. Louis, School of Engineering and Applied Science, Department of Computer Science and Engineering. Dissertation Examination Committee: Roger Dean Chamberlain (Chair), Jeremy Buhler, Ron Cytron, Roch Guerin, Jenine Harris, Juan Pantano, Robert B. Pless.
    Online Modeling and Tuning of Parallel Stream Processing Systems, by Jonathan Curtis Beard. A dissertation presented to the Graduate School of Arts and Sciences of Washington University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. August 2015, Saint Louis, Missouri.
  • Intrinsic Compilation Model to Enhance Performance of Real Time
    ISSN (Print): 2319-8613; ISSN (Online): 0975-4024. Sumalatha Aradhya et al. / International Journal of Engineering and Technology (IJET).
    Intrinsic Compilation Model to Enhance Performance of Real Time Application in Embedded Multi Core System. Sumalatha Aradhya (Research Scholar, Department of Computer Science and Engineering, R V College of Engineering, Mysore Road, Bengaluru, Karnataka, India) and Dr. N K Srinath (Professor and Dean Academics, R V College of Engineering, Mysore Road, Bengaluru, Karnataka, India).
    Abstract: The embedded multi-core system has critical performance issues. This is due to extra optimization code added by the compiler. In many cases, performance becomes challenging due to the increased cyclomatic complexity of the embedded software, especially in the release version of the code. To ease the extra complexities of the software, a better optimization approach through an intrinsic compilation model is proposed in the paper. The proposed intrinsic compilation model handles software-to-hardware integrity complexities through the vector dynamic interface algorithm discussed in the paper. The vector dynamic interface algorithm resolves conflicts of optimization across multi-core targets by using an optimizer conflict resolver algorithm. The optimizer conflict resolver algorithm is computed by recurrence logic derived through probabilities using a vector rule algorithm. The intrinsic compilation model makes use of effective partitioning logic to obtain optimized vector code data. The benchmark shows improved speed-up results between static code and vector code.
    Keywords: compiler, intrinsic programming, parallel processing, performance, optimization, simulation.
    I. INTRODUCTION: A compiler set up with high optimization levels is not used in an embedded integrated environment due to the enormous amount of volatile data present in the code.