3.15 Speedup and Stall Rate for Livermore Kernels 1 and 7

Total pages: 16

File type: PDF, size: 1020 KB
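
Figure 3.15 of the thesis excerpted below plots speedup and stall rate for Livermore loop kernels 1 and 7. For context, these are two of the classic Livermore loops; a minimal C sketch of what the two kernels compute is given here (the array names, trip count n and scalar coefficients follow the customary C transcription of the benchmark and are assumptions, not code taken from the thesis):

/* Livermore kernel 1: hydrodynamics fragment (sketch). */
void kernel1(int n, double q, double r, double t,
             double *x, const double *y, const double *z)
{
    for (int k = 0; k < n; k++)
        x[k] = q + y[k] * (r * z[k + 10] + t * z[k + 11]);
}

/* Livermore kernel 7: equation-of-state fragment (sketch). */
void kernel7(int n, double q, double r, double t,
             double *x, const double *u, const double *y, const double *z)
{
    for (int k = 0; k < n; k++)
        x[k] = u[k] + r * (z[k] + r * y[k])
             + t * (u[k + 3] + r * (u[k + 2] + r * u[k + 1])
                  + t * (u[k + 6] + q * (u[k + 5] + q * u[k + 4])));
}

Neither loop carries a dependence between iterations, which is why both map naturally onto a family of independent threads in a model such as SVP.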

UvA-DARE (Digital Academic Repository)

On the compilation of a parallel language targeting the self-adaptive virtual processor
Bernard, T.A.M.

Publication date: 2011
Document version: Final published version
Link to publication

Citation for published version (APA):
Bernard, T. A. M. (2011). On the compilation of a parallel language targeting the self-adaptive virtual processor. Print partners Ipskamp.

General rights
It is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), other than for strictly personal, individual use, unless the work is under an open content license (like Creative Commons).

Disclaimer/Complaints regulations
If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please ask the Library: https://uba.uva.nl/en/contact, or write a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.

UvA-DARE is a service provided by the library of the University of Amsterdam (https://dare.uva.nl)
Download date: 27 Sep 2021

INVITATION
You are invited to attend the public defense of my thesis on Friday, March 11th 2011 at 12:00 in the Agnietenkapel, Oudezijds Voorburgwal 231, Amsterdam. Reception afterwards.
Thomas A.M. Bernard

On the Compilation of a Parallel Language targeting the Self-Adaptive Virtual Processor
Thomas A.M. Bernard
ISBN 978-90-9026006-8

ACADEMISCH PROEFSCHRIFT (academic dissertation) for the degree of doctor at the Universiteit van Amsterdam, on the authority of the Rector Magnificus, prof. dr. D.C. van den Boom, to be defended in public before a committee appointed by the Doctorate Board, in the Agnietenkapel on Friday 11 March 2011 at 12:00, by Thomas Antoine Marie Bernard, born in Meaux, France.

Doctoral committee (Promotiecommissie):
Promotor: prof. dr. C.R. Jesshope
Other members: prof. dr. P. Klint, prof. dr. G.J.M. Smit, dr. M. Beemster, dr. C.U. Grelck
Faculty: Faculty of Science (Faculteit der Natuurwetenschappen, Wiskunde en Informatica)

The work described in this thesis was carried out in the section of Computer Systems Architecture of the University of Amsterdam, with the financial support of:
  • the University of Amsterdam,
  • the NWO Microgrids project,
  • the European FP-7 Apple-CORE project,
  • the Advanced School for Computing and Imaging (ASCI).

© Copyright 2010 by Thomas A.M. Bernard
ISBN 978-90-9026006-8
ASCI dissertation series number 211.
Author contact: [email protected] or [email protected]
Print partners Ipskamp, Enschede

"Our greatest glory is not in never falling, but in rising every time we fall." (Confucius)

Contents

1 Introduction
1.1 Classical microprocessor improvements
1.2 Multicore architectures
1.3 Exploiting concurrency as a solution
1.4 Impact of concurrency on software systems
1.5 Contribution of this thesis
1.6 Overview of this thesis

I Foundations

2 Background in parallel computing systems
2.1 Approaches in concurrent execution models
2.2 Relevant parallel architectures
2.3 Modeling concurrency in compilers
2.4 Requirements for a concurrent execution model

3 SVP Execution Model and its Implementations
3.1 Our approach to multicore programming
3.2 Presentation of the SVP execution model
3.3 Hardware implementation: Microgrid
3.4 Software implementation: µTC language
3.5 SVP system performance
3.6 Discussion and conclusion

II Compilation for Parallel Computing Systems

4 From basics to advanced SVP compilation
4.1 Basics in compiler transformations
4.2 SVP compilation schemes
4.3 Under the hood of SVP compilation
4.4 Conclusion

5 On the challenges of optimizations
5.1 Hazards with optimizations
5.2 Investigating some optimizations
5.3 Discussion and conclusion

6 Implementing the SVP compiler
6.1 Role of the compiler
6.2 Compiler design decisions
6.3 Compilation challenges
6.4 Discussion and conclusion

7 SVP evaluation
7.1 Evaluation of SVP compilation
7.2 Evaluation of SVP computing system
7.3 Discussion and conclusion

III Discussion and conclusion

8 Discussion and conclusion
8.1 Thesis overview
8.2 Limitations
8.3 Future work
8.4 Conclusions

A µTC language syntax summary
Summary
Samenvatting (summary in Dutch)
Acknowledgements
Publications
Bibliography
Index

List of Figures

1.1 Partitioning a sequential cooking recipe into tasks
1.2 Communication between concurrent tasks of a cooking recipe
1.3 Synchronization between concurrent tasks of a cooking recipe
1.4 Management of concurrent tasks of a cooking recipe
1.5 Overview of a standard software system
1.6 The bridge between Software world and Hardware world
2.1 Computing system domains
3.1 SVP parallel computing system overview
3.2 Illustration of an SVP family creation
3.3 An SVP family
3.4 SVP inter-thread communication with a global channel
3.5 SVP inter-thread communication with a shared channel
3.6 SVP inter-thread communication channels
3.7 Illustration of an SVP concurrency tree
3.8 Different states of an SVP thread
3.9 SVP register window layout
3.10 Mapping of hardware registers to architectural registers
3.11 µTC example of a simplified reduction
3.12 Thread function definition
3.13 Gray area between create and sync
3.14 Functional diagram of a 16-core Microgrid
3.15 Speedup and stall rate for Livermore kernels 1 and 7
3.16 Speedup of sine function
3.17 Speedup of Livermore kernel 3
3.18 Performance of FFT
4.1 Basic compilation scheme T
4.2 Simplified compilation scheme T for a thread function
4.3 Compilation scheme T for a thread function
4.4 Compilation scheme for µTC create action
4.5 Compilation scheme for µTC break action
4.6 Compilation scheme T involving a C function call
4.7 Call gate inserted instead of function call
4.8 The compilation process as a black box
4.9 A simple compiler
4.10 A classic optimizing three-stage compiler
4.11 A modern optimizing three-stage compiler design
4.12 A work-flow representation of a compilation process
4.13 Composition of a µTC program with concurrent regions
4.14 Creation graph example of a program
4.15 The relationship of a single concurrent region
4.16 Control flows of sequential and concurrent paradigms
4.17 CFG representation of an SVP create block
4.18 DFG of an SVP shared synchronized communication channel
5.1 Example of optimization side-effects on SVP code
5.2 Example of optimization side-effects on communication channels
5.3 SSA transformation
5.4 Example of unreachable code
5.5 Example of valid code removal
5.6 CFG representation of thread function "foo"
5.7 Code example with CSE
5.8 Code example with PRE
5.9 Example of combining instruction
5.10 Example of copy propagation
5.11 Instruction reordering example
5.12 Instruction reordering example with create sequence
5.13 Dependency chain between operations
6.1 Compiler composition of GCC 4.1 Core Release
6.2 Location of changes in GCC-UTC
6.3 Shared object used as a token to enforce sequential constraints
7.1 Instruction mix of Livermore kernels in µTC
7.2 Comparison of code size between unoptimized and optimized code
7.3 Comparison of instruction size between hand-coded and compiled code
7.4 Comparison of execution cycles between hand-coded and compiled Livermore kernels
7.5 Functional diagram of a 64-core Microgrid
7.6 BLAS DNRM2 in µTC
7.7 Performance of DNRM2 on one SVP place
7.8 N/P parallel reduction for the inner product
7.9 IP performance, using N/P reduction
7.10 Performance of the ESF
7.11 Performance of the matrix-matrix product
7.12 Computation kernel for the 1-D FFT
7.13 Performance of the 1-D FFT

List of Tables

3.1 List of SVP instructions which can be added to an existing ISA
3.2 List of SVP register classes
3.3 List of µTC constructs
3.4 List of µTC types
3.5 Create parameters which set up the family definition

Chapter 1 Introduction

"I think there is a world market for maybe five computers." (Thomas J. Watson)
Recommended publications
  • Resourceable, Retargetable, Modular Instruction Selection Using a Machine-Independent, Type-Based Tiling of Low-Level Intermediate Code
    Reprinted from Proceedings of the 2011 ACM Symposium on Principles of Programming Languages (POPL'11). Norman Ramsey and João Dias, Department of Computer Science, Tufts University. [email protected], [email protected]
    Abstract: We present a novel variation on the standard technique of selecting instructions by tiling an intermediate-code tree. Typical compilers use a different set of tiles for every target machine. By analyzing a formal model of machine-level computation, we have developed a single set of tiles that is machine-independent while retaining the expressive power of machine code. Using this tileset, we reduce the number of tilers required from one per machine to one per architectural family (e.g., register architecture or stack architecture). Because the tiler is the part of the instruction selector that is most difficult to reason about, our technique makes it possible to retarget an instruction selector with significantly less effort than standard techniques.
    Our compiler infrastructure is built on C--, an abstraction that encapsulates an optimizing code generator so it can be reused with multiple source languages and multiple target machines (Peyton Jones, Ramsey, and Reig 1999; Ramsey and Peyton Jones 2000). C-- accommodates multiple source languages by providing two main interfaces: the C-- language is a machine-independent, language-independent target language for front ends; the C-- run-time interface is an API which gives the run-time system access to the states of suspended computations. C-- is not a universal intermediate language (Conway 1958) or a "write-once, run-anywhere" intermediate language encapsulating a rigidly defined compiler and run-time system (Lindholm and
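The tiling technique the abstract refers to is easiest to see in its classic "maximal munch" form, where the instruction selector walks an intermediate-code tree and covers the largest pattern it can at each node. The C sketch below illustrates that generic idea only; the node kinds, the MEM(ADD(e, CONST)) tile and the emitted pseudo-assembly are invented for illustration and are not the paper's machine-independent tileset.

#include <stdio.h>

typedef enum { CONST, TEMP, ADD, MUL, MEM } Op;

typedef struct Node {
    Op op;
    int value;                    /* CONST: literal; TEMP: register number */
    struct Node *left, *right;
} Node;

static int next_temp = 100;

/* Cover the tree rooted at n with tiles (largest match first), emit one
 * pseudo-instruction per tile, and return the temporary holding the result. */
static int munch(const Node *n) {
    switch (n->op) {
    case CONST: {
        int t = next_temp++;
        printf("  li   t%d, %d\n", t, n->value);
        return t;
    }
    case TEMP:
        return n->value;
    case MEM:
        /* Prefer the bigger tile MEM(ADD(e, CONST)): one load with offset. */
        if (n->left->op == ADD && n->left->right->op == CONST) {
            int base = munch(n->left->left);
            int t = next_temp++;
            printf("  ld   t%d, %d(t%d)\n", t, n->left->right->value, base);
            return t;
        } else {
            int addr = munch(n->left);
            int t = next_temp++;
            printf("  ld   t%d, 0(t%d)\n", t, addr);
            return t;
        }
    case ADD:
    case MUL: {
        int a = munch(n->left);
        int b = munch(n->right);
        int t = next_temp++;
        printf("  %s  t%d, t%d, t%d\n", n->op == ADD ? "add" : "mul", t, a, b);
        return t;
    }
    }
    return -1;
}

int main(void) {
    /* Select instructions for MUL(MEM(ADD(TEMP 1, CONST 8)), CONST 3). */
    Node r1   = { TEMP, 1, NULL, NULL };
    Node c8   = { CONST, 8, NULL, NULL };
    Node addr = { ADD, 0, &r1, &c8 };
    Node load = { MEM, 0, &addr, NULL };
    Node c3   = { CONST, 3, NULL, NULL };
    Node mul  = { MUL, 0, &load, &c3 };
    munch(&mul);
    return 0;
}

A machine-dependent tiler differs from this sketch mainly in which patterns it recognizes; the paper's contribution is to fix a single, machine-independent set of such patterns per architectural family.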
  • The LLVM Instruction Set and Compilation Strategy
    The LLVM Instruction Set and Compilation Strategy. Chris Lattner and Vikram Adve, University of Illinois at Urbana-Champaign. {lattner,vadve}@cs.uiuc.edu
    Abstract: This document introduces the LLVM compiler infrastructure and instruction set, a simple approach that enables sophisticated code transformations at link time, runtime, and in the field. It is a pragmatic approach to compilation, interfering with programmers and tools as little as possible, while still retaining extensive high-level information from source-level compilers for later stages of an application's lifetime. We describe the LLVM instruction set, the design of the LLVM system, and some of its key components.
    1 Introduction
    Modern programming languages and software practices aim to support more reliable, flexible, and powerful software applications, increase programmer productivity, and provide higher level semantic information to the compiler. Unfortunately, traditional approaches to compilation either fail to extract sufficient performance from the program (by not using interprocedural analysis or profile information) or interfere with the build process substantially (by requiring build scripts to be modified for either profiling or interprocedural optimization). Furthermore, they do not support optimization either at runtime or after an application has been installed at an end-user's site, when the most relevant information about actual usage patterns would be available. The LLVM Compilation Strategy is designed to enable effective multi-stage optimization (at compile-time, link-time, runtime, and offline) and more effective profile-driven optimization, and to do so without changes to the traditional build process or programmer intervention. LLVM (Low Level Virtual Machine) is a compilation strategy that uses a low-level virtual instruction set with rich type information as a common code representation for all phases of compilation.
  • Automatic Isolation of Compiler Errors
    Automatic Isolation of Compiler Errors. DAVID B. WHALLEY, Florida State University.
    This paper describes a tool called vpoiso that was developed to automatically isolate errors in the vpo compiler system. The two general types of compiler errors isolated by this tool are optimization and nonoptimization errors. When isolating optimization errors, vpoiso relies on the vpo optimizer to identify sequences of changes, referred to as transformations, that result in semantically equivalent code and to provide the ability to stop performing improving (or unnecessary) transformations after a specified number have been performed. A compilation of a typical program by vpo often results in thousands of improving transformations being performed. The vpoiso tool can automatically isolate the first improving transformation that causes incorrect output of the execution of the compiled program by using a binary search that varies the number of improving transformations performed. Not only is the illegal transformation automatically isolated, but vpoiso also identifies the location and instant the transformation is performed in vpo. Nonoptimization errors occur from problems in the front end, code generator, and necessary transformations in the optimizer. If another compiler is available that can produce correct (but perhaps more inefficient) code, then vpoiso can isolate nonoptimization errors to a single function. Automatic isolation of compiler errors facilitates retargeting a compiler to a new machine, maintenance of the compiler, and supporting experimentation with new optimizations.
    General Terms: Compilers, Testing. Additional Key Words and Phrases: Diagnosis procedures, nonoptimization errors, optimization errors.
    1. INTRODUCTION
    To increase portability compilers are often split into two parts, a front end and a back end.
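The bisection idea described above (cap the number of improving transformations, then binary-search for the first one that breaks the program) can be sketched in a few lines of C. The two helper routines below are simulated stand-ins for invoking the compiler with a transformation cap and running the resulting executable; they are not part of vpo or vpoiso, and the simulated "first bad" index is invented for the demo.

#include <stdbool.h>
#include <stdio.h>

static const int simulated_first_bad = 1234;   /* assumed for the demo */

/* Stand-in for rerunning the compiler with at most this many
 * improving transformations enabled. */
static void compile_with_limit(int max_transformations) {
    (void)max_transformations;
}

/* Stand-in for executing the compiled test program and comparing its
 * output against the expected output. */
static bool runs_correctly(int max_transformations) {
    return max_transformations < simulated_first_bad;
}

/* Returns the index of the first improving transformation whose application
 * makes the program produce wrong output, assuming the program is correct
 * with 0 transformations and wrong with `total` transformations. */
static int isolate_first_bad_transformation(int total) {
    int lo = 0, hi = total;        /* invariant: correct at lo, wrong at hi */
    while (hi - lo > 1) {
        int mid = lo + (hi - lo) / 2;
        compile_with_limit(mid);
        if (runs_correctly(mid))
            lo = mid;
        else
            hi = mid;
    }
    return hi;
}

int main(void) {
    printf("first bad transformation: %d\n",
           isolate_first_bad_transformation(20000));
    return 0;
}

With thousands of transformations per compilation, this search needs only on the order of a dozen recompile-and-run cycles to pinpoint the faulty one.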
  • Automatic Derivation of Compiler Machine Descriptions
    Automatic Derivation of Compiler Machine Descriptions. CHRISTIAN S. COLLBERG, University of Arizona.
    We describe a method designed to significantly reduce the effort required to retarget a compiler to a new architecture, while at the same time producing fast and effective compilers. The basic idea is to use the native C compiler at compiler construction time to discover architectural features of the new architecture. From this information a formal machine description is produced. Given this machine description, a native code-generator can be generated by a back-end generator such as BEG or burg. A prototype automatic Architecture Discovery Tool (called ADT) has been implemented. This tool is completely automatic and requires minimal input from the user. Given the Internet address of the target machine and the command-lines by which the native C compiler, assembler, and linker are invoked, ADT will generate a BEG machine specification containing the register set, addressing modes, instruction set, and instruction timings for the architecture. The current version of ADT is general enough to produce machine descriptions for the integer instruction sets of common RISC and CISC architectures such as the Sun SPARC, Digital Alpha, MIPS, DEC VAX, and Intel x86.
    Categories and Subject Descriptors: D.2.7 [Software Engineering]: Distribution, Maintenance, and Enhancement—Portability; D.3.2 [Programming Languages]: Language Classifications—Macro and assembly languages; D.3.4 [Programming Languages]: Processors—Translator writing systems and compiler generators. General Terms: Languages. Additional Key Words and Phrases: Back-end generators, compiler configuration scripts, retargeting.
    1. INTRODUCTION
    An important aspect of a compiler implementation is its retargetability. For example, a new programming language whose compiler can be quickly retargeted to a new hardware platform or a new operating system is more likely to gain widespread acceptance than a language whose compiler requires extensive retargeting effort.
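The discovery step the abstract describes amounts to compiling small, carefully chosen C fragments with the native compiler and inspecting the assembly it emits. A minimal C sketch of that probing loop is shown below; the probe fragment, the file names and the "cc -S" command line are assumptions chosen for illustration, not ADT's actual probes or output format.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Write a tiny probe whose generated code reveals, for example,
     * which instruction the target uses for integer addition. */
    FILE *f = fopen("probe.c", "w");
    if (!f) return 1;
    fprintf(f, "int probe(int a, int b) { return a + b; }\n");
    fclose(f);

    /* Ask the native C compiler for assembly (-S is the usual flag
     * for cc-style drivers). */
    if (system("cc -S -O1 probe.c -o probe.s") != 0)
        return 1;

    /* A real discovery tool would parse probe.s to extract the mnemonic,
     * operand order and addressing modes; the sketch just prints it. */
    f = fopen("probe.s", "r");
    if (!f) return 1;
    for (int c; (c = fgetc(f)) != EOF; )
        putchar(c);
    fclose(f);
    return 0;
}

Repeating this for many fragments, and timing their execution on the target, yields the register set, addressing modes and instruction timings that a back-end generator needs.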
  • Building Linux Applications Using RVDS 3.0 and the GNU Tools and Libraries
    Application Note 150: Building Linux Applications Using RVDS 3.0 and the GNU Tools and Libraries. Released on: September, 2006. DAI0150B. Copyright © 2005-2006. All rights reserved.
    Release Information. The following changes have been made to this application note:
    • April 2006, Issue A: First release for RVDS 3.0
    • September 2006, Issue B: Second release for RVDS 3.0 SP1
    Proprietary Notice. Words and logos marked with ® and ™ are registered trademarks owned by ARM Limited, except as otherwise stated below in this proprietary notice. Other brands and names mentioned herein may be the trademarks of their respective owners. Neither the whole nor any part of the information contained in, or the product described in, this document may be adapted or reproduced in any material form except with the prior written permission of the copyright holder. The product described in this document is subject to continuous developments and improvements. All particulars of the product and its use contained in this document are given by ARM in good faith. However, all warranties implied or expressed, including but not limited to implied warranties of merchantability, or fitness for purpose, are excluded. This document is intended only to assist the reader in the use of the product. ARM Limited shall not be liable for any loss or damage arising from the use of any information in this document, or any error or omission in such information, or any incorrect use of the product.
  • TotalProf: A Fast and Accurate Retargetable Source Code Profiler
    TotalProf: A Fast and Accurate Retargetable Source Code Profiler. Lei Gao, Jia Huang, Jianjiang Ceng, Rainer Leupers, Gerd Ascheid, and Heinrich Meyr. Institute for Integrated Signal Processing Systems, RWTH Aachen University, Germany. {gao,leupers}@iss.rwth-aachen.de
    ABSTRACT
    Profilers play an important role in software/hardware design, optimization, and verification. Various approaches have been proposed to implement profilers. The most widespread approach adopted in the embedded domain is Instruction Set Simulation (ISS) based profiling, which provides uncompromised accuracy but limited execution speed. Source code profilers, on the contrary, are fast but less accurate. This paper introduces TotalProf, a fast and accurate source code cross profiler that estimates the performance of an application from three aspects: First, code optimization and a novel virtual compiler backend are employed to resemble the course of target compilation. Second, an optimistic static scheduler is introduced to estimate the behavior of the target processor's datapath.
    1. INTRODUCTION
    Profiling is deemed of pivotal importance for embedded system design. On the one hand, profiles obtained from realistic workloads are indispensable in application, processor, and compiler design. On the other hand, profile-based program optimization (e.g., [5]) and parallelization (e.g., [7]) play an increasingly important role in harnessing the processing power of embedded processors. However, current profiling techniques face two major challenges: First, the stringent time-to-market pressure and the increasing complexity of applications and systems require profiling tools to be fast and quickly available, so that they can be employed as early as possible in the evaluation of design variants. Second, the accuracy of profiles is imperative to capture the
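Source-level cross profiling in general works by running the program natively on the host while charging each executed construct a cost estimated for the target machine; TotalProf automates and refines this with its virtual compiler backend and static scheduler. The C sketch below shows only the basic principle with hand-inserted counters; the per-block cycle costs are invented for illustration and are not TotalProf's cost model.

#include <stdio.h>

static unsigned long long estimated_target_cycles = 0;
#define BLOCK_COST(cycles) (estimated_target_cycles += (cycles))

int sum(const int *a, int n) {
    int s = 0;
    BLOCK_COST(3);                 /* function prologue: assumed 3 target cycles */
    for (int i = 0; i < n; i++) {
        BLOCK_COST(4);             /* loop body: assumed 4 target cycles */
        s += a[i];
    }
    BLOCK_COST(2);                 /* epilogue: assumed 2 target cycles */
    return s;
}

int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("sum=%d, estimated target cycles=%llu\n",
           sum(a, 8), estimated_target_cycles);
    return 0;
}

The program runs at host speed, which is what makes this class of profiler fast; the accuracy question is entirely about how well the per-block costs reflect the target's compiler and datapath.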
  • Retargeting a C Compiler for a DSP Processor
    Retargeting a C Compiler for a DSP Processor. Master thesis in electronics systems at Linköping Institute of Technology, by Henrik Antelius. LiTH-ISY-EX-3595-2004, Linköping, 2004-10-05.
    Supervisors: Thomas Johansson, Ulrik Lindblad, Patrik Thalin. Examiner: Kent Palmkvist.
    Division, Department: Institutionen för systemteknik, 581 83 Linköping. Language: English. Report category: Examensarbete (master thesis). ISRN: LITH-ISY-EX-3595-2004. URL for electronic version: http://www.ep.liu.se/exjobb/isy/2004/3595/
    Swedish title: Anpassning av en C-kompilator för kodgenerering till en DSP-processor.
    Abstract: The purpose of this thesis is to retarget a C compiler for a DSP processor. Developing a new compiler from scratch is a major task. Instead, modifying an existing compiler so that it generates code for another target is a common way to develop compilers for new processors. This is called retargeting. This thesis describes how this was done with the LCC C compiler for the Motorola DSP56002 processor.
    Keywords: retarget, compiler, LCC, DSP
  • Design and Implementation of a Tricore Backend for the LLVM Compiler Framework
    Design and Implementation of a TriCore Backend for the LLVM Compiler Framework. Student research project (Studienarbeit) in computer science, submitted by Christoph Erhardt, born 14 November 1984 in Kronach. Prepared at the Department of Computer Science, Chair of Computer Science 4 (Distributed Systems and Operating Systems), Friedrich-Alexander-Universität Erlangen-Nürnberg. Supervisors: Prof. Dr.-Ing. habil. Wolfgang Schröder-Preikschat and Dipl.-Inf. Fabian Scheler. Start of work: 1 December 2008; end of work: 1 September 2009.
    I hereby affirm that I produced this work independently and without using sources other than those indicated, that the work has not been submitted in the same or a similar form to any other examination authority and accepted by it as part of an examination, and that all passages taken verbatim or in substance from other works are identified as borrowings by citation of their source. Erlangen, 1 September 2009.
    Overview (Überblick): This thesis describes the development and implementation of a new backend for the LLVM compiler framework, to enable the generation of machine code for the TriCore processor architecture. The TriCore architecture is an advanced platform for embedded systems that combines the features of a microcontroller, a RISC CPU and a digital signal processor. It is used primarily in the automotive domain and in other real-time systems, and is employed for various research projects at the Chair of Distributed Systems and Operating Systems of the University of Erlangen-Nürnberg. LLVM is a modern compiler infrastructure whose design places particular emphasis on modularity and extensibility. For this reason, LLVM is increasingly being adopted in a wide range of projects, from code analysis tools and just-in-time compilers to complete general-purpose system compilers.
  • Modern Compiler Design 2E.Pdf
    Modern Compiler Design, Second Edition. Dick Grune, Kees van Reeuwijk, Henri E. Bal, Ceriel J.H. Jacobs, Koen Langendoen.
    Dick Grune, Vrije Universiteit, Amsterdam, The Netherlands; Kees van Reeuwijk, Vrije Universiteit, Amsterdam, The Netherlands; Henri E. Bal, Vrije Universiteit, Amsterdam, The Netherlands; Ceriel J.H. Jacobs, Vrije Universiteit, Amsterdam, The Netherlands; Koen Langendoen, Delft University of Technology, Delft, The Netherlands.
    Additional material to this book can be downloaded from http://extras.springer.com. ISBN 978-1-4614-4698-9; ISBN 978-1-4614-4699-6 (eBook); DOI 10.1007/978-1-4614-4699-6. Springer New York Heidelberg Dordrecht London. Library of Congress Control Number: 2012941168. © Springer Science+Business Media New York 2012.
    This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center.
  • Retargetable Compilers and Architecture Exploration for Embedded Processors
    Retargetable compilers and architecture exploration for embedded processors. R. Leupers, M. Hohenauer, J. Ceng, H. Scharwaechter, H. Meyr, G. Ascheid and G. Braun.
    Abstract: Retargetable compilers can generate assembly code for a variety of different target processor architectures. Owing to their use in the design of application-specific embedded processors, they bridge the gap between the traditionally separate disciplines of compiler construction and electronic design automation. In particular, they assist in architecture exploration for tailoring processors towards a certain application domain. The paper reviews the state-of-the-art in retargetable compilers for embedded processors. Based on some essential compiler background, several representative retargetable compiler systems are discussed, while also outlining their use in iterative, profiling-based architecture exploration. The LISATek C compiler is presented as a detailed case study, and promising areas of future work are proposed.
    1 Introduction
    Compilers translate high-level language source code into machine-specific assembly code. For this task, any compiler uses a model of the target processor. This model captures the compiler-relevant machine resources, including the instruction set, register files and instruction scheduling constraints. While in traditional target-specific compilers this model is built-in (i.e. it is hard-coded and probably distributed within the compiler source code), a retargetable [...] high-performance off-the-shelf processors from the desktop computer domain (together with their well developed compiler technology) impossible for many applications. As a consequence, hundreds of different domain or even application-specific programmable processors have appeared in the semiconductor market, and this trend is expected to continue. Prominent examples include low-cost/low-energy microcontrollers (e.g. for wireless sensor networks), number-crunching digital signal processors (e.g.
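The "model of the target processor" mentioned in the introduction above is, in essence, a data structure describing instructions, register files and scheduling constraints, which a retargetable compiler consumes instead of hard-coding them. A rough C sketch of what such a description might contain follows; the field names and the sample entries are illustrative assumptions, not the format used by LISATek, BEG or any other real system.

#include <stddef.h>
#include <stdio.h>

/* One target instruction as the selector/scheduler sees it. */
typedef struct {
    const char *mnemonic;      /* e.g. "add" */
    int         latency;       /* result latency in cycles */
    int         operands;      /* number of register operands */
} InstrDesc;

/* One register file (register class) of the target. */
typedef struct {
    const char *name;          /* e.g. "int" or "fp" */
    int         count;         /* number of registers */
    int         width_bits;
} RegFileDesc;

/* The compiler-relevant machine model as a whole. */
typedef struct {
    const char        *target;
    const InstrDesc   *instructions;
    size_t             num_instructions;
    const RegFileDesc *register_files;
    size_t             num_register_files;
    int                issue_width;   /* instruction scheduling constraint */
} MachineModel;

/* Illustrative entries only. */
static const InstrDesc   demo_insns[] = { { "add", 1, 3 }, { "mul", 3, 3 },
                                          { "ld",  2, 2 }, { "st",  1, 2 } };
static const RegFileDesc demo_regs[]  = { { "int", 32, 32 }, { "fp", 16, 64 } };

static const MachineModel demo_model = {
    "demo-risc", demo_insns, 4, demo_regs, 2, 1
};

int main(void) {
    printf("%s: %zu instructions, %zu register files, issue width %d\n",
           demo_model.target, demo_model.num_instructions,
           demo_model.num_register_files, demo_model.issue_width);
    return 0;
}

In architecture exploration, it is exactly this description that is edited between iterations: the tool flow regenerates the compiler from the changed model, recompiles and profiles the application, and feeds the results back into the next design variant.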
  • 15-411 Compiler Design: Lab 6 - LLVM Fall 2011
    15-411 Compiler Design: Lab 6 - LLVM, Fall 2011. Instructor: Andre Platzer. TAs: Josiah Boning and Ryan Pearl. Compilers due: 11:59pm, Sunday, December 6, 2011. Term paper due: 11:59pm, Thursday, December 8, 2011.
    1 Introduction. The main goal of the lab is to explore advanced aspects of compilation. This writeup describes the option of retargeting the compiler to generate LLVM code; other writeups detail the option of implementing garbage collection or optimizing the generated code. The language L4 does not change for this lab and remains the same as in Labs 4 and 5.
    2 Requirements. You are required to hand in two separate items: (1) the working compiler and runtime system, and (2) a term paper describing and critically evaluating your project.
    3 Testing. You are not required to hand in new tests. We will use a subset of the tests from the previous labs to test your compiler. However, if you wish to use LLVM to optimize your code, consult the handout for lab6opt for guidelines on how to test your compiler, and what information should be added to the term paper.
    4 Compilers. Your compilers should treat the language L4 as in Labs 4 and 5. You are required to support safe and unsafe memory semantics. Note that safe memory semantics is technically a valid implementation of unsafe memory semantics; therefore, if you have trouble getting the exception semantics of L4 working in a manner that corresponds directly to x86-64, use the safe semantics as a starting point, and try to remove as much of the overhead as you can.
  • Retargeting Open64 to a RISC Processor
    Retargeting Open64 to A RISC processor -- A Student's Perspective. Huimin Cui, Xiaobing Feng. Key Laboratory of Computer System and Architecture, Institute of Computing Technology, CAS, 100190 Beijing, China. {cuihm, fxb}@ict.ac.cn
    Abstract: This paper presents a student's experience in Open64-Mips prototype development; we summarize three retargeting observations. Open64 is easy to retarget and the procedure takes only a short period. With the retargeting procedure done, the compiler can achieve good and stable performance. Open64 also provides much support for debugging, with which a beginner can debug the compiler without difficulty. We also share some experiences of our retargeting effort, including the methodology of verifying the compiler framework, attention to switches in Open64, and the importance of debugging and reading generated code.
    1 Introduction
    Open64 receives contributions from a ... compiler [6]. Open64 has been retargeted to a number of architectures. Pathscale modified Open64 to create EkoPath, a compiler for the AMD64 and X8664 architecture. The University of Delaware's Computer Architecture and Parallel Systems Laboratory (CAPSL) modified Open64 to create the Kylin Compiler, a compiler for Intel's X-Scale architecture [1]. Besides the targets mentioned above, there are several other supported targets including PowerPC [6], NVISA [9], Simplight [10] and Qualcomm [11]. In this paper, we will discuss three issues when retargeting Open64: ease of retargeting, performance, and debuggability. The discussion is based on our work in retargeting Open64 to the MIPS platform, using the Simplight branch as our starting point, which is a RISC-style DSP with a mixed 32/16-bit instruction set. During our discussion, we will use GCC (MIPS target) for comparison.