Correctness Testing of Loop Optimizations in C and C++ Compilers
Recommended publications
Loop Pipelining with Resource and Timing Constraints
UNIVERSITAT POLITÈCNICA DE CATALUNYA. LOOP PIPELINING WITH RESOURCE AND TIMING CONSTRAINTS. Author: Fermín Sánchez. October, 1995.

2. SOFTWARE PIPELINING. 2.1 INTRODUCTION. Software pipelining [Cha81] comprises a family of techniques aimed at finding a pipelined schedule of the execution of loop iterations. The pipelined schedule represents a new loop body which may contain instructions belonging to different iterations. The sequential execution of the schedule takes less time than the sequential execution of the iterations of the loop as they were initially written. In general, a pipelined loop schedule has the following characteristics (assuming the schedule contains a unique iteration of the loop):
• All the iterations (of the new loop) are executed in the same fashion.
• The initiation interval (II) between the issuing of two consecutive iterations is always the same.

Figure 2.1 shows an example of software pipelining a loop. The DDG representing the loop body to pipeline is presented in Figure 2.1(a). The loop must be executed ten times (with an iteration index i in [0, 9]). Let us assume that all instructions in the loop are executed in a single cycle and a new iteration may start every cycle (otherwise the dependence between instruction A from iterations i and i+1 would not be honored). With this assumption, the loop can be executed in a more parallel fashion than the sequential one, as shown in Figure 2.1(b).

[Figure 2.1: Software pipelining a loop. (a) DDG representing a loop body; (b) parallel execution of the loop, showing prologue, steady state, and epilogue; (c) new parallel loop body.]
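A minimal C++ sketch of what such a pipelined schedule looks like at the source level. The three single-cycle stages A, B, C and the ten iterations mirror the figure's setup, but the stage bodies are invented for illustration; this is not code from the thesis.

    #include <cstdio>

    constexpr int N = 10;                 // iteration index i in [0, 9]

    // Three dependent "single-cycle" stages of one original iteration.
    static int A(int i) { return i * 2; }     // stage A, e.g. a load
    static int B(int a) { return a + 1; }     // stage B, depends on A
    static int C(int b) { return b * b; }     // stage C, depends on B

    int main() {
        int out[N];

        // Prologue: partially start the first two iterations.
        int b = B(A(0));                  // iteration 0 has finished A and B
        int a = A(1);                     // iteration 1 has finished A

        // Steady state: the new loop body issues stage C of iteration i,
        // stage B of iteration i+1, and stage A of iteration i+2, so one
        // iteration completes per "cycle" (initiation interval II = 1).
        for (int i = 0; i < N - 2; ++i) {
            out[i] = C(b);
            b = B(a);
            a = A(i + 2);
        }

        // Epilogue: drain the two iterations still in flight.
        out[N - 2] = C(b);
        out[N - 1] = C(B(a));

        for (int i = 0; i < N; ++i) printf("out[%d] = %d\n", i, out[i]);
    }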
CSE 582 – Compilers
CSE P 501 – Compilers: Optimizing Transformations. Hal Perkins, Autumn 2011. © 2002-11 Hal Perkins & UW CSE.

Agenda: a sampler of typical optimizing transformations; mostly a teaser for later, particularly once we've looked at analyzing loops.

Role of Transformations. Data-flow analysis discovers opportunities for code improvement; the compiler must then rewrite the code (IR) to realize these improvements. A transformation may reveal additional opportunities for further analysis and transformation, but it may also block opportunities by obscuring information.

Organizing Transformations in a Compiler. Typically the middle end consists of many individual transformations that filter the IR and produce rewritten IR. There is no formal theory for the order in which to apply them, only rules of thumb and best practices. Some transformations can be profitably applied repeatedly, particularly if other transformations expose more opportunities.

A Taxonomy. Machine-independent transformations: realized profitability may actually depend on the machine architecture, but they are typically implemented without considering it. Machine-dependent transformations: most of the machine-dependent code is in instruction selection & scheduling and register allocation, but some machine-dependent code belongs in the optimizer.

Machine-Independent Transformations include dead code elimination, code motion, specialization, and strength reduction (a sketch of the last follows below).
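As a concrete instance of the last item, here is a strength-reduction sketch in C++. The function and the constant are mine for illustration; the transformation itself, replacing a loop-carried multiply with a running add, is the standard one the slides name.

    // Fills a[i] = i * 12 for all i without a per-iteration multiply.
    void fill(int* a, int n) {
        // Before strength reduction:
        //   for (int i = 0; i < n; ++i) a[i] = i * 12;
        // After: the induction expression i * 12 is maintained incrementally.
        for (int i = 0, t = 0; i < n; ++i, t += 12)
            a[i] = t;                     // invariant: t == i * 12
    }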
Redundancy Elimination: Common Subexpression Elimination
Redundancy Elimination. Aim: eliminate redundant operations in dynamic execution. Why do they occur?
• Loop-invariant code (e.g., a constant assignment inside a loop)
• The same expression computed more than once (e.g., addressing computations)
• Value numbering is an example
• Requires dataflow analysis
Other optimizations in this family: common subexpression elimination, loop-invariant code motion, partial redundancy elimination.

Common Subexpression Elimination. Replace recomputation of an expression by use of a temp which holds the value. Ex:

    (s1) y := a + b          (s1)  temp := a + b
    (s2) z := a + b    =>    (s1') y := temp
                             (s2)  z := temp

When could this be illegal? How does it differ from value numbering? Ex:

    (s1) read(i)
    (s2) j := i + 1
    (s3) k := i
    (s4) l := k + 1

Here i + 1 and k + 1 are lexically different, so CSE finds nothing; but since k holds the same value as i, value numbering assigns i + 1 and k + 1 the same value number. Why is the temp needed at all? (Because the original destination, e.g. y, may be redefined before the expression's next use.)

Local and Global. Local CSE works within a basic block. Ex:

    (s1) c := a + b          (s1)  t1 := a + b
    (s2) d := m & n          (s1') c := t1
    (s3) e := a + b    =>    (s2)  d := m & n
    (s4) m := 5              (s3)  e := t1
    (s5) if (m & n) ...      (s4)  m := 5
                             (s5)  if (m & n) ...

The original has 5 instructions, 4 operations, 7 variables; the result has 6 instructions, 3 operations, 8 variables. Always better? Method: keep track of the expressions computed in the block whose operands have not changed value, e.g. in a CSE hash table holding (+, a, b) and (&, m, n); when s4 redefines m, the (&, m, n) entry is invalidated, which is why the m & n in s5 is not replaced.

Global CSE example (assumes b is used later):

    i := j          i := j
    a := 4*i        t := 4*i
    i := i + 1      i := i + 1
    b := 4*i        t := 4*i
                    b := t
    c := 4*i        c := t

Global CSE: an expression e is available at entry to B if, on every path p from Entry to B, there is an evaluation of e at some B' on p whose operands are not redefined between B' and B.
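Since the excerpt contrasts CSE with value numbering, here is a small self-contained C++ sketch of local value numbering over one basic block. The Instr representation and the printed output are mine; the algorithm is the textbook one: hash (operator, value numbers of operands) and reuse the variable that already holds the value.

    #include <iostream>
    #include <map>
    #include <string>
    #include <tuple>
    #include <vector>

    struct Instr { std::string dst, op, lhs, rhs; };   // dst := lhs op rhs

    int main() {
        std::vector<Instr> block = {
            {"y", "+", "a", "b"},
            {"z", "+", "a", "b"},          // redundant with the line above
        };
        std::map<std::string, int> varVN;                        // var -> value number
        std::map<std::tuple<std::string, int, int>, int> exprVN; // (op, vn, vn) -> vn
        std::map<int, std::string> home;                         // vn -> variable holding it
        int next = 0;
        auto vn = [&](const std::string& v) {
            if (!varVN.count(v)) varVN[v] = next++;              // fresh operand
            return varVN[v];
        };
        for (const auto& ins : block) {
            auto key = std::make_tuple(ins.op, vn(ins.lhs), vn(ins.rhs));
            if (exprVN.count(key)) {                             // value already computed
                varVN[ins.dst] = exprVN[key];
                std::cout << ins.dst << " := " << home[exprVN[key]] << "\n";
            } else {                                             // first occurrence
                int v = next++;
                exprVN[key] = v; varVN[ins.dst] = v; home[v] = ins.dst;
                std::cout << ins.dst << " := " << ins.lhs << " "
                          << ins.op << " " << ins.rhs << "\n";
            }
        }
    }

This prints "y := a + b" then "z := y", the temp-based rewrite from the slide. A fuller version would also give a copy like k := i the same value number as i, catching the i + 1 / k + 1 case from the excerpt.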
Polyhedral Compilation As a Design Pattern for Compiler Construction
PLISS, May 19–24, 2019. [email protected]

Polyhedra? Example: tiles. (Slide aside: how many of you read "Design Patterns"?)

Tiles everywhere:
1. Hardware. Example: Google Cloud TPU (architectural scalability with tiling); Google Edge TPU (the edge-computing zoo).
2. Data layout. Example: the XLA compiler's tiled data layout, with repeated/hierarchical tiling, e.g. BF16 (bfloat16) on Cloud TPU (should be 8x128, then 2x1).
3. Control flow.
4. Data flow.
5. Data parallelism. Example: Halide for image-processing pipelines (https://halide-lang.org), a meta-programming API and domain-specific language (DSL) for loop transformations and numerical computing kernels.

Tiling in Halide. Tiled schedule: strip-mine (a.k.a. split), then permute (a.k.a. reorder). Vectorized schedule: strip-mine, then vectorize the inner loop. Non-divisible bounds/extent: strip-mine plus shift left/up plus redundant computation (or forward substitute/inline the operand).

TVM example: scan cell (RNN); see also TVM for neural networks, https://tvm.ai:

    import tvm                 # TVM 0.x API, as used on the original slide
    num_thread = 64            # CUDA block size; not defined in the excerpt

    m = tvm.var("m")
    n = tvm.var("n")
    X = tvm.placeholder((m, n), name="X")
    s_state = tvm.placeholder((m, n))
    s_init = tvm.compute((1, n), lambda _, i: X[0, i])
    s_do = tvm.compute((m, n), lambda t, i: s_state[t - 1, i] + X[t, i])
    s_scan = tvm.scan(s_init, s_do, s_state, inputs=[X])
    s = tvm.create_schedule(s_scan.op)

    # Schedule to run the scan cell on a CUDA device
    block_x = tvm.thread_axis("blockIdx.x")
    thread_x = tvm.thread_axis("threadIdx.x")
    xo, xi = s[s_init].split(s_init.op.axis[1], factor=num_thread)
    s[s_init].bind(xo, block_x)
    s[s_init].bind(xi, thread_x)
    xo, xi = s[s_do].split(s_do.op.axis[1], factor=num_thread)
    s[s_do].bind(xo, block_x)
    s[s_do].bind(xi, thread_x)
    print(tvm.lower(s, [X, s_scan], simple_mode=True))

Tiling and beyond.
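To make "strip-mine and permute" concrete outside of a scheduling DSL, here is a plain C++ sketch of the same tiling pattern applied by hand to a matrix transpose. The sizes and names are mine, not from the slides, and N is assumed divisible by T.

    #include <vector>

    constexpr int N = 1024, T = 32;   // problem size and tile size (N % T == 0)

    // Untiled form: for (i) for (j) b[j*N + i] = a[i*N + j];
    // Tiled form: strip-mine both loops into (ii, i) and (jj, j), then permute
    // so the tile loops (ii, jj) are outermost; each T x T tile stays in cache.
    void transpose_tiled(const std::vector<float>& a, std::vector<float>& b) {
        for (int ii = 0; ii < N; ii += T)
            for (int jj = 0; jj < N; jj += T)
                for (int i = ii; i < ii + T; ++i)
                    for (int j = jj; j < jj + T; ++j)
                        b[j * N + i] = a[i * N + j];
    }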
C++ Programming: Program Design Including Data Structures, Fifth Edition
C++ Programming: From Problem Analysis to Program Design, Fifth Edition. Chapter 5: Control Structures II (Repetition).

Objectives. In this chapter, you will:
• Learn about repetition (looping) control structures
• Explore how to construct and use count-controlled, sentinel-controlled, flag-controlled, and EOF-controlled repetition structures
• Examine break and continue statements
• Discover how to form and use nested control structures
• Learn how to avoid bugs by avoiding patches
• Learn how to debug loops

Why Is Repetition Needed? Repetition allows you to use variables efficiently: you can input, add, and average multiple numbers using a limited number of variables. For example, to add five numbers you could declare a variable for each number, input the numbers, and add the variables together; or you could create a loop that reads a number into a variable and adds it to a variable that contains the sum of the numbers.

while Looping (Repetition) Structure. The general form of the while statement is

    while (expression)
        statement

where while is a reserved word, the parentheses are part of the syntax, the expression acts as a decision maker and is usually a logical expression, and the statement, which can be simple or compound, is called the body of the loop.
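A short C++ sketch of one of the listed forms, a sentinel-controlled while loop, following the general form above (the sentinel value and the averaging task are mine, not the book's):

    #include <iostream>

    int main() {
        const int SENTINEL = -1;          // designated value that ends input
        int sum = 0, count = 0, number;

        std::cin >> number;               // priming read before the test
        while (number != SENTINEL) {      // expression: the decision maker
            sum += number;                // body: accumulate the running sum
            ++count;
            std::cin >> number;           // read the next value
        }
        if (count > 0)
            std::cout << "average = "
                      << static_cast<double>(sum) / count << "\n";
    }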
Compiler Construction Assignment 3 – Spring 2018
Compiler Construction Assignment 3 – Spring 2018. Robert van Engelen. µc for the JVM.

µc (micro-C) is a small C-inspired programming language. In this assignment we will implement a compiler in C++ for µc. The compiler compiles µc programs to Java class files for execution with the Java virtual machine. To implement the compiler, we can reuse the same concepts in the code-generation parts that were done in programming assignment 1 and reuse parts of the lexical analyzer you implemented in programming assignment 2. We will implement a new parser based on Yacc/Bison. This new parser utilizes translation schemes defined in Yacc grammars to emit Java bytecode. In the next programming assignment (the last assignment following this assignment) we will further extend the capabilities of our µc compiler by adding static semantics such as data types, apply type checking, and implement scoping rules for functions and blocks.

Download. Download the Pr3.zip file from http://www.cs.fsu.edu/~engelen/courses/COP5621/Pr3.zip. After unzipping you will get the following files:

    Makefile      A makefile
    bytecode.c    The bytecode emitter (same as Pr1)
    bytecode.h    The bytecode definitions (same as Pr1)
    error.c       Error reporter
    global.h      Global definitions
    init.c        Symbol table initialization
    javaclass.c   Java class file operations (same as Pr1)
    javaclass.h   Java class file definitions (same as Pr1)
    mycc.l *)     Lex specification
    mycc.y *)     Yacc specification and main program
    symbol.c *)   Symbol table operations
    test#.uc      A number of µc test programs

The files marked *) are incomplete. For this assignment you are required to complete these files.
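As a sketch of what "translation schemes defined in Yacc grammars to emit Java bytecode" can look like, here is a hypothetical Yacc fragment. The emit()/emit2() helpers and the OP_* opcode names are stand-ins; the real interface is whatever bytecode.h from Pr1 defines, while iadd/imul/sipush are the actual JVM opcodes being targeted.

    /* precedence declarations (%left '+', %left '*', ...) omitted for brevity */
    expr : expr '+' expr   { emit(OP_IADD); }        /* operands already on JVM stack */
         | expr '*' expr   { emit(OP_IMUL); }
         | '(' expr ')'
         | NUM             { emit2(OP_SIPUSH, $1); } /* push the integer constant */
         ;

Because the actions run at reduce time, the bytecode for sub-expressions is emitted before the operator, which matches the JVM's stack discipline.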
Python for Everybody: Exploring Data Using Python 3
Charles R. Severance. (From Chapter 5, through Section 5.7, Loop patterns.)

In Python terms, the variable friends is a list of three strings and the for loop goes through the list and executes the body once for each of the three strings in the list, resulting in this output:

    Happy New Year: Joseph
    Happy New Year: Glenn
    Happy New Year: Sally
    Done!

Translating this for loop to English is not as direct as the while, but if you think of friends as a set, it goes like this: "Run the statements in the body of the for loop once for each friend in the set named friends." Looking at the for loop, for and in are reserved Python keywords, and friend and friends are variables.

    for friend in friends:
        print('Happy New Year:', friend)

In particular, friend is the iteration variable for the for loop. The variable friend changes for each iteration of the loop and controls when the for loop completes. The iteration variable steps successively through the three strings stored in the friends variable.

5.7 Loop patterns. Often we use a for or while loop to go through a list of items or the contents of a file, looking for something such as the largest or smallest value in the data we scan through. These loops are generally constructed by:

• Initializing one or more variables before the loop starts
• Performing some computation on each item in the loop body, possibly changing the variables in the body of the loop
• Looking at the resulting variables when the loop completes

We will use a list of numbers to demonstrate the concepts and construction of these loop patterns. (A sketch of the pattern follows below.)
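Since this collection centers on C and C++, the same three-step pattern transcribes directly; a C++ sketch (the sample numbers are arbitrary):

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> numbers = {3, 41, 12, 9, 74, 15};

        int largest = numbers[0];         // initialize before the loop starts

        for (int value : numbers)         // compute on each item in the body
            if (value > largest)
                largest = value;

        std::cout << "Largest: " << largest << "\n";  // inspect the result
    }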
Research of Register Pressure Aware Loop Unrolling Optimizations for Compiler
MATEC Web of Conferences 228, 03008 (2018). https://doi.org/10.1051/matecconf/201822803008. CAS 2018.

Research of Register Pressure Aware Loop Unrolling Optimizations for Compiler. Xuehua Liu (1,2), Liping Ding (1,3), Yanfeng Li (1,2), Guangxuan Chen (1,4), Jin Du (5). 1. Laboratory of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, Beijing, China. 2. University of Chinese Academy of Sciences, Beijing, China. 3. Digital Forensics Lab, Institute of Software Application Technology, Guangzhou & Chinese Academy of Sciences, Guangzhou, China. 4. Zhejiang Police College, Hangzhou, China. 5. Yunnan Police College, Kunming, China.

Abstract. The register pressure problem has long been known to compiler writers because of the mismatch between the effectively infinite number of pseudo registers and the finite number of hard registers. Too heavy register pressure may result in register spilling, which in turn leads to performance degradation. Many optimizations in a compiler, especially loop optimizations, suffer from register spilling. In order to fight register pressure and therefore improve the effectiveness of the compiler, this research takes register pressure into account when performing the loop unrolling transformation. In addition, a register-pressure-aware transformation is able to reduce the performance overhead of some fine-grained randomization transformations that can be used to defend against ROP attacks. Experiments showed a peak improvement of about 3.6% and an average improvement of about 1% on the SPEC CPU 2006 benchmarks, and a peak improvement of about 3% and an average improvement of about 1% on the LINPACK benchmark.

1 Introduction. Register pressure is the number of hard registers needed to store the values of the pseudo registers at a given program point [1] during the compilation process.
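A generic C++ illustration (mine, not from the paper) of why unrolling raises register pressure: unrolling a reduction by four creates four live partial sums where the rolled loop kept one, so the unroll factor must be weighed against the number of hard registers available.

    float dot(const float* a, const float* b, int n) {
        float s0 = 0, s1 = 0, s2 = 0, s3 = 0;   // four live values -> more registers
        int i = 0;
        for (; i + 4 <= n; i += 4) {            // unrolled body: 4 iterations per trip
            s0 += a[i]     * b[i];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        for (; i < n; ++i)                      // remainder loop for n % 4 elements
            s0 += a[i] * b[i];
        return (s0 + s1) + (s2 + s3);
    }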
7. Control Flow
Copyright (C) R.A. van Engelen, FSU Department of Computer Science, 2000-2004.

7. Control Flow. Overview: expressions (evaluation order, assignments), structured and unstructured flow constructs, gotos, sequencing, selection, iteration and iterators, recursion, nondeterminacy. Note: study Chapter 6 of the textbook except Section 6.6.2.

Ordering Program Execution: What is Done First? Categories for specifying ordering in programming languages:
1. Sequencing: the execution of statements and evaluation of expressions is usually in the order in which they appear in a program text.
2. Selection (or alternation): a run-time condition determines the choice among two or more statements or expressions.
3. Iteration: a statement is repeated a number of times or until a run-time condition is met.
4. Procedural abstraction: subroutines encapsulate collections of statements, and subroutine calls can be treated as single statements.
5. Recursion: subroutines call themselves directly or indirectly to solve a problem, where the problem is typically defined in terms of simpler versions of itself.
6. Concurrency: two or more program fragments are executed in parallel, either on separate processors or interleaved on a single processor.
7. Nondeterminacy: the execution order among alternative constructs is deliberately left unspecified, indicating that any alternative will lead to a correct result.

Expression Syntax. An expression consists of an atomic object (e.g., a number or a variable), or an operator applied to a collection of operands (or arguments) which are themselves expressions. Common syntactic forms for operators include function call notation, e.g. f(a, b).

Expression Evaluation Ordering: Precedence and Associativity. The use of infix, prefix, and postfix notation leads to ambiguity as to what is an operand of what. Fortran example: a+b*c**d**e/f. The choice among the alternative evaluation orders is resolved by precedence and associativity rules.
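For the Fortran example, precedence (** binds tighter than * and /, which bind tighter than +) and associativity (** associates right-to-left, * and / left-to-right) fix the parse. A C++ rendering of that parse, using std::pow since C++ has no ** operator:

    #include <cmath>

    // a+b*c**d**e/f parses as: a + ((b * (c ** (d ** e))) / f)
    double parsed(double a, double b, double c,
                  double d, double e, double f) {
        return a + (b * std::pow(c, std::pow(d, e))) / f;
    }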
Repetition Structures
Repetition Structures (Chapter 6). Contents: 6.1 Introduction; goals; 6.2 Do While loops; 6.3 Interactive Do While loops; 6.4 For/Next loops; 6.5 Nested loops; 6.6 Exit-controlled loops; 6.7 Focus on program design and implementation: after-the-fact data validation and creating keyboard shortcuts; 6.8 Knowing about: programming costs; 6.9 Common programming errors and problems; 6.10 Chapter review.

The applications examined so far have illustrated the programming concepts involved in input, output, assignment, and selection capabilities. By this time you should have gained enough experience to be comfortable with these concepts and the mechanics of implementing them using Visual Basic. However, many problems require a repetition capability, in which the same calculation or sequence of instructions is repeated, over and over, using different sets of data. Examples of such repetition include continual checking of user data entries until an acceptable entry, such as a valid password, is made; counting and accumulating running totals; and recurring acceptance of input data and recalculation of output values that stop only upon entry of a designated value. This chapter explores the different methods that programmers use to construct repeating sections of code and how they can be implemented in Visual Basic. A repeated procedural section of code is commonly called a loop, because after the last statement in the code is executed, the program branches, or loops, back to the first statement and starts another repetition.
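The password-checking repetition described above is the classic post-test loop. A C++ sketch (the prompt and the expected value are invented); a Visual Basic Do...Loop Until plays the same role as C++'s do-while:

    #include <iostream>
    #include <string>

    int main() {
        std::string entry;
        do {                                   // body runs at least once
            std::cout << "Password: ";
            std::cin >> entry;
        } while (entry != "open-sesame");      // repeat until acceptable
        std::cout << "Accepted.\n";
    }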
ICS803 Elective – III Multicore Architecture Teacher Name: Ms
ICS803 Elective – III: Multicore Architecture. Teacher: Ms. Raksha Pandey. Course structure: L 3, T 1, P 0 (4 credits).

Course content:

Unit I, Multi-core architectures: introduction to multi-core architectures; issues involved in writing code for multi-core architectures; virtual memory, VM addressing, VA-to-PA translation, page faults, TLBs; parallel computers; instruction-level parallelism (ILP) vs. thread-level parallelism (TLP); performance issues; OpenMP and other message-passing libraries; threads, mutexes, etc.

Unit II, Multi-threaded architectures: brief introduction to the cache hierarchy; caches: addressing a cache, cache hierarchy, states of a cache line, inclusion policy, TLB access, memory-op latency, MLP, the memory wall, communication latency; shared-memory multiprocessors; general architectures and the problem of cache coherence; synchronization primitives: atomic primitives; locks: TTS, ticket, array; barriers: central and tree; performance implications in shared-memory programs; chip multiprocessors: why CMP (Moore's law, wire delay), shared-L2 vs. tiled CMP, core complexity, power/performance; snoopy coherence: invalidate vs. update, MSI, MESI, MOESI, MOSI, performance trade-offs, pipelined snoopy bus design; memory consistency models: SC, PC, TSO, PSO, WO/WC, RC; chip multiprocessor case studies: Intel Montecito and dual-core Pentium 4, IBM Power4, Sun Niagara.

Unit III, Compiler optimization issues: code optimizations: copy propagation, dead code elimination; loop optimizations: loop unrolling, induction-variable simplification, loop jamming, loop unswitching; techniques to improve detection of parallelism; scalar processors; spatial locality, temporal locality; vector machines; strip mining; the shared-memory model; SIMD architecture; dopar loops, dosingle loops.

Unit IV, Control flow analysis: control flow analysis, flow graphs, loops in flow graphs, loop detection, approaches to control flow analysis, reducible flow graphs, node splitting.
CS153: Compilers Lecture 19: Optimization
CS153: Compilers, Lecture 19: Optimization. Stephen Chong, Harvard University. https://www.seas.harvard.edu/courses/cs153. Contains content from lecture notes by Steve Zdancewic and Greg Morrisett.

Announcements: HW5 (Oat v.2) is out, due in 2 weeks. HW6 will be released next week: implementing optimizations! (and more).

Today: optimizations and safety; constant folding; algebraic simplification; strength reduction; constant propagation; copy propagation; dead code elimination; inlining and specialization; recursive function inlining; tail call elimination; common subexpression elimination.

Optimizations. The code generated by our OAT compiler so far is pretty inefficient: lots of redundant moves and lots of unnecessary arithmetic instructions. Consider this OAT program:

    int foo(int w) {
      var x = 3 + 5;
      var y = x * w;
      var z = y - 0;
      return z * 4;
    }

Unoptimized vs. optimized output. Hand-optimized, foo is just:

    _foo:
      shlq $5, %rdi
      movq %rdi, %rax
      ret

The compiler's unoptimized output, by contrast, opens a full stack frame and spills every temporary:

    .globl _foo
    _foo:
      pushl %ebp
      movl  %esp, %ebp
      subl  $64, %esp
    __fresh2:
      leal  -64(%ebp), %eax
      movl  %eax, -48(%ebp)
      movl  8(%ebp), %eax
      movl  %eax, %ecx
      movl  -48(%ebp), %eax
      movl  %ecx, (%eax)
      movl  $3, %eax
      movl  %eax, -44(%ebp)
      movl  $5, %eax
      movl  %eax, %ecx
      addl  %ecx, -44(%ebp)
      leal  -60(%ebp), %eax
      movl  %eax, -40(%ebp)
      movl  -44(%ebp), %eax
      movl  %eax, %ecx
      ...

Function foo may be inlined by the compiler, so it can be implemented by just one instruction!

Why do we need optimizations? To help programmers: they write modular, clean, high-level programs, and the compiler generates efficient, high-performance assembly. Programmers don't write optimal code, and high-level languages make avoiding redundant computation inconvenient or impossible (e.g. ...)
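Tracing the listed optimizations over the slides' own example shows where the single shift comes from (the step-by-step annotation is mine):

    int foo(int w) { var x = 3 + 5; var y = x * w; var z = y - 0; return z * 4; }
    // constant folding:          x = 8
    // constant propagation:      y = 8 * w
    // algebraic simplification:  z = y - 0  becomes  z = y
    // copy propagation:          return y * 4  becomes  return (8 * w) * 4
    // re-association + folding:  return 32 * w
    // strength reduction:        return w << 5, i.e. shlq $5, %rdi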