A General Compiler Framework for Speculative Optimizations Using Data Speculative Code Motion
Xiaoru Dai, Antonia Zhai, Wei-Chung Hsu, Pen-Chung Yew
Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455
{dai, zhai, hsu, yew}@cs.umn.edu

Abstract

Data speculative optimization refers to code transformations that allow load and store instructions to be moved across potentially dependent memory operations. Existing research on data speculative optimization has mainly focused on individual code transformations: the speculative analysis that identifies data speculative optimization opportunities and the recovery code generation that guarantees the correctness of their execution are handled separately for each optimization. This paper proposes a new compiler framework to facilitate the design and implementation of general data speculative optimizations such as dead store elimination, redundancy elimination, copy propagation, and code scheduling. The framework allows different data speculative optimizations to share (i) a speculative analysis mechanism that identifies data speculative optimization opportunities by hiding low-probability data dependences from the optimizations, and (ii) a recovery code generation mechanism that guarantees the correctness of the data speculative optimizations. The proposed recovery code generation is based on Data Speculative Code Motion (DSCM), which uses code motion to facilitate a desired transformation; based on the position of the moved instruction, recovery code can be generated accordingly. The framework greatly simplifies the task of incorporating data speculation into non-speculative optimizations by sharing the recovery code generation and the speculative analysis. We have implemented the proposed framework in the ORC 2.1 compiler and demonstrated its effectiveness on SPEC2000 benchmark programs.

1. Introduction

Imprecise data dependence information may decrease the effectiveness of compiler optimizations. However, obtaining precise data dependence analysis is both difficult and expensive for languages such as C, in which dynamic and pointer-based data structures are frequently used. When the data dependence analysis is unable to show that there is definitely no data dependence between two memory references, the compiler must assume that there is a data dependence between them. Quite often, such an assumption is overly conservative. The examples in Figure 1 illustrate how such conservative data dependences may affect compiler optimizations.

a) Example 1:

    S1:  = *q
    S2: *p = b
    S3:  = *q
    S4: *r = ...
    S5:  = *p
    S6: *r = ...

    dependence 1: possible true dependence between S2 and S3
    dependence 2: possible output dependence between S2 and S4
    dependence 3: possible true dependence between S4 and S5

b) Example 2:

    Left (original loop):       Right (after unrolling):
    while (p) {                 while (p) {
      S1: if (p->f == 0)          S1: if (p->f == 0)
            ...                         ...
      S2: p->f = 0;               S2: p->f = 0;
      ...                         ...
      p = p->n;                   p = p->n;
    }                             S3: if (p->f == 0)
                                        ...
                                  S4: p->f = 0;
                                  ...
                                  p = p->n;
                                }

Figure 1. Examples of compiler optimizations disabled by possible data dependences.

In Figure 1, lines represent possible data dependences between memory references. For the example in Figure 1(a), the possible true dependence between *p and *q (line 1) prevents redundancy elimination of *q in S3. The possible output dependence between *p and *r (line 2) inhibits copy propagation of *p in S5. The possible true dependence between *p and *r (line 3) disallows dead store elimination in S4. In this example, three compiler optimizations (redundancy elimination, copy propagation, and dead store elimination) are inhibited by possible data dependences.
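The effect of these blocked optimizations can be made concrete in C. The sketch below is our illustration, not the paper's code: the function names and the value b are placeholders, and the restrict qualifier in the second function stands in for "the pointers are known to be disjoint," which is what the speculative analysis bets on.

```c
/* Figure 1(a) written as a C function. Without alias information, the
 * compiler must assume p, q, and r may point to the same location,
 * which blocks all three optimizations named in the text. */
int conservative(int *p, int *q, int *r, int b) {
    int t1 = *q;   /* S1 */
    *p = b;        /* S2: may alias *q (dep. 1) and *r (dep. 2)          */
    int t2 = *q;   /* S3: redundant with S1 only if p and q never alias  */
    int s4 = t1;
    *r = s4;       /* S4: dead only if r and p never alias (dep. 3)      */
    int t3 = *p;   /* S5: equals b only if the pointers are disjoint     */
    *r = t3;       /* S6 */
    return t2 + *r;
}

/* What a speculative optimizer could emit when dependences 1-3 almost
 * never hold at run time. */
int speculative(int *restrict p, int *restrict q, int *restrict r, int b) {
    int t1 = *q;   /* S1 */
    *p = b;        /* S2 */
                   /* S3 eliminated: *q is redundant with S1       */
                   /* S4 eliminated: the store of t1 to *r is dead */
    *r = b;        /* S5/S6 with *p copy-propagated to b           */
    return t1 + *r;
}
```

With disjoint pointers the two versions compute identical results and final memory contents; the speculative version performs two loads and one store instead of three loads and three stores.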
Proceedings of the International Symposium on Code Generation and Optimization (CGO'05) 0-7695-2298-X/05 $ 20.00 IEEE

Without these possible dependences, those optimizations could have been performed by the compiler.

In the left column of Figure 1(b), another motivating example, built around a C while statement, is shown. In this example, the load of p->f in S1 may have a possible data dependence with the store of p->f in S2. After the loop is unrolled, the resulting code is shown in the right column of Figure 1(b); S1 and S2 are from the first iteration, and S3 and S4 are from the second iteration. The load of p->f in S3 cannot be scheduled ahead of the store in S2 because of the possible data dependence. If this data dependence rarely occurs at run time, it may be profitable to schedule the load in S3 before the store in S2 to hide the load latency. If the data dependence does occur, recovery code must be executed to guarantee correct results.

Getting precise data dependence information is difficult because it is hard for a compiler to know what memory locations a memory reference may access at run time. It is even more difficult when pointers are involved in the program. Therefore, using data speculation and runtime verification to overcome possible data dependences (with low probabilities) has been proposed recently in [11-16]. Here, data speculation refers to the execution of instructions that may potentially violate possible memory dependences, albeit infrequently.

Compiler optimizations are normally divided into two phases: the analysis phase and the code transformation phase. The analysis phase identifies optimization opportunities based on the internal representation (IR) and data dependence information. The code transformation phase modifies the IR to generate improved code. To support data speculation, we also need a recovery mechanism, using either hardware or software support, to guarantee the correctness of the speculative optimizations.

The work in [12] and [15] uses data speculation in code scheduling to generate more efficient code sequences. In [13][14][22], data speculation is used to enable speculative register allocation. These are all examples of specific speculative code optimizations.

In [11], Ju et al. proposed a unified compiler framework for control and data speculation in a code scheduler. There are three main tasks in their speculative code scheduler: marking speculative dependence edges, selecting speculative instructions as scheduling candidates, and check insertion with DAG update. These three tasks are integrated with the rest of the instruction scheduling phase.

In [16], a framework that augments SSA form to incorporate data speculative information (obtained either from alias profiling or from compiler heuristic rules) is proposed. Speculative partial redundancy elimination based on SSAPRE [5] is presented to exemplify the use of such a framework.

In both [11] and [16], the data speculative information is explicitly annotated, either through speculative dependence edges in the dependence graph [11] or through speculative weak updates in SSA form (i.e., the χ and µ operators in [16]). All optimizations that try to incorporate data speculation must therefore be modified and made aware of such explicitly annotated data speculative information. In [11], the construction of the dependence graph, the selection of scheduling candidates, and the DAG update are all modified to handle the speculative dependence edges. Recovery code generation is decoupled from the scheduling phase and works well only for code scheduling; it may not handle other optimizations directly. For example, the identification of speculative chains in their recovery code generation is not applicable to eliminating instructions that are speculatively redundant.
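The check-and-recover pattern that all of these proposals depend on can be sketched in plain C. The explicit pointer comparison below is a software stand-in for the runtime verification mechanism these schemes assume (for example, the ALAT check behind IA-64 advanced loads); the function and variable names are our own, not the paper's:

```c
/* Non-speculative order: the load of *q must stay after the store,
 * because p and q may alias. */
int original_order(int *p, int *q, int v) {
    *p = v;            /* possibly-dependent store      */
    return *q;         /* load we would like to hoist   */
}

/* Data-speculative order: the load is hoisted above the store to hide
 * its latency, and a check-and-recover sequence repairs the result in
 * the (assumed rare) case that the dependence actually holds. */
int speculative_order(int *p, int *q, int v) {
    int t = *q;        /* speculatively hoisted load            */
    *p = v;            /* the store it was moved across         */
    if (p == q)        /* runtime verification: did they alias? */
        t = *q;        /* recovery code: redo the load          */
    return t;
}
```

When p and q differ, the hoisted load and the store can overlap in execution; when they alias, the recovery path restores the behavior of original_order at the cost of one extra load.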
In [16], the construction of SSA form, the Φ-insertion step, the rename step, and the code motion step in SSAPRE all need to be modified to identify speculative optimization opportunities and to generate recovery code. In both [11] and [16], the accommodation of data speculative information in the optimizations and the recovery code generation have to be tailored to each specific compiler optimization; they cannot be shared among optimizations. Such existing frameworks are difficult to adopt, to extend, and to maintain.

Figure 2. Structure of our proposed data speculative optimizations. (A shared Speculative Data Dependence Analysis phase feeds the analysis phases of optimizations 1 through n; a shared Data Speculative Code Motion phase sits between those analysis phases and the code transformation phases of optimizations 1 through n.)

In our framework, as shown in Figure 2, the data speculative information is integrated into a shared Speculative Data Dependence Analysis (SDDA) phase by hiding low-probability data dependences from the optimizations. Hence, more optimization opportunities can be exposed for existing optimizations without requiring the modifications needed in [11] and [16] to accommodate such information. When an optimization opportunity identified in the analysis phase of an optimization is data speculative, a shared mechanism is provided to generate the recovery code. The proposed recovery code generation is based on Data Speculative Code Motion (DSCM), which uses a code motion model to determine whether a transformation is data speculative and to generate the necessary recovery code.

Our framework has two advantages. First, SDDA and DSCM are shared by all optimizations. Second, the existing non-speculative optimizations need no modifications.

In the SDDA, we assume that there is no data dependence between two memory references unless we can prove that it is very likely, or most definite, that those two memory references will access the same memory locations. Any data dependence with a low probability is treated as no data dependence in the speculative optimizations. As it turns out, the probability distribution of most data dependences is strongly bimodal: a dependence is either very likely or not likely at all [20]. Using