
Context Threading: A flexible and efficient dispatch technique for virtual machine interpreters∗

Marc Berndl, Benjamin Vitale, Mathew Zaleski and Angela Demke Brown
University of Toronto, Systems Research Lab
{berndl,matz,bv,demke}@cs.toronto.edu

∗This research was supported by NSERC, IBM CAS and CITO.

Abstract

Direct-threaded interpreters use indirect branches to dispatch bytecodes, but deeply-pipelined architectures rely on branch prediction for performance. Due to the poor correlation between the virtual program's control flow and the hardware program counter, which we call the context problem, direct threading's indirect branches are poorly predicted by the hardware, limiting performance. Our dispatch technique, context threading, improves branch prediction and performance by aligning hardware and virtual machine state. Linear virtual instructions are dispatched with native calls and returns, aligning the hardware and virtual PC. Thus, sequential control flow is predicted by the hardware return stack. We convert virtual branching instructions to native branches, mobilizing the hardware's branch prediction resources. We evaluate the impact of context threading on both branch prediction and performance using interpreters for Java and OCaml on the Pentium and PowerPC architectures. On the Pentium IV, our technique reduces mean mispredicted branches by 95%. On the PowerPC, it reduces mean branch stall cycles by 75% for OCaml and 82% for Java. Due to reduced branch hazards, context threading reduces mean execution time by 25% for Java and by 19% and 37% for OCaml on the P4 and PPC970, respectively. We also combine context threading with a conservative inlining technique and find its performance comparable to that of selective inlining.

1 Introduction

Interpretation is a powerful tool for implementing programming language systems. It facilitates interactive program development and debugging, compact and portable deployment, and simple language prototyping. This combination of features makes interpreted languages attractive in many settings, but their applicability is constrained by poor performance compared to native code. Consequently, many important systems, such as Sun's HotSpot [1] and IBM's production Java virtual machine [21], run in mixed mode, compiling and executing some parts of a program while interpreting others. Baseline interpreter performance thus continues to be relevant.

Recently, Ertl and Gregg observed that the performance of otherwise efficient direct-threaded interpretation is limited by pipeline stalls and flushes due to extremely poor indirect branch prediction [8]. Modern pipelined architectures, such as the Pentium IV (P4) and the PowerPC (PPC), must keep their pipelines full to perform well. Hardware branch predictors use the native PC to exploit the highly-biased branches found in typical (native code) CPU workloads [3,12]. Direct-threaded virtual machine (VM) interpreters, however, are not typical workloads. Their branches' targets are unbiased and therefore unpredictable [8]. For an interpreted program, it is the virtual program counter (or vPC) that is correlated with control flow. We therefore propose to organize the interpreter so that the native PC correlates with the vPC, exposing virtual control flow to the hardware.

We introduce a technique based on subroutine threading, once popular in early interpreters for languages like Forth. To leverage return address stack prediction, we implement each virtual instruction as a subroutine which ends in a native return instruction. Note, however, that these subroutines are not full-fledged functions in the sense of a higher-level programming language such as C (no register save/restore, stack frame creation, etc.). When the instructions of a virtual program are loaded by the interpreter, we translate them to a sequence of call instructions, one per virtual instruction, whose targets are these subroutines. Virtual instructions are then dispatched simply by natively executing this sequence of calls.

The key to the effectiveness of this simple approach is that at dispatch time, the native PC is perfectly correlated with the virtual PC. Thus, for non-branching bytecodes, the return address stack in modern processors reliably predicts the address of the next bytecode to execute. Because the next dynamic instruction is not generally the next static instruction in the virtual program, branches pose a greater challenge. For these virtual instructions, we provide a limited form of specialized inlining, replacing indirect with relative branches, thus exposing virtual branches to the hardware's branch predictors.

We review techniques for virtual instruction dispatch in interpreters, describe their performance problems, and define the context problem in Section 2. Then, we discuss other work on improving dispatch performance in Section 3. We provide relevant details of our implementations in a Java virtual machine (SableVM) and the OCaml interpreter on the Pentium IV and PowerPC architectures in Section 4. Using an extensive suite of benchmarks for both Java and OCaml, we evaluate context threading in Section 5.
This paper makes the following contributions:

• We introduce a new dispatch technique for virtual machine interpreters that dramatically improves branch prediction, and demonstrate that our technique does not depend on a specific language or CPU architecture.

• We show context threading is effective. On both the P4 and the PPC970, it eliminates 25% of the mean elapsed time of Java benchmarks, with individual benchmarks running twice as fast as with direct threading. For OCaml, we achieve a 20% reduction in the mean execution time on the P4 and a 35% reduction on the PPC970, with some benchmarks achieving as much as 40% and 50%, respectively.

• We show that context threading is compatible with inlining, using a simple heuristic that we call Tiny Inlining. On OCaml, we achieve speedups relative to direct threading of at least 10% over context threading alone. On Java, we perform as well as or better than SableVM's implementation of selective inlining.

2 The Context Problem

An interpreter executes a virtual program by dispatching its virtual instructions in sequence. The current instruction is indicated by a virtual program counter, or vPC. The virtual program is a list of virtual instructions, each consisting of an opcode and zero or more operands. The exact representation depends on the dispatch technique used.

A typical switch-dispatched interpreter, implemented in C, is a for loop which fetches the opcode at vPC, and then executes a switch statement, each opcode being implemented by a separate case block (the opcode body, or body). However, switch dispatch is considerably slower than the state-of-the-art, direct-threaded dispatch [7].

As shown in Figure 1, a direct threaded interpreter represents virtual program instructions as a list of addresses. Each address points to the opcode body. We refer to this list as the Direct Threading Table, or DTT. The operands are also stored in this list, immediately after the corresponding opcode address. The vPC points into the DTT to indicate the instruction to be executed. The actual dispatch to the next instruction is accomplished by the "goto *vPC++" at the end of each opcode body, which is supported by GNU C's labels-as-values extension. In Figure 2, we show the assembly code corresponding to this dispatch statement for the Pentium IV and PowerPC architectures. The overhead of these instructions is comparable to the execution time of small bodies in some VMs.

[Figure 1. Direct Threaded VM Interpreter: the compiler produces bytecode (push 11; push 7; mul; print; done), which the loader translates into the DTT (&INST_PUSH, 11, &INST_PUSH, 7, &INST_MUL, &INST_PRINT, &INST_DONE); each opcode body (INST_PUSH, INST_MUL, INST_PRINT, INST_GOTO, INST_DONE) ends with "goto *vPC++".]

    mov eax = (r1)  ; r1 is vPC        lwz  r2 = 0(r1)
    addl 4,r1                          mtctr r2
    jmp *eax                           addi r1,r1,4
                                       bctr
    (a) Pentium IV assembly            (b) PowerPC assembly

    Figure 2. Direct Threaded Dispatch

When executing the indirect branch in Figure 2(a), the Pentium IV will speculatively dispatch instructions using a predicted target address. The PowerPC uses a different strategy for indirect branches, as shown in Figure 2(b). First the target address is loaded into a register, and then a branch is executed to this register address. Rather than speculate, the PowerPC stalls until the target address is known, although other instructions may be scheduled between the load and the branch to reduce or eliminate these stalls.

Stalling and incorrect speculation are serious pipeline hazards. To perform at full speed, modern CPUs need to keep their pipelines full by correctly predicting branch targets. Indirect branch predictors assume that the branch destination is highly correlated with the address of the indirect branch instruction itself. As observed by Ertl [8,9], this assumption is usually wrong for direct-threaded interpreter workloads. In a direct-threaded implementation, there is only one indirect jump instruction per virtual opcode implemented. For example, in Figure 1, there are two instances of INST_PUSH. In the context of vPC=0, the dispatch at the end of the INST_PUSH body results in a native indirect branch back to the start of the INST_PUSH body (since the next virtual instruction at vPC=2 is also an INST_PUSH). However, the target of the same native indirect branch in the context of vPC=2 is determined by the address stored

stack. On the other hand, Curley found subroutine threading faster on the 68000 [5]. On modern hardware the cost of the call and return is much lower, due to return branch prediction hardware, while the cost of direct threading has increased due to misprediction. In Section 5 we demonstrate this effect on several modern CPUs.

Superinstructions reduce the number of dispatches.
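The switch-dispatched loop described in Section 2 can be sketched as follows. This is a minimal illustration, not code from the paper; the opcode names and encoding are invented. Note that the compiler typically lowers the switch to a bounds check plus a single indirect jump through a jump table, so every dispatch funnels through one native indirect branch.

```c
/* Hedged sketch of switch dispatch (opcode names/encoding invented). */
enum { OP_PUSH, OP_MUL, OP_DONE };

long interp_switch(const long *prog) {
    long stack[16]; int sp = 0, vPC = 0;
    for (;;) {
        switch (prog[vPC++]) {                           /* fetch opcode at vPC */
        case OP_PUSH: stack[sp++] = prog[vPC++]; break;  /* inline operand */
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_DONE: return stack[sp - 1];
        }
    }
}
```

For the program push 11; push 7; mul; done, the loop leaves 77 on top of the virtual stack.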
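The direct-threaded scheme of Figure 1 can likewise be sketched in a few lines of GNU C. This is a simplified illustration (it requires GCC or Clang for the labels-as-values extension, and hardcodes the sample program rather than building the DTT in a loader); since vPC here has type void **, the dispatch is written goto **vPC++, corresponding to the "goto *vPC++" in the text. Each body ends in its own copy of this indirect branch, whose dynamic targets depend on the virtual program — the source of the context problem.

```c
/* Hedged sketch of direct-threaded dispatch using GNU C labels-as-values.
 * The DTT layout mirrors Figure 1: body addresses with operands inline. */
long interp_direct(void) {
    long stack[16]; int sp = 0;
    void *dtt[] = { &&INST_PUSH, (void *)11L,
                    &&INST_PUSH, (void *)7L,
                    &&INST_MUL,  &&INST_DONE };
    void **vPC = dtt;
    goto **vPC++;                        /* dispatch the first instruction */
INST_PUSH:
    stack[sp++] = (long)*vPC++;          /* fetch inline operand from DTT */
    goto **vPC++;                        /* this body's own indirect dispatch */
INST_MUL:
    sp--; stack[sp - 1] *= stack[sp];
    goto **vPC++;
INST_DONE:
    return stack[sp - 1];
}
```

Tracing the example shows the context problem concretely: the single dispatch branch at the end of INST_PUSH first jumps back to INST_PUSH (from vPC=0) and then to INST_MUL (from vPC=2), so a predictor keyed on the branch's native address cannot distinguish the two contexts.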