Dynamic Optimization of Micro-Operations

Brian Slechta, David Crowe, Brian Fahs, Michael Fertig, Gregory Muthler, Justin Quek, Francesco Spadini, Sanjay J. Patel, Steven S. Lumetta
Center for Reliable and High-Performance Computing
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign

Abstract

Inherent within complex instruction set architectures such as x86 are inefficiencies that do not exist in simpler ISAs. Modern x86 implementations decode instructions into one or more micro-operations in order to deal with the complexity of the ISA. Since these micro-operations are not visible to the compiler, the stream of micro-operations can contain redundancies even in statically optimized x86 code. Within a processor implementation, however, barriers at the ISA level do not apply, and these redundancies can be removed by optimizing the micro-operation stream.

In this paper, we explore the opportunities to optimize code at the micro-operation granularity. We execute these micro-operation optimizations using the rePLay Framework as a microarchitectural substrate. Using a simple set of seven optimizations, including two that aggressively and speculatively attempt to remove redundant load instructions, we examine the effects of dynamic optimization of micro-operations using a trace-driven simulation environment.

Simulation reveals that across a sampling of SPECint 2000 and real x86 applications, rePLay is able to reduce micro-operation count by 21% and, in particular, load micro-operation count by 22%. These reductions correspond to a boost in observed instruction-level parallelism on an 8-wide optimizing rePLay processor by 17% over a non-optimizing configuration.

1 Introduction

Complex instruction set architectures present significant design challenges for high-performance implementations. Variable-length instruction formats and high-granularity instructions complicate the decoding and execution processes. In order to deal with ISA complexities, current implementations decode instructions into simpler micro-operations. These micro-operations are essentially control words that traverse the processor pipeline and perform an equivalent amount of processing to the instructions of a simple ISA.

As complex instructions are individually decoded into constituent micro-operations, an opportunity is created to optimize these micro-operations globally. For example, a basic block's micro-operations, optimized as a unit, can be more efficient than micro-operations generated one instruction at a time. For simpler ISAs, such as PowerPC, SPARC, and MIPS, this type of optimization is performed during compilation, as the emitted code is effectively at the same level as the micro-operations of a complex ISA. This optimization opportunity thus exists with complex ISAs, even when basic blocks are statically optimized. Leveraging this opportunity, of course, requires raising the architectural state boundaries to the basic block level. That is, architectural state is only guaranteed to correspond at basic block boundaries.

The x86 instruction set architecture in particular can benefit from optimizations at the micro-operation level. The x86 ISA provides only a few 32-bit registers for storing temporary values, which constrains optimization at compile time. The ISA's two-address instruction format, along with the high granularity of some common instructions, also increases the opportunity for micro-operation optimization. Furthermore, non-uniform instruction semantics, such as opcodes that require specific source registers (e.g., the x86 DIV instruction), limit a compiler's ability to efficiently allocate registers. Optimization at the micro-operation level partially alleviates these problems because the register assignment is not restricted to the architectural register space.
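To make the opportunity concrete, the following minimal sketch (Python pseudocode of our own devising; the `MicroOp` record, the operand naming, and the `forward_stores` pass are illustrative assumptions, not the paper's actual micro-operation format) shows the kind of redundancy that is invisible when each x86 instruction is decoded in isolation, but easy to remove once a block's micro-operations are treated as a unit:

```python
from dataclasses import dataclass

@dataclass
class MicroOp:
    op: str      # "load", "store", "add", "move", ...
    dst: str     # destination register, or address for stores
    srcs: tuple  # source registers, or address for loads

# Hypothetical expansion of two adjacent x86 instructions that both
# touch the memory location [esp+8]:
#     add [esp+8], eax      ; read-modify-write
#     mov ebx, [esp+8]      ; re-reads the value just written
uops = [
    MicroOp("load",  "t0",    ("esp+8",)),
    MicroOp("add",   "t1",    ("t0", "eax")),
    MicroOp("store", "esp+8", ("t1",)),
    MicroOp("load",  "t2",    ("esp+8",)),  # redundant at block scope
    MicroOp("move",  "ebx",   ("t2",)),
]

def forward_stores(block):
    """Store forwarding: replace a load whose address matches the most
    recent store in the block with a register-to-register move."""
    last_store = {}  # address -> register holding the stored value
    out = []
    for u in block:
        if u.op == "store":
            last_store[u.dst] = u.srcs[0]
            out.append(u)
        elif u.op == "load" and u.srcs[0] in last_store:
            out.append(MicroOp("move", u.dst, (last_store[u.srcs[0]],)))
        else:
            out.append(u)
    return out

print(forward_stores(uops))  # the second load becomes "move t2 <- t1"
```

A per-instruction decoder must emit the second load because it cannot see across instruction boundaries; a block-level optimizer can replace it, provided the hardware can check the assumption that no intervening store to the same address occurs. In real x86 code the address match must also account for aliasing, which is why the paper's load-removal optimizations are speculative and rely on the recovery mechanism described in Section 2.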
In this paper, we evaluate the potential of performing optimizations on regions of x86 micro-operations, where the regions are larger than basic blocks. Using the rePLay Framework as our microarchitectural substrate, we evaluate a processor architecture that performs micro-operation optimization on atomic dynamic instruction traces, or frames. rePLay contains hardware support for performing optimizations (via an optimization engine) and hardware support for speculation recovery (enabling speculative optimization). The optimizer performs simple optimizations on each frame, such as dead code elimination, reassociation, and store forwarding. We study the effect of optimization using trace-driven simulation on both SPECint 2000 benchmarks and commercial x86 applications. On average across this spectrum of applications, we observe that optimization with rePLay reduces micro-operation count by 21% and, in particular, load micro-operation count by 22%. These reductions boost the instruction-level parallelism of a deeply-pipelined 8-wide optimizing rePLay processor by 17% over a non-optimizing configuration across the applications.

[Figure 1. The rePLay mechanism coupled to a processor architecture. The block diagram shows a fetch engine (frame cache and frame sequencer), an optimization engine, a recovery mechanism, an execution engine, and a frame constructor fed by completing instructions.]

In this paper, we provide two major contributions over the previous work on rePLay [4, 13]. First, we evaluate the impact of micro-operation-level optimization in the context of the x86 ISA. Previous work examined hardware-based dynamic optimization of Alpha instructions. This distinction is important: the granularity of the x86 instruction set and its limited register set create inefficiencies that are more prevalent than in binaries of simpler ISAs. The full benefit of x86 optimization requires dealing with memory dependencies, and we describe our scheme for speculatively optimizing around them. We present our evaluation on continuous traces (i.e., including DLL calls and system code) of SPECint benchmarks and real applications.

Second, we describe the high-level microarchitecture of a programmable optimization engine datapath that can be used to perform micro-operation optimizations. The complexity of the optimization datapath and the optimization software is reduced by utilizing three properties of frames: (1) they are atomic, (2) they embody a single control path, and (3) the register renaming process renders the frames into a form amenable to optimizations by guaranteeing that each operation writes to a unique physical register.
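Property (3) is what keeps the optimization software simple. As an illustration (again Python pseudocode under our own assumptions: a `MicroOp` record as in the sketch above, with side-effecting opcodes limited to stores and assertions; this is not the paper's datapath), dead code elimination on a renamed frame collapses to a single backward scan, because each physical register has exactly one writer:

```python
def eliminate_dead_code(frame, live_out):
    """Backward scan: keep a micro-op only if it has a side effect or
    its (unique) destination register is read later or is live-out."""
    live = set(live_out)          # registers needed after the frame
    kept = []
    for u in reversed(frame):
        if u.op in ("store", "assert") or u.dst in live:
            kept.append(u)
            live.discard(u.dst)   # unique writer: the def ends liveness
            live.update(u.srcs)   # operands of a kept op must stay live
        # otherwise u.dst is never read again, so the micro-op is dropped
    kept.reverse()
    return kept
```

Without unique destinations, such a pass would need iterative liveness analysis across merge points; atomicity and the single control path remove the merge points, and renaming removes the multiple writers. In a rePLay-style optimization engine, a pass like this would plausibly run alongside store forwarding and copy propagation, cleaning up the register moves those passes leave behind.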
The remainder of the paper is organized as follows. The next section provides an overview of the rePLay microarchitectural substrate. Section 3 contains examples of the micro-operation optimizations that we evaluate in this paper. Section 4 introduces the primitive operations and structure of the optimization datapath. The experimental infrastructure is described in Section 5, and experimental results are presented in Section 6. Section 7 contains related work. Section 8 provides conclusions.

2 The rePLay Microarchitecture

In this section we provide an overview of the rePLay Framework, which is designed to facilitate aggressive dynamic optimization with low overhead. To this end, the framework consists of five key components: (1) a frame constructor for creating candidate optimization regions, (2) a programmable engine for optimizing these regions, (3) a frame cache for storing these regions on-chip, (4) a component for sequencing between regions, and (5) a mechanism to recover architectural state when speculative optimizations prove incorrect. These components are integrated into a processor's [...]

[...] region, or frame. A frame is similar to a trace in a trace-scheduling compiler [5] or a block in the Block-Structured ISA [8, 11]. All control dependencies within a frame are removed, ensuring that all instructions within the frame are mutually control independent. In particular, either all instructions in a frame commit their results, or none of them do. This atomicity simplifies the optimization algorithms used and allows high-bandwidth instruction fetch. To remove control flow, rePLay's frame constructor converts dynamically biased branches into assertions, then merges sequences of constituent basic blocks into frames. Using frames as an optimization entity is of particular importance because precise architectural state need only be maintained at frame boundaries, thus allowing the optimizer more leeway in performing optimizations. We demonstrate this in Section 3 with an example.

An assertion whose condition is not true triggers a hardware recovery event that rolls back architectural state to the beginning of the frame. A rollback mechanism of this type is already present in most modern processors to support out-of-order, speculative execution. This restoration capability implies that any state generated during frame execution must be buffered until all assertions within the frame have executed successfully (not fired). Once all assertions within a frame have been checked and all micro-operations have been executed, state changes generated within the frame can be committed. This atomicity enables the optimizer to make speculative optimizations safely, with the speculative assumptions enforced by assertions.

With this general design, frame construction can be done in hardware or by the compiler as a specification of the ISA. All of our current work has focused on hardware-level frame construction, but this property is not inherent to rePLay.

3 Micro-operation Optimization

Processors that implement complex [...]