uops.info: Characterizing Latency, Throughput, and Port Usage of Instructions on Intel Microarchitectures

Andreas Abel and Jan Reineke
{abel,reineke}@cs.uni-saarland.de
Saarland University, Saarland Informatics Campus, Saarbrücken, Germany

Abstract

Modern microarchitectures are some of the world's most complex man-made systems. As a consequence, it is increasingly difficult to predict, explain, let alone optimize the performance of software running on such microarchitectures. As a basis for performance predictions and optimizations, we would need faithful models of their behavior, which are, unfortunately, seldom available.

In this paper, we present the design and implementation of a tool to construct faithful models of the latency, throughput, and port usage of x86 instructions. To this end, we first discuss common notions of instruction throughput and port usage, and introduce a more precise definition of latency that, in contrast to previous definitions, considers dependencies between different pairs of input and output operands. We then develop novel algorithms to infer the latency, throughput, and port usage based on automatically-generated microbenchmarks that are more accurate and precise than existing work.

To facilitate the rapid construction of optimizing compilers and tools for performance prediction, the output of our tool is provided in a machine-readable format. We provide experimental results for processors of all generations of Intel's Core architecture, i.e., from Nehalem to Coffee Lake, and discuss various cases where the output of our tool differs considerably from prior work.

ACM Reference Format:
Andreas Abel and Jan Reineke. 2019. uops.info: Characterizing Latency, Throughput, and Port Usage of Instructions on Intel Microarchitectures. In 2019 Architectural Support for Programming Languages and Operating Systems (ASPLOS '19), April 13–17, 2019, Providence, RI, USA. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3297858.3304062

1 Introduction

Developing tools that predict, explain, or even optimize the performance of software is challenging due to the complexity of today's microarchitectures. Unfortunately, this challenge is exacerbated by the lack of a precise documentation of their behavior. While the high-level structure of modern microarchitectures is well-known and stable across multiple generations, lower-level aspects may differ considerably between microarchitecture generations and are generally not as well documented. An important aspect with a relatively strong influence on performance is how ISA instructions decompose into micro-operations (µops); which ports these µops may be executed on; and what their latencies are.

Knowledge of this aspect is required, for instance, to build precise performance-analysis tools like CQA [8], Kerncraft [18], or llvm-mca [6]. It is also useful when configuring cycle-accurate simulators like Zesto [27], gem5 [7], McSim+ [3] or ZSim [31]. Optimizing compilers, such as LLVM [26] and GCC [15], can profit from detailed instruction characterizations to generate efficient code for a specific microarchitecture. Similarly, such knowledge is helpful when manually fine-tuning a piece of code for a specific processor.

Unfortunately, information about the port usage, latency, and throughput of individual instructions at the required level of detail is hard to come by. Intel's processor manuals [23] only contain latency and throughput data for a number of "commonly-used instructions." They do not contain information on the decomposition of individual instructions into µops, nor do they state the execution ports that these µops can use.

The only way to obtain accurate instruction characterizations for many recent microarchitectures is thus to perform measurements using microbenchmarks. Such measurements are aided by the availability of performance counters that provide precise information on the number of elapsed cycles and the cumulative port usage of instruction sequences.
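To make this kind of measurement concrete, the following is a minimal C sketch of a latency and throughput microbenchmark for the ADD instruction. It is not the tool described in this paper (which relies on hardware performance counters and generates its benchmarks automatically), but an illustration of the underlying idea; the use of the time-stamp counter, the choice of ADD, the iteration count, and the unroll factor are assumptions made for the example. A dependency chain serializes the instruction instances and exposes the latency, whereas independent instances expose the throughput.

```c
/*
 * Illustrative sketch only: estimate latency and throughput of ADD by timing
 * unrolled inline-assembly sequences with the time-stamp counter.  A real
 * measurement would use hardware performance counters; this stand-in ignores
 * issues such as serialization of RDTSC and core-vs-TSC frequency differences.
 * Build with, e.g., gcc -O2 on an x86-64 Linux system.
 */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define ITERS 10000000L

int main(void) {
    uint64_t a = 1, b = 2, c = 3, d = 4, e = 5;
    uint64_t start, end;

    /* Latency: each ADD depends on the previous one through register "a". */
    start = __rdtsc();
    for (long i = 0; i < ITERS; i++)
        __asm__ __volatile__("add %1, %0\n\t"
                             "add %1, %0\n\t"
                             "add %1, %0\n\t"
                             "add %1, %0"
                             : "+r"(a) : "r"(e));
    end = __rdtsc();
    printf("ADD latency    ~ %.2f cycles\n", (end - start) / (4.0 * ITERS));

    /* Throughput: four independent ADDs per block, no chain between them. */
    start = __rdtsc();
    for (long i = 0; i < ITERS; i++)
        __asm__ __volatile__("add %4, %0\n\t"
                             "add %4, %1\n\t"
                             "add %4, %2\n\t"
                             "add %4, %3"
                             : "+r"(a), "+r"(b), "+r"(c), "+r"(d) : "r"(e));
    end = __rdtsc();
    printf("ADD throughput ~ %.2f cycles/instruction\n", (end - start) / (4.0 * ITERS));

    return (int)(a + b + c + d); /* keep the results alive */
}
```

On recent Core microarchitectures one would expect the chained variant to report roughly one cycle per ADD and the independent variant to report a fraction of a cycle per ADD, since ADD can execute on several ports in parallel.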
A relatively large body of work [1, 2, 4, 9, 10, 19, 28–30, 32, 33, 35, 36] uses microbenchmarks to infer properties of the memory hierarchy. Another line of work [5, 13, 14, 25] uses automatically generated microbenchmarks to characterize the energy consumption of microprocessors. Comparably little work [8, 12, 17, 34] is targeted at instruction characterizations. Furthermore, existing approaches, such as [12], require significant manual effort to create the microbenchmarks and to interpret the results of the experiments. Moreover, its results are not always accurate and precise, as we will show later.

In this paper, we develop a new approach that can automatically generate microbenchmarks in order to characterize the latency, throughput, and port usage of instructions on Intel Core CPUs in an accurate and precise manner.

Before describing our algorithms and their implementation, we first discuss common notions of instruction latency, throughput, and port usage. For latency, we propose a new definition that, in contrast to previous definitions, considers dependencies between different pairs of input and output operands, which enables more accurate performance predictions.
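To illustrate why a single latency value per instruction can be misleading, the sketch below (again an illustration under assumed details, not the measurement methodology of this paper) times two different dependency chains for "ADD reg, [mem]": one in which only the register operand lies on the chain, and one in which the result feeds back into the address computation. The second chain additionally includes the load, so the two operand pairs exhibit very different latencies.

```c
/*
 * Illustrative sketch only: the latency of "ADD reg, [mem]" depends on which
 * input operand the dependency chain runs through.
 *   Chain A: register operand -> result; the load address is loop-invariant,
 *            so consecutive instructions wait only for the ALU part.
 *   Chain B: address register -> result; the next load address depends on the
 *            previous result, so each instruction also waits for the load.
 * The memory cell holds 0 so that the accumulator stays 0 in chain B.
 */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define ITERS 10000000L

int main(void) {
    static uint64_t cell = 0;
    uint64_t acc = 0, start, end;

    /* Chain A: acc += cell; the address (%1) never changes. */
    start = __rdtsc();
    for (long i = 0; i < ITERS; i++)
        __asm__ __volatile__("add (%1), %0\n\t"
                             "add (%1), %0\n\t"
                             "add (%1), %0\n\t"
                             "add (%1), %0"
                             : "+r"(acc) : "r"(&cell) : "memory");
    end = __rdtsc();
    printf("latency register -> result ~ %.2f cycles\n", (end - start) / (4.0 * ITERS));

    /* Chain B: acc += mem[&cell + acc]; the address depends on the previous result. */
    acc = 0;
    start = __rdtsc();
    for (long i = 0; i < ITERS; i++)
        __asm__ __volatile__("add (%1,%0,1), %0\n\t"
                             "add (%1,%0,1), %0\n\t"
                             "add (%1,%0,1), %0\n\t"
                             "add (%1,%0,1), %0"
                             : "+r"(acc) : "r"(&cell) : "memory");
    end = __rdtsc();
    printf("latency address -> result  ~ %.2f cycles\n", (end - start) / (4.0 * ITERS));

    return (int)acc;
}
```

On a typical Core microarchitecture, chain A reports about one cycle, while chain B reports roughly the L1 load-to-use latency plus the ALU latency, which is why a definition that assigns latencies to pairs of input and output operands is more informative than a single number per instruction.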
We then develop algorithms that generate assembler code for microbenchmarks to measure the properties of interest for most x86 instructions. Our algorithms take into account explicit and implicit dependencies, such as, e.g., dependencies on status flags. Therefore, they require detailed information on the x86 instruction set. We create a machine-readable XML representation of the x86 instruction set that contains all the information needed for automatically generating assembler code for each instruction. The relevant information is automatically extracted from the configuration files of Intel's x86 Encoder Decoder (XED) [21] library.

We have implemented our algorithms in a tool that we have successfully applied to all microarchitecture generations of Intel's Core architecture, i.e., from Nehalem to Coffee Lake. In addition to running the generated microbenchmarks on the actual hardware, we have also implemented a variant of our tool that runs them on top of Intel IACA [20]. IACA is a closed-source tool published by Intel that can statically analyze the performance of loop kernels on different Intel microarchitectures. It is, however, updated only infrequently, and its results are not always accurate, as we will show later.

The output of our tool is available in a machine-readable format, so that it can be easily used to implement, e.g., simulators, performance prediction tools, or optimizing compilers.

Finally, we discuss several interesting insights obtained by comparing the results from our measurements with previously published data. Our precise latency data, for example,

2 Related Work

We first consider information provided by Intel directly, and then look at measurement-based approaches.

2.1 Information provided by Intel

Intel's Optimization Reference Manual [23] contains a set of tables with latency and throughput data for "commonly-used instructions." The tables are not complete; for some instructions, only throughput information is provided. The manual does not contain detailed information about the port usage of individual instructions.

IACA [20] is a closed-source tool developed by Intel that can statically analyze the performance of loop kernels on several microarchitectures (which can be different from the system that the tool is run on). The tool generates a report which includes throughput and port usage data of the analyzed loop kernel. By considering loop kernels with only one instruction, it is possible to obtain the throughput of the corresponding instruction. However, it is, in general, not possible to determine the port bindings of the individual µops this way. Early versions of IACA were also able to analyze the latency of loop kernels; however, support for this was dropped in version 2.2. IACA is updated only infrequently. Support for the Broadwell microarchitecture (which was released in 2014), for example, was added only in 2017. There is currently no support for the two most recent microarchitectures, Kaby Lake and Coffee Lake, which were released in 2016 and 2017, respectively.

The instruction scheduling components of LLVM [26] for the Sandy Bridge, Haswell, Broadwell, and Skylake microarchitectures were recently updated and extended with latency and port usage information that was, according to the commit message (https://reviews.llvm.org/rL307529), provided by the architects of these microarchitectures. The resulting models are available in the LLVM repository.

2.2 Measurement-based Approaches

Agner Fog [12] provides lists of instruction latency, throughput, and port usage data for different x86 microarchitectures. The data in the lists is not complete; e.g., latency data for instructions with memory operands is often missing. The port usage information is sometimes inaccurate or imprecise; reasons for this are discussed in Section 5.1. The data is
