Lightning BOLT: Powerful, Fast, and Scalable Binary Optimization

Maksim Panchenko, Facebook, Inc., Menlo Park, CA, USA ([email protected])
Rafael Auler, Facebook, Inc., Menlo Park, CA, USA ([email protected])
Laith Sakka, Facebook, Inc., Seattle, WA, USA ([email protected])
Guilherme Ottoni, Facebook, Inc., Menlo Park, CA, USA ([email protected])

Abstract

Profile-guided binary optimization has proved to be an important technology for achieving peak performance, particularly for the large-scale binaries that are typical of data-center applications. By applying the profile data at the same representation where sampling-based profiling is collected, binary optimizers can provide double-digit speedups over binaries compiled with profile-guided optimizations using similarly collected profile data. The main blocker for the adoption of binary optimizers in practice is the overhead that they add to the already long and demanding build pipelines used to produce highly optimized binaries, which already include aggressive compiler optimizations guided by profile data as well as link-time optimizations. This paper addresses the overheads of binary optimizers in the context of BOLT, a modern and powerful open-source binary optimizer. More specifically, this paper describes Lightning BOLT, an improved version of the BOLT binary optimizer that drastically reduces BOLT's processing time and memory requirements, while preserving BOLT's effectiveness in improving the final binary's performance. Using a set of real-world data-center and open-source applications, we show that Lightning BOLT speeds up BOLT's processing by an average of 4.71× and reduces BOLT's memory consumption by 70.5% on average. Furthermore, Lightning BOLT also provides an adjustable mechanism to further reduce BOLT's overheads at the cost of some lost performance for the final binary.

CCS Concepts: · Software and its engineering → Compilers; · Computing methodologies → Parallel algorithms; · General and reference → Performance.

Keywords: binary optimization, code optimization, parallelization

ACM Reference Format:
Maksim Panchenko, Rafael Auler, Laith Sakka, and Guilherme Ottoni. 2021. Lightning BOLT: Powerful, Fast, and Scalable Binary Optimization. In Proceedings of the 30th ACM SIGPLAN International Conference on Compiler Construction (CC '21), March 2–3, 2021, Virtual, Republic of Korea. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3446804.3446843

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
CC '21, March 2–3, 2021, Virtual, Republic of Korea
© 2021 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-8325-7/21/03.
https://doi.org/10.1145/3446804.3446843
†Artifact available at: https://doi.org/10.5281/zenodo.4451492

1 Introduction

A binary optimizer is a special flavor of a compiler that transforms an input binary into a more performant output binary. Binary optimization has proved to be a powerful approach to achieve peak performance [12, 15, 22]. Previous work has demonstrated that significant, double-digit speedups can be achieved via binary optimization on top of highly optimized binaries compiled with mature compilers like GCC [10] and Clang [14], even when these compilers are empowered with link-time optimizations (LTO) and profile-guided optimizations (PGO) [22]. In particular, binary optimizers are well positioned to perform code-layout optimizations with much greater accuracy than compilers can achieve with profile-guided optimizations. As Panchenko et al. [22] clearly identified, these opportunities arise from compilers' intrinsic inaccuracy and limitations in mapping profile data collected at the machine-instruction level back to the intermediate representations where optimizations are applied.

Unfortunately, the benefits of employing a binary optimizer come with a cost. Even though profile data collection can be made essentially free [24], binary optimization still incurs extra processing overheads in the binary-building pipeline. These overheads, in terms of build time and memory usage, can be significant to the point that they prevent the use of binary optimizers in real production environments, despite providing significant speedups to widely deployed applications [26, 28].

To address these scalability concerns, this paper studies techniques to reduce the processing overheads of binary optimization in the context of BOLT [22], a modern, production-quality, open-source binary optimizer. Specifically, this paper shows how BOLT's overheads can be greatly reduced by employing two techniques: parallel processing and selective optimizations.

Parallel compilers have been studied and employed before. However, they require significant engineering effort. In general, this effort is only justified in dynamic compilation systems, where the benefits of quickly producing optimized code are bigger due to less time spent executing interpreted or less optimized code [5, 17, 18, 21]. In the static compilation domain, mainstream compilers are still sequential applications due to the engineering challenges in parallelizing them [3, 4]. Instead, build systems leverage process-level parallelism by compiling independent files concurrently [2]. However, binary optimizers, which process a single linked binary, do not have the luxury of leveraging such parallelism. This paper describes our approach to parallelizing BOLT.

The second technique we employ to reduce the overhead of binary optimization is to apply optimizations selectively. BOLT's original design processed all the functions in the input binary. In this paper, we demonstrate how that overhead can be significantly reduced by restricting BOLT's optimizations to only a portion of the binary. Although straightforward at a high level, this approach requires careful engineering, because even the portions of the binary that are not optimized still require patching to account for the optimized portions. This paper also describes how Lightning BOLT leverages this technique to increase BOLT's robustness and applicability, allowing it to process binaries even when they cannot be correctly disassembled.

By combining parallel processing and selective optimizations, this paper demonstrates how Lightning BOLT reduces BOLT's processing time and memory overheads by 78.8% and 70.5%, respectively, when optimizing real-world data-center and open-source workloads. Overall, this paper makes the following contributions:

1. It describes how parallel processing and selective optimization can be applied in a state-of-the-art binary optimizer to drastically reduce its processing overheads.
2. It provides an extensive evaluation demonstrating the impact of the studied techniques when optimizing real large-scale data-center workloads.
3. It points to future directions for further reducing the overheads of binary optimizers.

The rest of this paper is organized as follows. Section 2 reviews the architecture of binary optimizers in general and BOLT in particular. After that, Section 3 and Section 4 respectively describe how we have enhanced BOLT with parallel processing and selective optimizations. Section 5 then presents the results of our extensive evaluation demonstrating the impact of this work on large-scale applications. Finally, Section 6 discusses related work and Section 7 concludes the paper.

2 Background

This section gives an overview of the architecture of the BOLT binary optimizer [22]. Although we focus on BOLT here, this general architecture is common to other binary optimizers [7, 12, 15, 25].

Figure 1 illustrates a block diagram of BOLT's binary rewriting pipeline: Function Discovery → Read Debug Info → Disassembly → CFG Construction → Read Profile Data → Local Optimizations → Global Optimizations → Emit and Link Functions → Update Debug Info → Rewrite Binary File.

Figure 1. Overview of BOLT pipeline. Highlighted are the steps parallelized in Lightning BOLT.

BOLT starts by identifying all the functions in the binary by leveraging ELF's symbol table. In case the binary was compiled with debug information, BOLT then reads this information. After that, BOLT proceeds to disassemble the functions and to create an in-memory representation of them as control-flow graphs (CFGs) whose basic blocks contain sequences of machine instructions. This is the single internal representation of the binary code on which BOLT's optimizations are applied. Next, BOLT

Table 1. BOLT's optimization pipeline.

  Optimization       Description                                       Local  PGO
  1. strip-rep-ret   Strip repz from repz retq instructions used for   ✓
                     legacy AMD processors
  2. icf             Identical code folding
  3. icp             Indirect call promotion                           ✓      ✓
  4. peepholes       Simple peephole optimizations

3 Parallel Processing

Compilation of large projects in parallel is mandatory in modern systems. However, parallelization in compilers is usually achieved at the build-system level, invoking the compiler multiple times in separate processes, as it is easier to
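To illustrate the CFG Construction step of Figure 1, the sketch below shows the classic leader-finding pass that splits a disassembled instruction stream into basic blocks: an instruction starts a block if it is the function entry, a branch target, or the instruction following a branch. The `Inst` type and `findBlockStarts` helper are simplified assumptions for exposition, not BOLT's actual (LLVM MCInst-based) data structures.

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Minimal stand-in for a decoded machine instruction.
struct Inst {
  uint64_t addr = 0;
  bool isBranch = false;
  uint64_t target = 0;  // branch target, valid only when isBranch is true
};

// Return the sorted addresses of basic-block leaders in one function.
std::vector<uint64_t> findBlockStarts(const std::vector<Inst>& insts) {
  std::set<uint64_t> inFunc;
  for (const Inst& i : insts) inFunc.insert(i.addr);

  std::set<uint64_t> leaders;
  if (!insts.empty()) leaders.insert(insts.front().addr);  // function entry
  for (size_t k = 0; k < insts.size(); ++k) {
    if (!insts[k].isBranch) continue;
    if (inFunc.count(insts[k].target))      // intra-function branch target
      leaders.insert(insts[k].target);
    if (k + 1 < insts.size())               // fall-through after the branch
      leaders.insert(insts[k + 1].addr);
  }
  return {leaders.begin(), leaders.end()};
}
```

For example, a function with instructions at addresses 0, 4, 8, 12, 16, where the instruction at 12 branches back to 4, yields block starts at 0 (entry), 4 (branch target), and 16 (fall-through).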
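The parallelism that Section 3 develops rests on the observation that local passes touch one function at a time and share no mutable state, so functions can be processed concurrently within a single process. The sketch below shards a worklist of functions across `std::async` tasks; the `Function` type and the scoring pass are hypothetical placeholders, and this is only a sketch of the idea, not BOLT's actual implementation.

```cpp
#include <future>
#include <numeric>
#include <vector>

// Hypothetical per-function state for a local optimization pass.
struct Function {
  std::vector<int> blockHotness;
  int score = 0;
};

// Stand-in for a local pass: derives a layout score from profile counts.
static void localPass(Function& f) {
  f.score = std::accumulate(f.blockHotness.begin(), f.blockHotness.end(), 0);
}

// Run the local pass over all functions using `shards` concurrent tasks.
// Each shard owns a disjoint subset of functions, so no locking is needed.
void optimizeAll(std::vector<Function>& funcs, unsigned shards) {
  std::vector<std::future<void>> tasks;
  for (unsigned s = 0; s < shards; ++s) {
    tasks.push_back(std::async(std::launch::async, [&funcs, s, shards] {
      for (size_t i = s; i < funcs.size(); i += shards)
        localPass(funcs[i]);
    }));
  }
  for (auto& t : tasks) t.get();  // join all shards before continuing
}
```

The strided sharding here is the simplest static partition; a work-stealing thread pool would balance load better when function sizes vary widely, which is the harder engineering problem a production implementation has to solve.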
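The selective-optimization technique from the introduction can be pictured as a profile-driven filter: only functions whose samples cover a target fraction of total execution enter the optimization pipeline, while the remaining cold functions are skipped (though, as noted above, they still require patching to stay consistent with the relocated hot code). The coverage-based policy below is an illustrative assumption, not BOLT's actual heuristic.

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical per-function profile record.
struct FuncProfile {
  std::string name;
  uint64_t sampleCount;  // execution samples attributed to this function
};

// Pick the hottest functions until `coverage` (e.g. 0.95) of all
// samples is reached; everything else is left unoptimized.
std::vector<std::string> selectHotFunctions(std::vector<FuncProfile> profiles,
                                            double coverage) {
  std::sort(profiles.begin(), profiles.end(),
            [](const FuncProfile& a, const FuncProfile& b) {
              return a.sampleCount > b.sampleCount;
            });
  uint64_t total = 0;
  for (const auto& p : profiles) total += p.sampleCount;

  std::vector<std::string> hot;
  uint64_t covered = 0;
  for (const auto& p : profiles) {
    if (total == 0 || covered >= coverage * total) break;
    hot.push_back(p.name);
    covered += p.sampleCount;
  }
  return hot;
}
```

Because sampled profiles are typically heavily skewed toward a small hot set, a filter like this can exclude most functions from the expensive parts of the pipeline while losing little of the final speedup.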
