Accelerating Sequential Consistency for Java with Speculative Compilation
Lun Liu, University of California, Los Angeles, USA ([email protected])
Todd Millstein, University of California, Los Angeles, USA ([email protected])
Madanlal Musuvathi, Microsoft Research, Redmond, USA ([email protected])

Abstract

A memory consistency model (or simply a memory model) specifies the granularity and the order in which memory accesses by one thread become visible to other threads in the program. We previously proposed the volatile-by-default (VBD) memory model as a natural form of sequential consistency (SC) for Java. VBD is significantly stronger than the Java memory model (JMM) and incurs relatively modest overheads in a modified HotSpot JVM running on Intel x86 hardware. However, the x86 memory model is already quite close to SC. It is expected that the cost of VBD will be much higher on the other widely used hardware platform today, namely ARM, whose memory model is very weak.

In this paper, we quantify this expectation by building and evaluating a baseline volatile-by-default JVM for ARM called VBDA-HotSpot, using the same technique previously used for x86. Through this baseline we report, to the best of our knowledge, the first comprehensive study of the cost of providing language-level SC for a production compiler on ARM. VBDA-HotSpot indeed incurs a considerable performance penalty on ARM, with average overheads on the DaCapo benchmarks on two ARM servers of 57% and 73% respectively.

Motivated by these experimental results, we then present a novel speculative technique to optimize language-level SC. While several prior works have shown how to optimize SC in the context of an offline, whole-program compiler, to our knowledge this is the first optimization approach that is compatible with modern implementation technology, including dynamic class loading and just-in-time (JIT) compilation. The basic idea is to modify the JIT compiler to treat each object as thread-local initially, so accesses to its fields can be compiled without fences. If an object is ever accessed by a second thread, any speculatively compiled code for the object is removed, and future JITed code for the object will include the necessary fences in order to ensure SC. We demonstrate that this technique is effective, reducing the overhead of enforcing VBD by one-third on average, and additional experiments validate the thread-locality hypothesis that underlies the approach.

CCS Concepts: Software and its engineering → Concurrent programming languages; Just-in-time compilers; Runtime environments.

Keywords: memory consistency models, volatile by default, sequential consistency, Java virtual machine, speculative compilation

ACM Reference Format:
Lun Liu, Todd Millstein, and Madanlal Musuvathi. 2019. Accelerating Sequential Consistency for Java with Speculative Compilation. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '19), June 22–26, 2019, Phoenix, AZ, USA. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3314221.3314611

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
PLDI '19, June 22–26, 2019, Phoenix, AZ, USA
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-6712-7/19/06...$15.00
https://doi.org/10.1145/3314221.3314611

1 Introduction

A memory consistency model (or simply a memory model) specifies the granularity and the order in which memory accesses by one thread become visible to other threads in the program. We recently proposed a strong memory model for Java called volatile-by-default [20]. In this semantics, all variables are treated as if they were declared volatile, thereby providing sequential consistency (SC) at the level of Java bytecode. Specifically, all programs are guaranteed to obey the per-thread program order, and accesses to individual memory locations (including doubles and longs) are always atomic. In contrast, the Java memory model (JMM) [22] only provides these guarantees for programs that are data-race-free, and it has a weak and complex semantics for programs that contain data races.

The cost of the strong volatile-by-default semantics is a loss of performance versus the JMM: the compiler must restrict its optimizations to avoid reordering accesses to shared memory, and it must also insert fence instructions in the generated code to prevent the hardware from performing such reorderings. This cost is relatively modest for a modified version of Oracle's HotSpot JVM running on Intel x86 hardware [20], but the x86 memory model is already quite close to SC. In particular, volatile reads do not require any hardware fences on x86. As reads tend to far outnumber writes in programs, it is reasonable to expect the overhead of the volatile-by-default semantics to be much higher on weaker memory models such as ARM and PowerPC, which require fences for both volatile reads and writes.

In this paper, we quantify this expectation by building and evaluating a baseline volatile-by-default JVM for ARM using the same implementation technique as previously used for x86 [20]. This new JVM, which we call VBDA-HotSpot, is a modification of the ARM port of the HotSpot JVM from OpenJDK8u. Through this baseline we report, to the best of our knowledge, the first comprehensive study of the cost of providing language-level SC for a production compiler on ARM. Motivated by these experimental results, we then present a novel speculative technique to optimize language-level SC and demonstrate its effectiveness in reducing the overhead of VBDA-HotSpot. While several prior works have shown how to optimize SC in the context of an offline, whole-program compiler [17, 35, 38], to our knowledge this is the first optimization approach that is compatible with modern implementation technology, including dynamic class loading and just-in-time (JIT) compilation. The implementations of VBDA-HotSpot and the optimized version, called S-VBD, are available on our GitHub repository: https://github.com/Lun-Liu/schotspot-aarch64.

Baseline Overhead. Experiments on our baseline implementation, VBDA-HotSpot, show that the volatile-by-default semantics incurs a considerable performance penalty on ARM, as expected. However, we observe that the performance overhead crucially depends on the specific fences used to implement the volatile semantics. With the default fences that HotSpot employs to implement volatile loads and stores on ARM, VBDA-HotSpot incurs average and maximum overheads of 195% and 429% (!) on the DaCapo benchmarks [7] for a modern 8-core ARM server. But employing an alternative choice of ARM fences reduces the average and maximum overheads on that machine respectively to 73% and 129%. We also find similar results on a 96-core ARM server, with VBDA-HotSpot incurring an av-

Offline, whole-program approaches to optimizing SC are not compatible with modern virtual-machine technology, where classes are loaded dynamically and code is optimized and compiled during execution. Indeed, the HotSpot JVM already includes an escape analysis, and VBDA-HotSpot does not insert fences for field accesses of non-escaping objects, but the constraints of the JVM require this analysis to be fast and intraprocedural, thereby limiting its effectiveness.

We propose a new optimization technique that, to the best of our knowledge, is the first optimization for language-level SC that is compatible with modern language features such as JIT compilation and dynamic class loading. Like other JIT-based optimizations, it is speculative. The basic idea is to modify the JIT compiler to treat each object as safe initially, meaning that accesses to its fields can be compiled without fences. If an object ever becomes unsafe during execution, any speculatively compiled code for the object is removed, and future JITed code for the object will include the necessary fences in order to ensure SC.

It is challenging to turn this high-level idea into an effective optimization. First, dynamic concurrency analyses for Java are very expensive, so a direct application of these analyses to track whether an object is safe would defeat the purpose. For example, dynamic data-race detection would provide the most precise notion of safe, but such analyses for Java have average overheads of 8× or more [40]. Instead our approach treats an object as safe if it is thread-local, i.e. accessed only by the thread that created it. While this approach unnecessarily inserts fences on objects that are shared but data-race-free, we hypothesize that many objects are thread-local, thereby allowing us to remove a large percentage of fences. Our experimental evaluation shows that this hypothesis is well founded. Further, we leverage this approximate notion of safe to optimize its enforcement, and in particular we develop an intraprocedural analysis to remove many per-access thread-locality checks that are provably redundant.

Second, our technique must properly account for memory accesses that come from interpreted code as well as those from JITed native code. We have chosen to always insert fence instructions for memory accesses in interpreted code. This approach trades some performance for implementation simplicity, but since "hot" code will eventually get compiled we expect the performance loss to be minimal.