TECHNIQUES FOR TRANSPARENT PROGRAM SPECIALIZATION IN DYNAMIC OPTIMIZERS

by

S. Subramanya Sastry

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the UNIVERSITY OF WISCONSIN-MADISON

2003

ACKNOWLEDGMENTS

8 years, 5 houses, 30 housemates, countless officemates: that is how long it has taken to get this done. I have a number of people to thank who helped me get here, both academically and personally.

First and foremost, I have to acknowledge the support of my advisors, Jim Smith and Ras Bodik. It has been great working in Jim's group because of the tremendous flexibility and "let-go" atmosphere I worked in. That, more than anything, helped me in more ways than Jim himself could possibly know. Coming into Madison as a newly minted undergraduate, I had a keen interest in computer science and in challenging projects, and I had the opportunity to exercise those interests in the Virtual Machines project, initially developing the Strata virtual machine and later working on my PhD research problem. That flexibility and easy-going attitude have also enabled me to pursue interests outside Computer Science, sometimes at the cost of my PhD research, and those interests have since profoundly affected and changed me in ways I couldn't have imagined. On the other hand, it was Ras' push and sustained effort to get me finished and graduated that has gotten me here (finally!). His role has been wonderfully complementary to Jim's, and his effort came in at a time when I needed it most. His relentless stress on clarity in presentation has made the dissertation as good as it is now.

I am also thankful to Mikko Lipasti, Guri Sohi, Charles Fischer, and Susan Horwitz for serving on my prelim and defense committees. I would also like to thank Mario Wolczko of Sun Labs for his feedback on my work.

Among my numerous officemates, I had the opportunity to work most closely with Timothy Heil. On several occasions, I was able to bounce my ideas off him and get good feedback. In addition, he has made significant contributions to the Strata JVM; he is definitely a better software engineer than I am. Todd Bezenek and Min Zhong have also made critical contributions to the Strata JVM. I am grateful to all three of them for their contributions. In addition, Manoj Plakal has provided several bug reports that helped in making Strata more usable.

Thanks to Ho-Seop Kim and Quinn Jacobson for doing system administration in the lab in addition to carrying on their research; it can be a thankless job. I've also enjoyed my interactions with all my officemates and several fellow graduate students in Computer Science during all the years I have been here; thanks to all of them for the camaraderie and the fun times.

This work has been supported by NSF grants CCR-9900610 and EIA-0071924, and by IBM, Microsoft, and Intel.

Outside of academic circles, I have a lot of people to thank; I don't even know how to acknowledge all of them. The person I am today is as much a result of the people who have come into my life and influenced me as it is a result of my own effort. First and foremost, I am thankful and grateful to my parents and my grandmother in ways that words cannot express; they have been tremendously supportive of pretty much anything I have wanted to do and have gone through tremendous hardships to support me and to get me to this stage in my life. Thanks also to my brother and sister for their support and affection.
My cousin Amba and my uncle, Dr. A. B. P. Rao, first got me hooked on computer science. It is doubtful that I would have been so excited by computer science but for them.

My time at Madison has been made extremely enjoyable thanks to all the people I've lived with; I couldn't possibly name them all. In particular, I have had great times living with Venkatesh Ganti, Vinod John, Harit Modi, and Sridhar Gopal. My final years at Madison at 930 Jenifer St are particularly memorable thanks to the company of Daniel Lipson, Kum Yi Lee, Leigh Rosenberg, Navin Ramankutty, and Jeanine Rhemtulla.

Of all the numerous friends I've made at Madison, I've been strongly influenced by many of my women friends. In particular, Vidhi Parthasarathy has the unique distinction of having seen me through all my changes, my joys, my sorrows, my heartbreaks, and everything else I've been through while at Madison. She has been a wonderful friend, loving and affectionate. In my earlier years at Madison, Muthatha Ramanathan helped me cope through tough times. Anita and Leigh have also been profound influences on me and have changed me and my life in ways I couldn't have predicted. They have, each in their own way, given me their unconditional support and love and have made my life at Madison all the more beautiful.

Beyond that, I've met a number of wonderful people committed to issues of social justice through my involvement with Asha, AID, Friends of River Narmada, and South Asia Forum. All these volunteers and other social activists have left a deep impression on me, and I'm grateful to them for teaching me things about the world that the University setting couldn't have.

And finally, thank you, Madison; my times here count for a lot.

ABSTRACT

Program specialization speeds up program execution by eliminating computation that depends on semi-invariant values: known program inputs or repetitive data encountered during program execution. There are three parts to this problem: (i) determining which values are semi-invariant, (ii) determining what code to specialize, and (iii) determining how to specialize. There is a vast body of literature that addresses the last part of the problem, how to specialize. However, existing techniques rely on either the programmer or expensive offline program analysis to determine the semi-invariant values and the code to specialize. Consequently, there is no existing program specialization technique that can be performed at runtime without user intervention, that is, transparently.

This dissertation proposes and investigates techniques for enabling transparent runtime specialization of programs within a dynamic optimizer. The primary contribution of this dissertation is a set of techniques that address the first two parts above: determining which values are semi-invariant and determining what code to specialize.

The specializer collects a store profile to identify frequently modified portions of the program's data structures. Based on this profile, the specializer predicts which parts of the program's data structures are likely to remain invariant for the rest of the program's execution. This store-profile-based invariance detection technique, in addition to being transparent, is more powerful than existing annotation-based approaches. The correctness of these optimistic, speculative invariance assumptions is ensured by runtime guards that detect modifications of the specialized memory locations.
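To make the specialization model concrete, the Java sketch below is purely illustrative: the Interpreter class, the opcodeTable field, and the constant 42 are hypothetical and do not come from the dissertation, and the value re-checking guard shown here is only a self-contained stand-in for the write-detecting guards on specialized memory locations that the dissertation actually proposes.

    // Illustrative sketch only (not the dissertation's code).
    class Interpreter {
        // Data structure that the store profile suggests is rarely written
        // after startup, i.e., a semi-invariant value.
        int[] opcodeTable = new int[256];

        // Generic (unspecialized) code: loads opcodeTable[op] on every call.
        int dispatchGeneric(int op, int arg) {
            return opcodeTable[op] + arg;
        }

        // Specialized version for op == 7, created after the profile predicts
        // opcodeTable is invariant. The constant 42 stands for the value that
        // opcodeTable[7] held at specialization time.
        int dispatchSpecializedForOp7(int arg) {
            // Runtime guard: fall back to the generic code if the speculated
            // invariant no longer holds. (A value re-check is used here only
            // to keep the sketch simple; the dissertation instead detects
            // writes to the specialized locations.)
            if (opcodeTable[7] != 42) {
                return dispatchGeneric(7, arg);
            }
            return 42 + arg;   // invariant memory access folded away
        }
    }

In the approach summarized above, the decision to create such a specialized version, and to treat opcodeTable[7] as invariant, would come from the collected store profile rather than from programmer annotations.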
Due to infrastructural limitations, these runtime guarding techniques have not been implemented and studied in this dissertation.

The collected runtime profiles are used by a scope-building algorithm that identifies the code regions (specialization scopes) that can be profitably specialized. The identified scopes are specialized using a specializer based on the Sparse Conditional Constant Propagation algorithm, extended to eliminate invariant memory accesses as well as to unroll and specialize loops. However, other specializers could be used in combination with the proposed invariance-detection and scope-building techniques.

This dissertation implements a specializer that specializes Java programs available in bytecode form. However, due to infrastructural limitations, the evaluation uses a profile-driven recompilation process (as opposed to true runtime specialization). Experimental evaluation shows that transparent runtime specialization is feasible and can be implemented within dynamic optimizers. The runtime costs are not prohibitive, and the specializer can provide good speedups. For example, the specializer could speed up a publicly available Scheme interpreter by a factor of 1.9. However, with a better-engineered implementation than Strata (which is inefficient and unoptimized for runtime compilation), the runtime compilation overheads would be lower, leading to higher speedups. In addition, with an improved specialization model and an improved scope-building algorithm, more specialization opportunities could be exploited, which might also translate into higher speedups.

Contents

1 Introduction
1.1 Motivations
1.1.1 Motivation 1: Performance
1.1.2 Motivation 2: Co-designed Virtual Machines
1.1.3 Motivation 3: Domain Specific Languages
1.2 Example
1.3 Implementing a transparent runtime program specializer: Key Challenges
1.4 Implementing a transparent runtime program specializer: Key Ideas behind our solution
1.5 Summary of contributions
1.6 Thesis Organization
2 Background and Related Work
2.1 Program Specialization
2.1.1 Partial Evaluation
2.2 Comparing Program Specialization Techniques
2.2.1 Exploiting invariance in Runtime Data Structures
2.3 Computation Reuse
3 Program Specialization in Dynamic Optimizers: An Overview
3.1 Specialization model
3.1.1 Specialization scopes and keys
3.1.2 Specialization transformation
3.2 Example
3.3 Dynamic Optimizer environment
3.4 The specialization process
3.4.1 Specialization Meta Issues
3.5 Profiling
3.5.1 Profile-based Invariance Detection
3.5.2 Comparison with Annotation-based Invariance Detection
3.5.3 Low-overhead Runtime Profiling
3.6 Guarding specialized memory locations
3.7 Building scopes