Hardware Mechanisms for Distributed Dynamic Software Analysis

by Joseph Lee Greathouse

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Science and Engineering) in The University of Michigan, 2012.

Doctoral Committee:
  Professor Todd M. Austin, Chair
  Professor Scott Mahlke
  Associate Professor Robert Dick
  Associate Professor Jason Nelson Flinn

© Joseph Lee Greathouse 2012. All Rights Reserved.

Dedication

To my parents, Gail and Russell Greathouse. Without their support throughout my life, I would never have made it this far.

Acknowledgments

First and foremost, I must thank my advisor, Professor Todd Austin, for his help and guidance throughout my graduate career. I started graduate school with the broad desire to "research computer architecture," but under Professor Austin's watch, I have been able to hone this into work that interests us both and has real applications. His spot-on advice about choosing topics, performing research, writing papers, and giving talks has been an invaluable introduction to the research world.

The members of my committee, Professors Robert Dick, Jason Flinn, and Scott Mahlke, also deserve special gratitude. Their insights, comments, and suggestions have immeasurably improved this dissertation. Together their expertise spans low-level hardware to systems software and beyond. This meant that I needed to ensure that any of my ideas were defensible from all sides.

I have been fortunate enough to collaborate with numerous co-authors. I have often published with Professor Valeria Bertacco, who, much like Professor Austin, has given me invaluable advice during my time at Michigan. I am extremely lucky to have had the chance to work closely with Ilya Wagner, David Ramos, and Andrea Pellegrini, all of whom have continued to be good friends even after the high-stress publication process. My other Michigan co-authors are Professor Seth Pettie, Gautam Bhatnagar, Chelsea LeBlanc, Yixin Luo, and Hongyi Xin, each of whom has significantly contributed to my graduate career and whose help I greatly appreciate.

I was also fortunate to have spent five months doing research at Intel. This opportunity broadened my research into concurrency bugs and helped break me out of my post-quals slump. Matt Frank, Zhiqiang Ma, and Ramesh Peri were co-authors on the paper that resulted from this work, and I heartily thank them for all of their help. I also wish to thank the entire Inspector XE team, whose product was amazing to work with. Matt Braun, especially, was always available to help keep my project moving forward.

Matt Frank also aided me immensely when I was an undergraduate at the University of Illinois. He and Professors Sanjay Patel and Donna Brown were instrumental in convincing me to go to graduate school in the first place. I am certain that their recommendation letters were the main reason I was admitted into Michigan. I am also thankful to Nick Carter, who was a faculty member at UIUC at the time, for allowing me to do research with his group during my senior year. I failed completely at the tasks he assigned me, but it was an incomparable learning experience.

I doubt I would have been able to complete this dissertation without the support of the other graduate students in my lab. I'm extremely lucky to have had such amazing people in close proximity, which has really proven to me the power of a good group of motivated people. Thanks to all of you for being great: Bill Arthur, Adam Bauserman, Kai-hui Chang, Debapriya Chatterjee, Jason Clemons, Kypros Constantinides, Shamik Ganguly, Jeff Hao, Andrew Jones, Rawan Khalek, Shashank Mahajan, Biruk Mammo, Jinsub Moon, Andreas Moustakas, Ritesh Parikh, Robert Perricone, Steve Plaza, Shobana Sudhakar, and Dan Zhang.

There are also loads of people in the department that have been an inspiration. Michigan's CSE program is strong because of people like you: Professor Tom Wenisch, Timur Alperovich, Bashar Al-Rawi, Héctor Garcia, Anthony Gutierrez, Jin Hu, Anoushe Jamshidi, Daya Khudia, Dave Meisner, Jarrod Roy, Korey Sewell, and Ken Zick. I think it's also important to acknowledge everyone who has volunteered to help out with the ACAL reading group.

Andrew DeOrio deserves special thanks for putting up with me as a labmate for five years and as a roommate for three. We started at nearly the same time and are finishing at nearly the same time, even if our day-to-day schedules almost never overlap.

My friends in Illinois were also a big help in making it through this process. Paul Mathewson, especially, kept me focused on things besides engineering (and let me crash on his couch during my internship). Dan Bassett and Dan Baker both kept me grounded in reality in their own ways.

Finally, my family is the biggest reason that I've been able to do this work. Without their constant support throughout childhood, college, and graduate school, I would've stopped a hundred times along the way. My parents and grandparents have always encouraged me in any decision I've made. I couldn't ask for anything more.

Table of Contents

Dedication
Acknowledgments
List of Tables
List of Figures
Abstract

Chapter 1  Introduction
  1.1 Complexity Causes Software Errors
  1.2 Hardware Plays a Role in the Problem
  1.3 Attempts to Alleviate the Problem
  1.4 Analysis Yields Better Software
    1.4.1 Static Software Analysis
    1.4.2 Dynamic Software Analysis
  1.5 Contributions of this Work

Chapter 2  Distributed Dynamic Dataflow Analysis
  2.1 Introduction to Sampling
    2.1.1 Sampling for Performance Analysis
    2.1.2 Sampling for Assertion Checking
    2.1.3 Sampling for Concurrency Tests
    2.1.4 Sampling for Dynamic Dataflow Analyses
  2.2 Background
    2.2.1 Dynamic Dataflow Analysis
    2.2.2 Demand-Driven Dataflow Analysis
    2.2.3 The Deficiencies of Code-Based Sampling
  2.3 Effectively Sampling Dataflow Analyses
    2.3.1 Definition of Performance Overhead
    2.3.2 Overhead Control
    2.3.3 Variable Probability of Stopping Analysis
  2.4 Prototype System Implementation
    2.4.1 Enabling Efficient User Interaction
  2.5 Experimental Evaluation
    2.5.1 Benchmarks and Real-World Vulnerabilities
    2.5.2 Controlling Dataflow Analysis Overheads
    2.5.3 Accuracy of Sampling Taint Analysis
    2.5.4 Exploring Fine-Grained Performance Controls
  2.6 Chapter Conclusion

Chapter 3  Testudo: Hardware-Based Dataflow Sampling
  3.1 Limitations of Software Sampling
  3.2 Hardware-Based Dynamic Dataflow Analysis
    3.2.1 Limitations of Hardware DDA
    3.2.2 Contributions of this Chapter
  3.3 Hardware for Dataflow Sampling
    3.3.1 Baseline Support for Dataflow Analysis
    3.3.2 Limiting Overheads with a Sample Cache
    3.3.3 Intra-flow Selection Policy
    3.3.4 Inter-flow Selection Policy
    3.3.5 Testudo Sampling Example
  3.4 An Analytical Model of Dataflow Coverage
    3.4.1 Analytical Model Overview
    3.4.2 Analytical Model Derivation
    3.4.3 On the Relation to the Coupon Collector's Problem
  3.5 Experimental Evaluation
    3.5.1 System Simulation Framework
    3.5.2 Taint Analysis Coverage and Performance
    3.5.3 Worst Case Analytical Bounds
    3.5.4 Beyond Taint Analysis
    3.5.5 Sample Cache Hardware Overheads
  3.6 Chapter Conclusion

Chapter 4  Demand-Driven Data Race Detection
  4.1 Introduction
    4.1.1 Contributions of this Chapter
  4.2 Data Race Detection Background
  4.3 Demand-Driven Data Race Detection
    4.3.1 Unsynchronized Sharing Causes Races
    4.3.2 Performing Race Detection When Sharing Data
  4.4 Monitoring Data Sharing in Hardware
    4.4.1 Cache Events
    4.4.2 Performance Counters
  4.5 Demand-Driven Race Detector Design
  4.6 Experimental Evaluation
    4.6.1 Experimental Setup
    4.6.2 Performance Improvement
    4.6.3 Accuracy of Demand-Driven Data Race Detection
    4.6.4 Races Missed by the Demand-Driven Data Race Detector
    4.6.5 Observing More W→W Data Sharing
    4.6.6 Negating the Limited Cache Size
  4.7 Related Work
    4.7.1 Data Race Detection
    4.7.2 Software Acceleration Methods
    4.7.3 Hardware Race Detection
    4.7.4 Observing Hardware Events
  4.8 Chapter Conclusion

Chapter 5  Hardware Support for Unlimited Watchpoints
  5.1 Introduction
    5.1.1 Contributions of This Chapter
  5.2 Fast Unlimited Watchpoints
    5.2.1 Existing HW Watchpoint Support
    5.2.2 Unlimited Watchpoint Requirements
    5.2.3 Efficient Watchpoint Hardware
  5.3 Watchpoint Applications
    5.3.1 Dynamic Dataflow Analysis
    5.3.2 Deterministic Concurrent Execution
    5.3.3 Data Race Detection
  5.4 Experimental Evaluation
    5.4.1 Experimental Setup
    5.4.2 Results for Taint Analysis
    5.4.3 Results for Deterministic Execution
    5.4.4 Results for Data Race Detection
    5.4.5 Experimental Takeaways
  5.5 Related Work
    5.5.1 Memory Protection Systems
    5.5.2 Other Uses for Watchpoints
  5.6 Chapter Conclusion

Chapter 6  Conclusion
  6.1 Thesis Summary
  6.2 Future Research Directions
  6.3 Conclusion

Bibliography

List of Tables

Table 2.1  Overheads from a Selection of Dynamic Analyses

