
Memory Consistency Models for High Performance Distributed Computing

by

Victor Luchangco

S.M., Electrical Engineering and Computer Science, Massachusetts Institute of Technology (1995)
S.B., Computer Science and Engineering, Massachusetts Institute of Technology (1995)
S.B., Mathematics, Massachusetts Institute of Technology (1992)

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Science in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, September 2001.

© Victor Luchangco, MMI. All rights reserved.

The author hereby grants to MIT permission to reproduce and distribute publicly paper and electronic copies of this thesis document in whole or in part, and to grant others the right to do so.

Author: Department of Electrical Engineering and Computer Science, September 7, 2001
Certified by: Nancy A. Lynch, Professor of Computer Science and Engineering, Thesis Supervisor
Accepted by: Arthur C. Smith, Chairman, Departmental Committee on Graduate Students

Abstract

This thesis develops a mathematical framework for specifying the consistency guarantees of high performance distributed shared memory multiprocessors. This framework is based on computations, which specify the operations requested and constraints on how these operations may be applied; we call the framework computation-centric. This framework is expressive enough to specify high level synchronization mechanisms such as locks.

We use the computation-centric framework to specify and compare several memory models, to characterize programming disciplines, and to prove that weakly consistent systems provide strong consistency guarantees when certain programming disciplines are obeyed. Specifically, we define computation-centric versions of several memory models from the literature, including sequential consistency, weak ordering, and release consistency, and we give a computation-centric characterization of data-race-free programs. We prove that when running data-race-free programs, weakly ordered systems appear sequentially consistent.

We also define memory models that have higher level guarantees such as locks and transactions. The strongly consistent versions of these models make guarantees that are stronger than sequential consistency, and thus are easier for programmers to use. We introduce a new model called weak sequential locking, which has very weak guarantees, and prove that it guarantees sequential consistency and mutually exclusive locking for programs that protect memory accesses using locks. We also show that by using two-phase locking, programmers can implement serializable transactions on any memory system with weak sequential locking.

The framework is intended primarily to help programmers of such systems reason about their programs. It supports a high level of abstraction, insulating programmers from system details and enhancing the portability of their programs. The framework is also useful for implementors of such systems, in determining what guarantees their implementations provide and in assessing the advantages of providing one memory model rather than another.
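The abstract's central guarantee, that weakly consistent systems appear sequentially consistent to data-race-free programs, mirrors the discipline adopted by mainstream language memory models such as Java's. As a rough illustration (a minimal sketch using Java's built-in locks, not code from the thesis; the class name and iteration counts are invented for the example), the following program protects every access to a shared location with a single lock, making it data-race-free, so under a DRF guarantee all of its executions appear sequentially consistent:

    // A minimal sketch (not from the thesis): two threads increment a shared
    // counter, and every access to the counter is protected by the same lock.
    // Because no two conflicting accesses are ever concurrent, the program is
    // data-race-free, and memory models with a DRF guarantee (such as the
    // Java Memory Model) promise sequentially consistent behavior for it.
    public class DrfCounter {   // class name is invented for this example
        private final Object lock = new Object();
        private int count = 0;  // shared location, accessed only while holding `lock`

        void increment() {
            synchronized (lock) {  // acquire: excludes concurrent conflicting accesses
                count++;
            }
        }

        int get() {
            synchronized (lock) {
                return count;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            DrfCounter c = new DrfCounter();
            Runnable work = () -> { for (int i = 0; i < 1000; i++) c.increment(); };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            // Always prints 2000: with the lock discipline, no updates are lost
            // and the execution is indistinguishable from a sequential one.
            System.out.println(c.get());
        }
    }

Deleting the synchronized blocks would introduce a data race: increments could be lost, the final value would no longer be guaranteed to be 2000, and a weakly ordered system would be free to expose reorderings that no sequential execution could produce.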
Thesis Supervisor: Nancy A. Lynch
Title: Professor of Computer Science and Engineering

Contents

1 Introduction
  1.1 Background
    1.1.1 Concurrent Programming from the 70s and 80s
    1.1.2 Weak Consistency
    1.1.3 Methods for Modeling Memory Consistency
  1.2 Research Goals
  1.3 Computations
  1.4 Contributions and Thesis Organization
2 Serial Semantics of Memory
  2.1 Preliminary Mathematics
  2.2 Serial Data Types
  2.3 Operator Sequences
  2.4 Reachable States and Properties of Data Type Operators
  2.5 Return Value Functions and Validity
  2.6 Equivalences for Operator Sequences
  2.7 Proving Operator Sequences Equivalent
  2.8 Data Type Equivalence
  2.9 Locations
  2.10 Data Type Composition
  2.11 Discussion
3 Computations
  3.1 Preliminary Graph Theory
  3.2 Definition
  3.3 What Computations Are Not
  3.4 Some Types of Annotations
  3.5 Getting Computations From Programs
  3.6 Schedules
  3.7 Races and Determinacy
  3.8 Discussion
4 The Computation-Centric Framework
  4.1 Computation-Centric Memory Models
  4.2 Properties of Computation-Centric Models
  4.3 Implementing Memory Models
  4.4 Client Restrictions
  4.5 Computation Transformations
  4.6 Discussion
5 Simple Memories
  5.1 Precedence-Based Memory Models
  5.2 Sequential Consistency
  5.3 Eliminating Races
  5.4 Coherent Memory
  5.5 Synchronization
6 Processor-Centric Memories
  6.1 Characteristics of Processor-Centric Models
  6.2 Processor-Centrism in the Computation-Centric Framework
  6.3 Write Serialization and Coherence
  6.4 Comparisons Between Processor-Centric Models
  6.5 A Possible Pitfall with Reordering
  6.6 Interpreting Reordering in Processor-Centric Models
  6.7 Programmer-Centric Models
  6.8 Discussion
7 Locks
  7.1 Preliminary Graph Theory: Regions, Guards and Sections
  7.2 Computations with Locks
  7.3 Well-Formedness
  7.4 Respecting Locking
  7.5 Memories with Locks
  7.6 Data Races Under Locking
  7.7 Locks vs. Direct Synchronization
  7.8 Locks and Locations
  7.9 Shared/Exclusive Locks
  7.10 Discussion
8 Transactions
  8.1 Computations with Transactions
  8.2 Sequentially Consistent Transactions
  8.3 Reserialization
  8.4 Two-Phase Locking
  8.5 Program Reduction
  8.6 Relaxed Transactional Memory Models
  8.7 Integrity
  8.8 Races within Transactions and Transaction Races
  8.9 Discussion
9 Dynamic Memory Models
  9.1 The Input/Output Automaton Model
    9.1.1 Formal Definitions
    9.1.2 State Variables and Precondition-Effect Statements
  9.2 The Dynamic Interface
  9.3 Modeling Memory Systems in the Dynamic Framework
  9.4 Simple Dynamic Memory Models
  9.5 Client Restrictions in the Dynamic Framework
  9.6 Relating the Two Frameworks
  9.7 From Computation-Centric to Dynamic Models
  9.8 From Computation-Centric to Dynamic Models, Part II
10 Conclusions and Future Work
  10.1 Implications on Memory System Design and Use
  10.2 Future Work

Acknowledgments

It's hard to believe that I'm actually done. There are so many people to thank and not nearly enough time or space to do so. But it's easy to know where to start: Nancy Lynch has been an incredible advisor, both for her example as a researcher and clear thinker and for her patience and encouragement and faith in my ability.
I had never thought of going to graduate school; if not for Nancy, I doubt I ever would have. When I first joined Nancy's research group, she gave me several papers on consensus and patiently answered my questions over the course of several weeks. At the end of the summer, she had me give a talk to the group, saying "You're an expert on this topic." I was a sophomore! Since then, her support has never flagged, though I rarely deserved it. Thank you, Nancy.

I owe a large debt to my thesis committee: Arvind, Butler Lampson, Charles Leiserson and, of course, Nancy. They identified many important points that were missing from my thesis or obscured by my writing, and they cut through a lot of the fog in my thinking. Despite the warnings I had gotten about the impossibility of finding a three-hour time slot mutually agreeable to four faculty members, scheduling was quite easy, even though Arvind and Charles were on leave working full-time at start-ups. I appreciate their generosity with their time. Surprisingly, I had a great time at my thesis defense. Butler definitely had all the most memorable quotes.

Charles Leiserson has also been an example and role model for me, especially for his gusto and clarity in teaching. It was from Charles that I first learned that writing a thesis was also hard for other people, and that courage and perseverance were as important as intelligence.

The perspective of Xiaowei Shen and Arvind as computer architects was invaluable to me in improving the accessibility of this thesis to that audience. They also helped me understand the implications of relaxing various memory consistency guarantees. Larry Rudolph and Matteo Frigo have also helped me understand real computer systems better.

The Theory of Distributed Systems group (TDS) has been my research "home" for several years, and I would like to thank the group, especially Paul Attie, Idit Keidar, Roger Khazan, Carl Livadas, Alex Shvartsman and.