Principles of Distributed Computing
Roger Wattenhofer
[email protected]
Spring 2016


Contents

 1  Vertex Coloring
    1.1  Problem & Model
    1.2  Coloring Trees

 2  Tree Algorithms
    2.1  Broadcast
    2.2  Convergecast
    2.3  BFS Tree Construction
    2.4  MST Construction

 3  Leader Election
    3.1  Anonymous Leader Election
    3.2  Asynchronous Ring
    3.3  Lower Bounds
    3.4  Synchronous Ring

 4  Distributed Sorting
    4.1  Array & Mesh
    4.2  Sorting Networks
    4.3  Counting Networks

 5  Shared Memory
    5.1  Model
    5.2  Mutual Exclusion
    5.3  Store & Collect
         5.3.1  Problem Definition
         5.3.2  Splitters
         5.3.3  Binary Splitter Tree
         5.3.4  Splitter Matrix

 6  Shared Objects
    6.1  Centralized Solutions
    6.2  Arrow and Friends
    6.3  Ivy and Friends

 7  Maximal Independent Set
    7.1  MIS
    7.2  Original Fast MIS
    7.3  Fast MIS v2
    7.4  Applications

 8  Locality Lower Bounds
    8.1  Model
    8.2  Locality
    8.3  The Neighborhood Graph

 9  Social Networks
    9.1  Small World Networks
    9.2  Propagation Studies

10  Synchronization
    10.1  Basics
    10.2  Synchronizer α
    10.3  Synchronizer β
    10.4  Synchronizer γ
    10.5  Network Partition
    10.6  Clock Synchronization

11  Communication Complexity
    11.1  Diameter & APSP
    11.2  Lower Bound Graphs
    11.3  Communication Complexity
    11.4  Distributed Complexity Theory

12  Wireless Protocols
    12.1  Basics
    12.2  Initialization
          12.2.1  Non-Uniform Initialization
          12.2.2  Uniform Initialization with CD
          12.2.3  Uniform Initialization without CD
    12.3  Leader Election
          12.3.1  With High Probability
          12.3.2  Uniform Leader Election
          12.3.3  Fast Leader Election with CD
          12.3.4  Even Faster Leader Election with CD
          12.3.5  Lower Bound
          12.3.6  Uniform Asynchronous Wakeup without CD
    12.4  Useful Formulas

13  Stabilization
    13.1  Self-Stabilization
    13.2  Advanced Stabilization

14  Labeling Schemes
    14.1  Adjacency
    14.2  Rooted Trees
    14.3  Road Networks

15  Fault-Tolerance & Paxos
    15.1  Client/Server
    15.2  Paxos

16  Consensus
    16.1  Two Friends
    16.2  Consensus
    16.3  Impossibility of Consensus
    16.4  Randomized Consensus
    16.5  Shared Coin

17  Byzantine Agreement
    17.1  Validity
    17.2  How Many Byzantine Nodes?
    17.3  The King Algorithm
    17.4  Lower Bound on Number of Rounds
    17.5  Asynchronous Byzantine Agreement

18  Authenticated Agreement
    18.1  Agreement with Authentication
    18.2  Zyzzyva

19  Quorum Systems
    19.1  Load and Work
    19.2  Grid Quorum Systems
    19.3  Fault Tolerance
    19.4  Byzantine Quorum Systems

20  Eventual Consistency & Bitcoin
    20.1  Consistency, Availability and Partitions
    20.2  Bitcoin
    20.3  Smart Contracts
    20.4  Weak Consistency

21  Distributed Storage
    21.1  Consistent Hashing
    21.2  Hypercubic Networks
    21.3  DHT & Churn

22  Game Theory
    22.1  Introduction
    22.2  Prisoner’s Dilemma
    22.3  Selfish Caching
    22.4  Braess’ Paradox
    22.5  Rock-Paper-Scissors
    22.6  Mechanism Design

23  Dynamic Networks
    23.1  Synchronous Edge-Dynamic Networks
    23.2  Problem Definitions
    23.3  Basic Information Dissemination
    23.4  Small Messages
          23.4.1  k-Verification
          23.4.2  k-Committee Election
    23.5  More Stable Graphs

24  All-to-All Communication

25  Multi-Core Computing
    25.1  Introduction
          25.1.1  The Current State of Concurrent Programming
    25.2  Transactional Memory
    25.3  Contention Management

26  Dominating Set
    26.1  Sequential Greedy Algorithm
    26.2  Distributed Greedy Algorithm

27  Routing
    27.1  Array
    27.2  Mesh
    27.3  Routing in the Mesh with Small Queues
    27.4  Hot-Potato Routing
    27.5  More Models

28  Routing Strikes Back
    28.1  Butterfly
    28.2  Oblivious Routing
    28.3  Offline Routing


Introduction

What is Distributed Computing?

In the last few decades, we have experienced an unprecedented growth in the area of distributed systems and networks. Distributed computing now encompasses many of the activities occurring in today's computer and communications world. Indeed, distributed computing appears in quite diverse application areas: The Internet, wireless communication, cloud or parallel computing, multi-core systems, mobile networks, but also an ant colony, a brain, or even the human society can be modeled as distributed systems.

These applications have in common that many processors or entities (often ...

... them when we use them. Towards the end of the course a general picture should emerge, hopefully!

Course Overview

This course introduces the basic principles of distributed computing, highlighting common themes and techniques. In particular, we study some of the fundamental issues underlying the design of distributed systems:

• Communication: Communication does not come for free; often communication cost dominates the cost of local processing or storage. Sometimes we even assume that everything but communication is free.

• Coordination: How can you coordinate a distributed system so that it performs some task efficiently? How much overhead is inevitable?

• Fault-tolerance: A major advantage of a distributed system is that even in the presence of failures the system as a whole may survive.

• Locality: Networks keep growing. Luckily, global information is not always needed to solve a task, often it is sufficient.
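
To make these themes a little more concrete, the following is a minimal sketch, not taken from the lecture notes, of the synchronous message-passing round model that recurs throughout the course: in every round each node may send a message to each neighbor, receive its neighbors' messages, and update its local state. The task (flooding the largest identifier), the function run_rounds, and the 6-node ring are illustrative assumptions only; counting messages and rounds mirrors the communication and locality themes above.

# Minimal sketch of a synchronous message-passing round (illustrative only).
from typing import Dict, List

def run_rounds(neighbors: Dict[int, List[int]], rounds: int) -> int:
    """Flood the largest known identifier for `rounds` rounds; return the message count."""
    known_max = {v: v for v in neighbors}        # local state: largest id seen so far
    messages_sent = 0
    for _ in range(rounds):
        # Send phase: every node sends its current value to every neighbor.
        inbox: Dict[int, List[int]] = {v: [] for v in neighbors}
        for v, nbrs in neighbors.items():
            for u in nbrs:
                inbox[u].append(known_max[v])
                messages_sent += 1
        # Receive/compute phase: every node updates its state from received messages.
        for v in neighbors:
            known_max[v] = max([known_max[v]] + inbox[v])
    return messages_sent

if __name__ == "__main__":
    # A 6-node ring: 2 messages per node per round, so 3 rounds cost 6 * 2 * 3 = 36 messages.
    ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print("messages after 3 rounds on a 6-ring:", run_rounds(ring, 3))

After r rounds a node has only heard from nodes within distance r, so what it can compute is inherently local information; how much can be achieved despite this restriction, and at what communication cost, is precisely what the later chapters analyze.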