
25. Distributed state/clock synchronization
CS-350: Concurrency & Synchronization: Distributed Systems
Azer Bestavros, Computer Science Department, Boston University

Flynn Classification: SISD
[Figure: a single processor executes one instruction stream over one data stream.]

Flynn Classification: SIMD
[Figure: one instruction stream drives multiple processing elements, each operating on its own data stream D0 ... Dn.]

Flynn Classification: MIMD
[Figure: multiple processors, each with its own instruction stream and its own data stream.]

Memory Topology: Shared
[Figure: several processors attached to a single shared memory.]

Memory Topology: Distributed
[Figure: processor-memory pairs connected by a network; no memory is shared.]

Memory Topology: Hybrid
[Figure: multiprocessor nodes, each with its own shared memory, connected by a network.]

Parallel vs. Distributed
- Parallel computing ~ a shared-memory model
- Distributed computing ~ no shared memory

Inevitability of distributed computing
- Moore's Law: "The density of transistors on a chip doubles every 18 months, for the same cost" (1965)
- Amendment: "... CPU clock speed cannot keep up" (2006)
- Constants as exponents matter! The processor-memory performance gap increases by ~50% per year...
- Shared memory cannot go beyond a fixed number of cores...
- Distributed computing on a chip?! ... or data center

What does scale buy us?
- Rendering multiple frames of high-quality animation
- Simulating several hundred or thousand characters
- Indexing the web (Google)
- Simulating an Internet-sized network (PlanetLab)
- Speeding up content delivery (Akamai)
- Computing clouds as a utility (Amazon)
(Image credits: © DreamWorks Animation; Happy Feet © Kingdom Feature Productions; Lord of the Rings © New Line Cinema)

But, how?
It all boils down to...
- Divide-and-conquer the problem
- Throw more hardware at the problem
- Develop abstractions to hide the mess below
Simple to understand... a lifetime to master...

Example: Pipelining
- Divide the problem into stages
- Each stage consumes the output of the prior stage
- Each stage produces the input to the next stage
- Make stages balanced so they operate efficiently
- ... and make sure to synchronize them

Example: Divide & Conquer
[Figure: "Work" -> Partition -> work units w1, w2, w3 -> one "worker" per unit -> partial results r1, r2, r3 -> Combine -> "Result".]

Choices, Choices, Choices
- Assignment of workers to resources (scheduling!)
  - Assign to different threads in the same core
  - Assign to different cores in the same CPU
  - Assign to different CPUs in a multi-processor system
  - Assign to different machines in a distributed system
- Choice of infrastructure
  - Commodity vs. "exotic" hardware
  - Number of machines, CPUs, cores per CPU
  - Bandwidth of memory vs. disk vs. network

Challenges
- But this model is too simple!
  - How do we assign work units to worker threads? What if we have more work units than threads?
  - How do we aggregate the results at the end?
  - How do we know all the workers have finished?
  - What if the work cannot be divided into separate tasks?
- For each of these problems, multiple threads must communicate and/or access shared resources.
- These are all distributed synchronization problems!
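Even before distribution enters the picture, these questions have a concrete shape. The following is a minimal worker-pool sketch in Python; the square() task, the choice of three workers, and the summing "combine" step are illustrative placeholders rather than anything prescribed by the lecture.

    # Minimal divide-and-conquer sketch (illustrative): partition the work,
    # hand the units to a pool of worker threads, then combine the results.
    from concurrent.futures import ThreadPoolExecutor

    def square(unit):
        # Placeholder "work" performed on a single unit.
        return unit * unit

    def divide_and_conquer(work_units, n_workers=3):
        # The pool assigns work units to worker threads (the scheduling choice),
        # and map() returns only after every worker has finished its units.
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            partial_results = list(pool.map(square, work_units))
        return sum(partial_results)   # the "Combine" step producing the "Result"

    print(divide_and_conquer(range(10)))   # prints 285

Once the workers live on different machines instead of in one address space, every step of this sketch (assignment, aggregation, detecting completion) becomes a message-passing problem, which is exactly what the following slides take up.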
Distributed Synchronization
How is it different from what we have seen before?
- Entities are autonomous; nothing is really shared
  - No shared software (heterogeneous platforms, versions, ...)
  - No shared resources (memory, file system, ...)
  - No shared notion of time (no global clock)
- Communication is via messaging
  - Asynchronous communication
  - Wires have state (e.g., packets in transit)
- Unreliability is inherent
  - Failures are to be expected
  - Messages may be lost
  - Malicious and adversarial ("Byzantine") failures are possible

Example: Distributed Snapshot
- Problem: Can you record a consistent state of a distributed system?
- Motivation:
  - We need an accurate accounting of inventory managed in a distributed fashion.
  - We need accurate account balances for a distributed banking application.

Global State Consistency
[Figures: Branch A records "Account has $100 @ 3:00pm" while a $100 transfer involving Branch B is in flight; depending on when each branch's state is recorded, the other recorded balance is $100, $0, or $200, so an inconsistent cut can miss or double-count the in-flight transfer.]

Distributed Global Snapshot
Assumptions (can be made realistic):
- Communication is reliable; no messages are lost.
- Message delivery delays can be arbitrary, but not infinite.
- Messages are delivered in order.
- There is no globally acceptable/consistent clock.
- All nodes must execute the same protocol.
- Any node can start the protocol at will.
- No node failures.
How can we solve it?

Distributed Global Snapshot
- Key idea: Use a "marker" to flush link state into the receiver's state.
[Figure: a node S sends markers on all of its outgoing links.]

Distributed Global Snapshot
- Upon receipt of the 1st marker, record the local state and send markers on all outgoing links.
- The state recorded at a node is the state when the first marker is received, plus the changes resulting from messages received on link i until a marker is received on that link i.
- Stop when markers have been received on all incoming links.

Distributed Global Snapshot
Questions:
- How long would it take?
- Who starts the process?
- How do I collect the (distributed) answer?
- Can I take multiple snapshots concurrently?

Distributed Global Snapshot
Answers:
- The snapshot can be initiated by multiple nodes at the same time.
- It terminates in finite time (assuming messages are delivered in finite time), proportional to the diameter of the graph.
- Collection of the global state can be done through gossiping or by requiring the initiating node to act as a "leader".
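As a companion to the marker protocol above, here is a minimal single-node sketch in Python. The Node class, the link names, the send callback, and the "MARKER" string are illustrative choices, not code from the lecture; reliable, ordered delivery is assumed to come from the environment, matching the slide's assumptions.

    # Sketch of the marker-based snapshot rules (illustrative): record local state
    # on the first marker, then record each incoming link until its marker arrives.
    class Node:
        def __init__(self, name, incoming, outgoing, send):
            self.name = name
            self.incoming = set(incoming)     # names of incoming links
            self.outgoing = list(outgoing)    # names of outgoing links
            self.send = send                  # send(link, message) callback
            self.recorded_state = None        # local state captured at the 1st marker
            self.channel_state = {}           # in-flight messages recorded per link
            self.awaiting_marker = set()      # incoming links still being recorded

        def start_snapshot(self, local_state):
            # Any node may initiate: record its own state and send markers out.
            self._record(local_state)

        def on_message(self, link, msg, local_state):
            if msg == "MARKER":
                if self.recorded_state is None:       # first marker seen anywhere
                    self._record(local_state)
                self.awaiting_marker.discard(link)    # stop recording this link
                if not self.awaiting_marker:          # markers seen on all incoming links
                    print(self.name, "done:", self.recorded_state, self.channel_state)
            elif self.recorded_state is not None and link in self.awaiting_marker:
                # A message still in flight when the snapshot started: it belongs
                # to the recorded state of the link it arrived on.
                self.channel_state[link].append(msg)

        def _record(self, local_state):
            self.recorded_state = local_state
            self.awaiting_marker = set(self.incoming)
            self.channel_state = {link: [] for link in self.incoming}
            for link in self.outgoing:                # flush link state downstream
                self.send(link, "MARKER")

Gathering every node's (recorded_state, channel_state) pair, by gossiping or via the initiating node acting as a "leader", then yields the consistent global snapshot.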
Distributed Synchronization
- In global snapshot synchronization we assumed that there are no "global" clocks.
- Many synchronization problems can be addressed if we can synchronize clocks in a distributed system.
- Let's discuss "clock synchronization"...

Why clock synchronization matters
[Figure]

Time keepers in history
- Astronomers: Look at the skies... Oops... Earth's rotation is slowing down.
- Physicists: Look at atoms... 9,192,631,770 cesium transitions == 1 solar second.
- Computer Scientists: Look at CPU clock cycles... 1 second is an eternity for a 4 GHz CPU.

Clock Synchronization Algorithms
- Centralized algorithms
  - Cristian's Algorithm (1989)
  - Berkeley Algorithm (1989)
- Distributed algorithms
  - Averaging algorithms (e.g., NTP)
  - Multiple external time sources

The Berkeley Algorithm (circa 1980s)
a) The time daemon asks all the other machines for their clock values.
b) The machines answer, and the time daemon computes the average.
c) The time daemon tells everyone how to adjust their clock.

Let's Go Back to Basics...
- Why do we need to agree on time? Could we live with something less constraining?
- For most distributed algorithms/protocols, it is the consistency of the clocks that matters.
- Clocks do not have to be correct in the absolute sense, but only in the logical sense...
- What's wrong with the above picture?

Event (Message) Ordering...
- Lamport: "what is important is that all processes agree on the order in which events occur."
- Say hello to "logical clocks" (or Lamport clocks).

Lamport Timestamps [1978]
The "Happens Before" graph: a -> b means that "a happens before b". It captures a weaker notion of causality.
"Happens before" is observable in two situations:
1. If a and b are events in the same process, and a occurs before b, then a -> b is true.
2. If a is the event of a message being sent by one process, and b is the event of that message being received by another process, then a -> b is also true.
Notes:
- "Happens before" is a transitive relation.
- Two events are "concurrent" if neither happens before the other...

Finally, a definition of concurrency
- "Concurrency is absence of evidence to the contrary."

Example
[Space-time diagram: process P1 has events a then d; P2 has events c then e; P3 has event b; a message sent at a is received at c, and a message sent at b is received at e.]
- Identify the happens-before relations, by each rule of the definition.
- Identify the concurrent events.
- Is happens-before synonymous with causality?

Example (solution)
- Imposed by process order: a < d ; c < e
- Imposed by messaging: a < c ; b < e
- Imposed by transitivity: a < e
- Concurrent: a || b ; b || c ; b || d ; d || c ; d || e

Maintaining Lamport Clocks
- Requirement: If a and b are events of the same process and a occurs before b, then a -> b is true.
  Enforcement: Each event is given a timestamp (= the local clock at the time of the event), and a process increments its logical clock counter for every event it encounters.
- Requirement: If a is the event of a message transmittal and b is the event of the same message's receipt, then a -> b is also true.
  Enforcement: The sender piggybacks its timestamp on the message; on receipt, the receiver advances its clock to the maximum of its own clock and the received timestamp before timestamping the receive event.

Maintaining Lamport Clocks
- Requirement: Clocks are monotonically increasing; they never, ever go back in time...
  Enforcement: No need to enforce! The only operation on clocks will be to increment them (OK, they will have to wrap around at some point...).
- Desirable: Never, ever have two events with the same timestamp.
  Enforcement: Tag the local clock with the (unique) process ID. We have seen this before! Recall where?
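The enforcement rules above fit in a few lines. The LamportClock class below is one possible rendering, not code from the lecture; it uses a (counter, process ID) pair as the tie-broken timestamp suggested by the last slide.

    # Sketch of a Lamport logical clock following the rules above (illustrative).
    class LamportClock:
        def __init__(self, pid):
            self.pid = pid        # unique process ID used to break timestamp ties
            self.counter = 0

        def local_event(self):
            # Rule: increment the clock for every event the process encounters.
            self.counter += 1
            return (self.counter, self.pid)    # totally ordered timestamp

        def send_event(self):
            # A send is an event; its timestamp is piggybacked on the message.
            return self.local_event()

        def receive_event(self, msg_timestamp):
            # Rule: a receipt happens after the matching transmittal, so advance
            # the clock to max(local, received) before timestamping the receipt.
            self.counter = max(self.counter, msg_timestamp[0])
            return self.local_event()

    # Two processes exchanging one message:
    p1, p2 = LamportClock(1), LamportClock(2)
    t_send = p1.send_event()             # (1, 1)
    t_recv = p2.receive_event(t_send)    # (2, 2), ordered after the send
    print(t_send, t_recv)

Note that the clock only ever moves forward, so the monotonicity requirement needs no extra enforcement, exactly as the slide observes.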