
Next Steps…

• Have: digital point-to-point links (we've worked on link signaling, reliability, sharing)

• Want: many interconnected points

6.02 Spring 2011, Lecture #18

• multi-hop networks: design criteria
• network topologies
• circuit vs. packet switching
• queues, Little's Law


Multi-hop Networks

[Figure: end points connected to switches by links]

• Switches orchestrate the flow of information through the network; often many logically-independent flows share a single physical link.
• Appropriate sharing of network resources
  – Circuit switching vs. packet switching
• Design criteria
  – Reliability, scalability, performance, cost, security, …

Network Reliability

• Design engineers:
  – Low-MTBF components, failure prediction
  – Easy to identify and fix problems
    • Remote observability and controllability
  – Replace/expand/evolve network incrementally
  – Defend against malicious users
• Redundancy
  – No single point of failure
  – Fail-safe, fail-soft, fail-hard
  – Automated adaptation to component failures
  – Degradation, not failure
• Users:
  – High availability
  – Meaningful feedback on failure (e.g., busy signal)

Network Scalability

• Enable incremental build-out
  – Increase in usage involves incremental costs (both at edges and in interior of network)
  – Address bottlenecks without fundamental changes
• Economies of scale
  – Larger number of users → less cost/user
  – "Lose money on each customer, but make it up in volume"
• Slow growth of scale factors (N = number of users):
  2^N > N^2 > N log(N) > N > log(N) > constant

Network Performance

• Design engineers
  – Utilization
  – Minimize protocol overhead
  – Quality of service (performance tiers)
• Users
  – Throughput (guaranteed minimum)
    • Opportunistic improvements
  – Latency (guaranteed maximum)
    • One-way, round-trip
  – Isochrony?
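A quick numeric check of the growth-rate ordering above; a minimal Python sketch, with sample sizes chosen arbitrarily:

```python
import math

# Scale factors from the slide, evaluated at a few values of N.
factors = [
    ("2^N",      lambda n: 2.0 ** n),
    ("N^2",      lambda n: float(n * n)),
    ("N log N",  lambda n: n * math.log2(n)),
    ("N",        lambda n: float(n)),
    ("log N",    lambda n: math.log2(n)),
    ("constant", lambda n: 1.0),
]

for n in (16, 64):
    print(f"N = {n}:")
    for name, f in factors:
        print(f"  {name:<8} {f(n):.3g}")
```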


Network Costs

• NRE (non-recurring expenses, i.e., one-time costs)
• Basic infrastructure
• Per connection
• Per message transported
• Economies of scale, amortization

Dealing with System Complexity

• Manage complexity with abstraction layers
  – Easier to design if details have been abstracted away
  – Desired behavior implemented using only component behaviors (not component implementation details), but check that correct behavior actually happened!
  – Change implementation without changing behavior
• Communication network layers:

  TCP/IP:  Application, Transport, Network, Link
  ISO/OSI: Application, Presentation, Session, Transport, Network, Link, Physical
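To make the layering concrete, here is a toy sketch (not course code): each layer treats the layer above's output as an opaque payload and prepends its own header, so a layer's implementation can change without changing its behavior. Header formats here are invented for illustration.

```python
# Each layer wraps the payload from the layer above with its own header.
def app_layer(message: bytes) -> bytes:
    return b"APP|" + message

def transport_layer(payload: bytes, port: int) -> bytes:
    return f"TCP|port={port}|".encode() + payload

def network_layer(payload: bytes, dst: str) -> bytes:
    return f"IP|dst={dst}|".encode() + payload

def link_layer(payload: bytes) -> bytes:
    return b"ETH|" + payload + b"|CRC"

frame = link_layer(network_layer(transport_layer(app_layer(b"hi"), 80), "10.0.0.7"))
print(frame)   # b'ETH|IP|dst=10.0.0.7|TCP|port=80|APP|hi|CRC'
```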

Abstraction Layers

Application Layer: HTTP, Mail, FTP, Telnet, RPC, DNS, NFS

Transport Layer (e.g., TCP or UDP): using packets to create the illusion of a continuous stream of data. In-order, connection-based vs. connectionless.

Network Layer (e.g., IP): addressing and routing fixed-length datagrams through a multi-hop network, breaking up larger packets.

Link Layer: channel access, how bits are transmitted on the channel, framing of data, channel-specific addressing, channel coding.

Network Topologies

Networks are built from two types of channels:
• Point-to-point channels
  – Simplex: one-way communication
  – Half-duplex: bidirectional, but one-way at a time
  – Full-duplex: simultaneous bidirectional communication
  – Symmetric and asymmetric bandwidths
• Multiple-access channels (e.g., Ethernet)
  – Sharing mechanism:
    • Regimented: fixed allocations in time or frequency
    • Ad-hoc: contend for use, e.g., collision detect/back-off (see the sketch below)
• Local-area (LANs) vs. wide-area (WANs)
  – Appropriate technologies? topologies?
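A toy model of the ad-hoc contention idea: sense the channel, and after each failed attempt wait a random number of slot times drawn from a doubling window. This is a sketch of the general back-off scheme, not Ethernet's exact algorithm.

```python
import random

def send_with_backoff(channel_busy, max_attempts=10):
    for attempt in range(max_attempts):
        if not channel_busy():             # channel idle: transmit now
            return True
        window = 2 ** attempt              # window doubles each failure
        wait_slots = random.randrange(window + 1)
        # ... wait `wait_slots` slot times before trying again ...
    return False                           # give up after max_attempts

# Example: contend for a channel that is busy 70% of the time.
print(send_with_backoff(lambda: random.random() < 0.7))
```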


Fully-Connected Graph

Asymptotic analysis: each point-to-point link requires 1 unit of hardware; each communication takes 1 unit of time.

• Throughput: O(N^2)
• Latency: O(1)*
• Cost: O(N^2) overall, O(N) per node
• No single points of failure, many alternate paths
• No blocking/congestion
• Expensive, doesn't easily scale to large regions

*But in the real world, hardware lives in 3-space, so N nodes take at least O(N) space, which means that communication time grows as O(N^(1/3)).

Star

Key idea: share links to avoid O(N^2) costs.

• Throughput: O(N)
• Latency: O(1)
• Cost: O(N) for center node, but only O(1) for leaf node
• Single point of failure!
• Congested link affects all communications to that node
• Inexpensive, long links possible

[Figure: other stars…]
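The link counts behind the O(.) cost claims above can be checked directly; the helper names here are just illustrative:

```python
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2     # every pair gets its own point-to-point link

def star_links(n: int) -> int:
    return n - 1                # one link from each leaf to the shared center

for n in (4, 16, 64):
    print(f"N={n:>2}: mesh {full_mesh_links(n):>4} links, star {star_links(n):>2} links")
# e.g., N=64: 2016 links for the full mesh vs. 63 for the star
```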

Example Network

[Figure: an example network topology]

Wide Area Networks

The sketch illustrates the pioneering research of Paul Baran in the 1960s, who envisioned a communications network that would survive a major enemy attack. It shows the three different network topologies (centralized, decentralized, and distributed) described in his RAND Memorandum, "On Distributed Communications: 1. Introduction to Distributed Communications Network" (August 1964). The distributed network structure was judged to offer the best survivability.

Source: http://www.cybergeography.org/atlas/historical.html

Modern Networks: LANs + WANs

[Figure: a wide-area network (WAN) connecting local-area networks (LANs)]

Sharing the internetwork links

Consider a single node in our wide-area network: we have many (low-bandwidth) application-level connections that need to be mapped onto a small number of (high-bandwidth) channels. Some originate/terminate at the current node; some are just passing through on their way to another node.

How should we share the link between all the connections?

• Circuit switching (isochronous)
• Packet switching (asynchronous)

Circuit Switching

• First establish a circuit between end points
  – E.g., done when you dial a phone number
  – Message propagates from caller toward callee, establishing some state in each switch
• Then, ends send data ("talk") to each other
• After call, tear down (close) circuit
  – Remove state

[Figure: caller, switch, callee; (1) establish, (2) communicate DATA, (3) tear down]

Multiplexing/Demultiplexing

One sharing technique: time-division multiplexing (TDM)
• Time divided into frames, and frames divided into slots
  – Number of slots = number of concurrent conversations
• Relative slot position inside a frame determines which conversation the data belongs to
  – E.g., slot 0 belongs to the red conversation
  – Mapping established during setup, removed at tear down
• Forwarding step at switch: consult table

[Figure: frames divided into slots 0 1 2 3 4 5 | 0 1 2 3 4 5]
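The "consult table" forwarding step might look like this minimal sketch; the link names and table entries are made-up assumptions. In a real switch the table is installed at circuit setup and removed at tear-down.

```python
# Maps each incoming (link, slot) to an outgoing (link, slot).
forwarding_table = {
    ("in0", 0): ("out1", 3),   # e.g., the red conversation: slot 0 -> slot 3
    ("in0", 1): ("out2", 0),
}

def forward(in_link: str, in_slot: int, data: bytes):
    out_link, out_slot = forwarding_table[(in_link, in_slot)]
    return out_link, out_slot, data   # data goes out in the mapped slot

print(forward("in0", 0, b"voice sample"))   # ('out1', 3, b'voice sample')
```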


Packet Switching

• Data is sent in packets (header contains control info, e.g., source and destination addresses)
• Per-packet routing
• At each node the entire packet is received, stored, and then forwarded (store-and-forward networks)
• No capacity is allocated

[Figure: Host 1, Node 1, Node 2, Host 2; each packet = header + data; timeline showing transmission time, propagation delay, and processing delay for Packets 1-3]

TDM Shares Link Equally, But Has Limitations

• Used in the telephone network
• Suppose link capacity is C bits/sec
• Each communication requires R bits/sec
• Number of frames in one "epoch" (one frame per communication) = C/R
• Maximum number of concurrent communications is C/R
• What happens if we have more than C/R communications?
• What happens if a communication sends less/more than R bits/sec?
→ Design is unsuitable when traffic arrives in bursts

[Figure: time-slots; frames = 0 1 2 3 4 5 | 0 1 2 3 4 5]
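Plugging in hypothetical numbers for C and R (values chosen only for illustration, roughly a T1-like link carrying 64 kb/s voice-grade conversations):

```python
C = 1_544_000                  # link capacity, bits/sec
R = 64_000                     # rate each communication needs, bits/sec

print(C // R)                  # 24: maximum concurrent communications

# A bursty source that needs R only 10% of the time still holds its slot
# the whole time, so 90% of the reserved capacity is wasted:
duty_cycle = 0.10
print(f"wasted: {1 - duty_cycle:.0%}")
```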

Packet Switching: Multiplexing/Demultiplexing

• Router has a routing table that contains information about which link to use to reach a destination
• For each link, packets are organized using a queue
  – If queue is full, packets will be dropped
• Demultiplex using information in packet header
  – Header has destination

[Figure: router with one queue per outgoing link]

"Best Efforts" Delivery: No Guarantees!

• Each packet is individually routed
  – May arrive at final destination in any order
• No time guarantee for delivery
  – Delays through the network vary packet-to-packet
• No guarantee of delivery at all!
  – Packets get dropped (due to corruption or congestion)
  – Use acknowledgement/retransmission protocol to recover
    • How to determine when to retransmit? Timeout?
    • If a packet is retransmitted too soon → duplicate

Sounds like the US Mail!
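A toy model of the routing-table-plus-queues structure described above; the destinations, link names, field names, and queue capacity are all made-up assumptions:

```python
from collections import deque

ROUTING_TABLE = {"A": "link0", "B": "link1"}    # destination -> outgoing link
QUEUE_CAPACITY = 8
queues = {"link0": deque(), "link1": deque()}   # one queue per link

def enqueue_packet(packet: dict) -> bool:
    """Demultiplex on the header's destination; drop if the queue is full."""
    out_link = ROUTING_TABLE[packet["dst"]]     # consult routing table
    q = queues[out_link]
    if len(q) >= QUEUE_CAPACITY:
        return False                            # queue full: packet dropped
    q.append(packet)
    return True

print(enqueue_packet({"dst": "B", "payload": b"..."}))   # True
```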


Comparison of Two Techniques

  Circuit switching                        Packet switching
  Guaranteed capacity                      No guarantees (best effort)
  Capacity is wasted if data is bursty     More efficient
  Establishes a path before sending data   Sends data immediately
  All data in a flow follows one path      Different packets might follow different paths
  No reordering; constant delay;           Packets may be reordered, delayed,
  no dropped packets                       or dropped

We'll Explore Packet Switching

• Commonly used for data networks
  – Data traffic is "bursty"; fixed bandwidth allocation isn't best
  – Makes most efficient use of the communications link
• Routing: choose paths through the network
  – "Best" route changes dynamically
  – To preserve scalability, prefer distributed, decentralized management of routing choices
• Transport: deal with "best efforts" deficiencies via a higher-level acknowledgement/retransmission protocol (sketched below)
  – Greatly simplifies engineering at the packet level
  – Example of the end-to-end argument in engineering
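The acknowledgement/retransmission idea can be sketched as stop-and-wait with a timeout. Here `unreliable_send` and `recv_ack` are assumed stand-ins for a best-effort network, not real 6.02 or library APIs:

```python
import time

def reliable_send(unreliable_send, recv_ack, packet, seq, timeout=0.5):
    while True:
        unreliable_send(seq, packet)            # may be dropped or delayed
        ack = recv_ack(time.time() + timeout)   # returns None on timeout
        if ack == seq:
            return                              # delivered (at least once)
        # Timeout or wrong ack: retransmit. If the timeout was too short,
        # the receiver sees a duplicate; sequence numbers let it discard
        # the copy, which is exactly the duplicate issue noted above.
```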

Queues are Essential

• Queues manage packets between arrival and departure
• They are a "necessary evil"
  – Needed to absorb bursts
  – But they add delay by making packets wait until the link is available
• So they shouldn't be too big

Little's Law

[Figure: staircase plot of n(t) = number of packets in the queue at time t, from 0 to T]

• P packets are forwarded in time T (assume T large)
• Rate: λ = P/T
• Let A = area under the n(t) curve from 0 to T
• Mean number of packets in queue: N = A/T
• A is the aggregate delay, weighted by each packet's time in the queue, so the mean delay per packet is D = A/P
• Therefore N = A/T = (P/T)(A/P) = λD ← Little's Law
• For a given link rate, increasing queue size increases delay
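A small simulation that traces the derivation above for a single FIFO queue: measure P, T, and A directly, then check N = A/T against λD = (P/T)(A/P). The arrival and service distributions are arbitrary illustrative choices.

```python
import random

random.seed(2)
P = 100_000
t = free_at = 0.0
A = 0.0                                   # aggregate time spent in system
for _ in range(P):
    t += random.expovariate(1.0)          # arrivals: on average 1 pkt/sec
    free_at = max(free_at, t) + random.expovariate(1.25)  # service times
    A += free_at - t                      # this packet's delay
T = free_at                               # total observation time

lam = P / T            # rate = P/T
N = A / T              # mean number of packets in system (queue + link)
D = A / P              # mean delay per packet
print(f"N = {N:.2f}   lambda*D = {lam * D:.2f}")   # equal: Little's Law
```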


Multi-Hop Packet Networks

How do we deliver data between any two computers? (Routing)
How can we communicate information reliably? (Transport)

[Comic: Baby Blues (Kirkman/Scott)]
