Back to the Future Part 4: The Internet
Simon S. Lam
Department of Computer Sciences
The University of Texas at Austin
[email protected]

∗ This is an edited transcript of the author's SIGCOMM 2004 keynote speech, given on August 31, 2004. This work was sponsored in part by National Science Foundation grants ANI-0319168 and CNS-0434515 and Texas Advanced Research Program grant 003658-0439-2001.

Slide 1
[Title slide: "Back to the Future Part 4: The Internet", Simon S. Lam, Department of Computer Sciences, The University of Texas at Austin. Slide footer: Sigcomm 2004 Keynote, S. S. Lam]

I thank the SIGCOMM Awards committee for this great honor. I would like to share this honor with my former and current students, and colleagues, who collaborated and coauthored papers with me. Each one of them has contributed to make this award possible. To all my coauthors, I thank you.

It is most gratifying for me to accept this award at a SIGCOMM conference because my relationship with the conference goes back a long way. Over 21 years ago, I organized the very first SIGCOMM conference, held on the campus of the University of Texas at Austin in March 1983. Let me tell you a little story about the first SIGCOMM.

Back in 1982–1983, the SIGCOMM chair was David Wood. The vice chair was Carl Sunshine. One day during the summer of 1982, Carl called me up on the phone. He said, "Simon, SIGCOMM is broke. We have a lot of red ink. How would you like to organize a SIGCOMM conference to make some money for us?" I said, "Okay, I will do it." Then, we talked a bit. Towards the end of our conversation, Carl said, "By the way, Simon, as I said, SIGCOMM is broke. So if you lose any money on the conference, you will have to take care of the loss yourself."

I was much younger then. The great thing about young people is that they have no fear. I don't remember that I ever worried about losing money. I arranged to use the Thompson Conference Center on campus, which charged us only a nominal fee. We had a great lineup of speakers. The pre-conference tutorial on network protocol design was by David Clark, a former SIGCOMM Award winner. The keynote session had three speakers: Vint Cerf, another former SIGCOMM Award winner; John Shoch of Xerox, who was in charge of Ethernet development at that time; and Louis Pouzin, yet another former SIGCOMM Award winner. We had 220 attendees, an excellent attendance for a first-time conference. We ended up making quite a bit of money. End of story.

Slide 2
[Slide: "IP won the networking race". Many competitors in the past: SNA, DECnet, XNS; X.25, ATM, CLNP. Protocol hourglass: FTP, SMTP, HTTP, RTP, DNS over TCP and UDP, over IP, over link layers LINK1, LINK2, ..., LINKn. IP provides end-to-end delivery of datagrams, best-effort service only; IP can use any link-layer technology that delivers datagrams (packets).]

IP won the networking race for data communications. However, as recently as ten years ago, before widespread use of the World Wide Web, it was not clear that IP would be the winner. The original ARPANET in the 1970s, and later the Internet, had many competitors.

In industry, the competitors included SNA, IBM's Systems Network Architecture; DECnet, with the Digital Network Architecture; and XNS, which stands for Xerox Network Systems. SNA was developed around 1975. At that time, IBM was the 500-pound gorilla in the computer industry. IBM already had networking products used by numerous customers. Those early networks had a tree topology, with a mainframe computer connected to data concentrators and terminals. But SNA was designed to have a new architecture with a mesh topology similar to ARPANET's architecture. Back in 1975, very few people would have predicted that the small, experimental ARPANET, rather than SNA, would emerge 25 years later as winner of the networking race. In fact, when SNA was first developed, SNA was the acronym for Single Network Architecture. IBM had very ambitious goals. However, by the time SNA was announced to the public, the name was changed to Systems Network Architecture, possibly due to antitrust concerns.

IP also had strong competitors in the standards world. First, there was X.25 in the 1970s and 1980s. Then there was ATM in the 1990s. Also, there were ISO protocols such as CLNP (Connectionless Network Protocol).

IP is a very simple protocol. It provides end-to-end delivery of datagrams, also called packets. It provides no service guarantee to its users. In turn, IP expects no service guarantee from any link-layer technology it uses. Therefore, any network can connect to IP.

Ten years ago, when Internet applications were primarily email, ftp, and web, IP's simplicity was its greatest strength in fighting off competitors. In the future, however, IP's simplicity is possibly a liability, because the requirements of the Internet's future applications will be more demanding, particularly the requirements of interactive multimedia applications.

Slide 3
[Slide: "IP's underlying model is a network of queues". A revolutionary change from the circuit model (Kleinrock 1961). Unreliable channels, limited buffer capacity; prone to congestion collapse. Each packet is routed independently using its destination IP address; no concept of a flow between source and destination, no flow state in routers. Figure: system throughput versus offered load.]

IP's underlying model is a network of queues. Packets are transmitted from one node to another, leaving one queue and joining the next queue as they travel from source to destination. Each packet travels independently. There is no concept of a flow of packets belonging to the same session between source and destination processes.

I give credit to Len Kleinrock for proposing the network of queues model for data communications as an alternative to the circuit model in telephone networks. Some might argue that Kleinrock did not invent the name "packet switching." But he was definitely the first to propose the network of queues model, in his 1961 Ph.D. dissertation proposal.

In a real network, channels are unreliable. More importantly, buffer capacity for queueing is limited. Therefore, packets may be discarded because of buffer overflow. As a result, a network of queues is prone to congestion collapse, as illustrated in this figure, where system throughput decreases rapidly as offered load becomes large. This is a weakness of IP that we should keep in mind.

Slide 4
[Slide: "Congestion collapse—ALOHA channel". The ALOHA System (Abramson 1970); Poisson process assumption; pure ALOHA throughput S = G e^(-2G) (Abramson); slotted ALOHA throughput S = G e^(-G) (Roberts); ARPANET Satellite System (1972).]

Congestion collapse was a subject of great interest to me when I was a Ph.D. student at UCLA. Congestion collapse of the ALOHA channel was the inspiration of my dissertation research. In the next several slides, I will show you a little bit of my early work in the 1970s. My early work had a lot of influence on my thinking, and may explain why I hold certain opinions later on in this talk.

I went to UCLA as a graduate student in 1969, when the first IMP of the ARPANET was being installed there. The next year, Norm Abramson presented his seminal paper on the ALOHA System at the Fall Joint Computer Conference. Abramson made use of a Poisson process assumption and obtained the well-known formula for the throughput of pure ALOHA.

The throughput formula for slotted ALOHA was an observation by Larry Roberts in 1972. By then, the ARPANET was already up and running. Roberts was looking for something else to do. Motivated by the ALOHA System, he started the ARPANET Satellite System project, later renamed the Packet Satellite project. I was one of the first graduate students to work on the project. Others include Bob Metcalfe, who was then at Xerox PARC, and Dick Binder, who was working on the ALOHA System.

Slide 5
[Slide: "Congestion collapse—ALOHA channel (cont.)". The early ALOHA System was vastly under-utilized. Backoff algorithm for slotted ALOHA (Kleinrock-Lam 1973): retransmit a collided packet randomly into one of K future time slots. The Poisson process assumption implies K → ∞.]

Congestion collapse was not a possibility discussed by ALOHA System researchers. In their early presentations, the ALOHA System was always said to work well. This is because the ALOHA channel was vastly under-utilized. In today's terminology, it was over-provisioned. One of my opinions is that over-provisioning covers up problems.

Also, with the Poisson process assumption, Abramson abstracted away in his analysis the need for a backoff algorithm. So the first thing Len Kleinrock and I did for slotted ALOHA was to introduce a realistic backoff algorithm for collided packets. In our algorithm, each collided packet is retransmitted randomly into one of K future time slots, where K is a parameter.

[...]tously after about 3,000 time slots. I believe that this was the first experimental demonstration of congestion collapse and ALOHA instability.

Slide 7
[Slide: "Congestion collapse—ALOHA channel (cont.)". Adaptive backoff algorithm (Lam 1974): retransmit a packet with m previous collisions into one of K(m) slots, where K(m) is monotonically nondecreasing in m; for example, K(1) = 10 and K(m) = 150 for m ≥ 2.]
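The ALOHA throughput formulas quoted above (pure ALOHA, S = G e^(-2G), and slotted ALOHA, S = G e^(-G)) are easy to check numerically. A minimal sketch in Python; the function names are mine, not from the talk:

```python
import math

def pure_aloha_throughput(G):
    # Abramson's pure ALOHA formula: S = G * e^(-2G)
    return G * math.exp(-2.0 * G)

def slotted_aloha_throughput(G):
    # Roberts' slotted ALOHA observation: S = G * e^(-G)
    return G * math.exp(-G)

# Pure ALOHA peaks at G = 0.5 with S = 1/(2e); slotted at G = 1 with S = 1/e.
print(round(pure_aloha_throughput(0.5), 3))     # 0.184
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368

# Past the peak, throughput falls toward zero as offered load grows,
# which is exactly the collapse-shaped throughput-vs-load curve of Slide 3.
print(round(slotted_aloha_throughput(5.0), 3))  # 0.034
```

Setting the derivatives to zero confirms the peaks: for slotted ALOHA, dS/dG = e^(-G)(1 - G) vanishes at G = 1, giving the well-known capacity of 1/e ≈ 0.368 packets per slot.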
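To make the backoff algorithms concrete, here is a toy slotted-ALOHA simulation. This is a hedged sketch only: the single-buffer station model, the station count, and the arrival probability are my illustrative assumptions, not a reconstruction of the 1973-74 experiments; the schedule K(1) = 10, K(m) = 150 for m ≥ 2 is the example quoted on Slide 7.

```python
import random

def simulate_slotted_aloha(n_slots, arrival_prob, K, n_stations=50, seed=1):
    """Toy slotted-ALOHA simulation with a backoff schedule K:
    a packet that has suffered m collisions is retransmitted in one
    of the next K(m) slots, chosen uniformly at random."""
    rng = random.Random(seed)
    next_tx = {}   # station -> slot of its next transmission attempt
    m_coll = {}    # station -> collisions suffered by its current packet
    successes = 0
    for t in range(n_slots):
        # Idle stations generate a fresh packet with probability arrival_prob.
        for i in range(n_stations):
            if i not in next_tx and rng.random() < arrival_prob:
                next_tx[i] = t
                m_coll[i] = 0
        senders = [i for i, slot in next_tx.items() if slot == t]
        if len(senders) == 1:            # exactly one sender: success
            i = senders[0]
            del next_tx[i], m_coll[i]
            successes += 1
        elif len(senders) > 1:           # collision: everyone backs off
            for i in senders:
                m_coll[i] += 1
                next_tx[i] = t + 1 + rng.randrange(K(m_coll[i]))
    return successes / n_slots           # throughput in packets per slot

# Adaptive schedule from Slide 7: K(1) = 10, K(m) = 150 for m >= 2.
adaptive_K = lambda m: 10 if m == 1 else 150
print(simulate_slotted_aloha(20_000, 0.01, adaptive_K))
```

With a small fixed K the backlog feeds collisions and throughput drifts toward zero, while a nondecreasing K(m) spreads retransmissions of repeatedly collided packets over a longer horizon, which is the intuition behind the adaptive algorithm.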