The Great Telecom Meltdown
4 The Internet Boom and the Limits to Growth

Nothing says “meltdown” quite like “Internet.” For although the boom and crash cycle had many things feeding it, the Internet was at its heart. The Internet created demand for telecommunications; along the way, it helped create an expectation of demand that did not materialize. The Internet’s commercialization and rapid growth led to a supply of “dotcom” vendors; that led to an expectation of customers that did not materialize. But the Internet itself was not at fault. The Internet, after all, was not one thing at all; as its name implies, it was a concatenation [1] of networks, under separate ownership, connected by an understanding that each was more valuable because it was able to connect to the others. That value was not reduced by the mere fact that many people overestimated it.

The ARPAnet Was a Seminal Research Network

The origins of the Internet are usually traced to the ARPAnet, an experimental network created by the Advanced Research Projects Agency, a unit of the U.S. Department of Defense, in conjunction with academic and commercial contractors. The ARPAnet began as a small research project in the 1960s. It pioneered packet-switching technology, the sending of blocks of data between computers. The telephone network was well established and improving rapidly, though by today’s standards it was rather primitive: digital transmission and switching were yet to come. But the telephone network was not well suited to the bursty nature of data.

A number of individuals and companies played a crucial role in the ARPAnet’s early days [2]. It is unlikely, though, that any of them envisioned what would come of it 30 years later. The ARPAnet was designed to connect government facilities with their suppliers and academic researchers. It was not open to the public at large but grew from its original three nodes in 1969 to several hundred a decade later (see Figure 4.1).
Along the way, many changes were made to the technology. This was, after all, a research network, and the network itself was the heart of the research. In the early ARPAnet, the machines that interconnected computers were called IMPs, for Interface Message Processors. The original IMPs were built by Bolt Beranek and Newman, a Cambridge, Mass., firm that also operated the ARPAnet’s Network Control Center. The network’s core protocol was called NCP, for Network Control Protocol. The TCP/IP protocol suite was developed in the 1970s, and NCP support was turned off in the IMPs in 1983.

TCP/IP was important because it split what had been NCP’s single function into two separate layers with distinct roles. IP ran in all of the IMPs, relaying packets across the network on a connectionless basis. Each packet stood independent of all others, with no prior connection between the two ends required and with the full source and destination addresses carried in each packet. (By analogy, postcards and letters travel the same way, albeit more slowly.) TCP ran only at each end of the network, in the host computers. It resequenced received packets (IP did not guarantee delivery order), detected dropped packets, and used retransmission to correct for lost or errored packets. This allowed the intermediate systems to be relatively simple; many functions needed to be performed only at the end points.

The ARPAnet grew well into the 1980s but at one point was split into the Department of Defense’s own internal MILNET and the more research- and academic-oriented ARPAnet. These interconnected networks became, in effect, the Internet, a network of networks. The ARPAnet itself was soon phased out, as the National Science Foundation took over funding of a new NSFnet backbone, and other privately operated networks, such as CSNET, became part of the Internet.
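The division of labor just described can be sketched in a few lines of code. This is a toy simulation, not the actual protocols: the function names and the loss/reordering model are invented for illustration, but the separation of concerns is the one TCP/IP introduced, with a stateless relay in the middle and resequencing plus retransmission only at the edges.

```python
import random

random.seed(4)  # fixed seed so the illustration is deterministic

def ip_relay(packets, loss=0.2):
    """IP-like connectionless relay (toy model): every packet stands
    alone, carrying its own sequence number; the network may drop or
    reorder packets and keeps no per-connection state."""
    survivors = [p for p in packets if random.random() > loss]
    random.shuffle(survivors)  # arrival order is not guaranteed
    return survivors

def tcp_receive(payloads):
    """TCP-like endpoint recovery (toy model): note which sequence
    numbers are missing, have the sender retransmit only those, and
    resequence before handing data up to the application in order."""
    received = {}
    while len(received) < len(payloads):
        missing = [i for i in range(len(payloads)) if i not in received]
        for seq, data in ip_relay([(i, payloads[i]) for i in missing]):
            received[seq] = data
    return [received[i] for i in range(len(payloads))]

print(tcp_receive(["a", "b", "c", "d"]))  # delivered complete and in order
```

Note that `ip_relay` never inspects the payloads or remembers past packets; all the intelligence sits in `tcp_receive`. That is exactly why the IMPs could stay simple.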
Other Packet-Switching Technologies Had Their Adherents

Although TCP/IP has become the dominant form of data networking, it did not grow up in a vacuum; there were other packet-switching technologies that competed with it or were designed for different niche markets. One has to recall that in the 1970s and into the 1980s, most end users did not have computers on their desks; they had terminals. A good deal of computer networking was about connecting terminals to their hosts.

Figure 4.1 ARPAnet map from 1980 [3].

In the IBM-dominated world of mainframes, the terminals were typically intelligent devices, block-mode terminals like the 3270 family, that performed local screen editing under tight control of the host. In the alternative computer universe then dominated by minicomputer vendors such as DEC and Hewlett-Packard Inc. (HP), terminals were typically “dumb” character-by-character asynchronous devices. AT&T’s own Teletype Corp. had been a large supplier of terminals to minicomputer users in the 1960s and early 1970s, but its slow teleprinters (designed primarily for use as Telex machines) were being replaced by backward-compatible cathode-ray-tube terminals. In either case, the minicomputer terminal was more likely to be hard-wired to a small local machine.
Big mainframes, on the other hand, were more likely to be located in a “glass house” computer center some distance away, so terminal-to-host data communications, or “teleprocessing” technology, was important. Minicomputer users were more concerned with host-to-host interconnection.

One very important network technology was the packet-switching protocol suite based on CCITT Recommendation X.25. This was developed in the mid-1970s as a way to reconcile competing packet-switched networks developed in several countries. (Like many CCITT Recommendations, as these standards were formally known, X.25 left substantial wiggle room; each X.25 network had its own “profile,” and devices attached to it required significant customization.) The design center was connecting asynchronous terminals to remote hosts, as an alternative to using modems on long distance telephone calls. The X.25 protocols were optimized for the slow, noisy analog telephone lines of the day, when a dial-up connection typically went no faster than 300 bps, and a very costly leased-line modem could reach the blazing speed of 9,600 bps only if the line was specially conditioned. Thus X.25 [4] had both hop-by-hop and edge-to-edge error correction and flow control, which the PTTs thought necessary.

X.25 became ubiquitous across Europe by the early 1980s but never achieved more than niche status in the United States. New X.25 deployments largely halted by the mid-1980s. Many networks have since closed, but a few remained in place beyond the turn of the twenty-first century, at least marginally useful for niche applications such as credit card verification. The American market for remote terminal connectivity was dominated by two carriers, Telenet and Tymnet, although other companies, such as AT&T and some of the Bell companies, also participated.
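Why hop-by-hop error correction made sense on noisy analog lines can be shown with some rough arithmetic. The model below is illustrative only (independent frame errors, perfect acknowledgments, figures not from the book): with hop-by-hop recovery each link retries locally, while with edge-to-edge recovery an error anywhere forces a full-path retransmission.

```python
def expected_link_transmissions(p, hops, hop_by_hop=True):
    """Expected number of per-link transmissions to move one frame
    across `hops` noisy links, each with frame error rate `p`.
    Hop-by-hop: each link retries locally, so hops / (1 - p).
    Edge-to-edge: a full-path attempt succeeds with probability
    (1 - p)**hops and costs `hops` link transmissions, so
    hops / (1 - p)**hops.  Toy model for illustration."""
    if hop_by_hop:
        return hops / (1 - p)
    return hops / (1 - p) ** hops

# With a 10% frame error rate across five links, local recovery needs
# about 5.6 link transmissions per frame; end-to-end recovery needs
# about 8.5, and the gap widens rapidly as lines get noisier.
print(expected_link_transmissions(0.1, 5, hop_by_hop=True))
print(expected_link_transmissions(0.1, 5, hop_by_hop=False))
```

On today’s clean digital links the extra per-hop machinery is pure overhead, which is one reason later protocols such as frame relay and IP dropped it; on the lines of the 1970s, the PTTs’ caution was defensible.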
Telenet was originally started by Bolt Beranek and Newman as a commercial spin-off of its ARPAnet technology, though its own technology took a very different path as X.25 and the ARPAnet diverged. By the 1980s, it was owned by GTE Corp., and it eventually became part of Sprint. Tymnet was started as the remote-access arm of computer timesharing company Tymshare. It outlasted its parent company and eventually became part of British Telecom and later MCI. Both did provide optional X.25 interfaces; both faded into obscurity by the 1990s along with the terminal-to-host market.

IBM’s original block-mode terminals used a protocol known as bisync (for “binary synchronous”). This was a simple Ack/Nak technique: Blocks were transmitted, and the other side had to acknowledge each one before the next could be sent, or the block would be retransmitted. This limited throughput when transit delay was high, because only one block could be outstanding at a time for any given terminal. In the early 1970s, IBM saw satellites as the way of the future. It invested in Satellite Business Systems (SBS), hoping to bypass the Bell System monopoly with direct-to-the-business satellite service. The delays inherent in geosynchronous satellites were harmful to bisync. This gave IBM an impetus to develop a new protocol suite, called Systems Network Architecture (SNA). Although SNA eventually had some host-to-host connectivity, it was primarily designed for remote block-mode terminals. Large corporations built worldwide SNA networks, primarily to connect desktop terminals to large mainframes. SNA remained popular until terminals themselves were replaced by desktop computers in the 1990s.

Indeed, the IBM mainframe world had its own sort-of-version of the Internet. Bitnet (whose name allegedly meant “because it’s there”) interconnected numerous universities beginning in 1981.
Although not a packet-switched network, Bitnet used store-and-forward techniques said to be based on punch-card images. It was used for electronic mail and mailing lists. It grew to more than 500 organizations and 3,000 nodes (hosts) by 1989, but by 1994 most of its users had migrated to the TCP/IP-based Internet [5].
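The damage that satellite delay did to bisync’s one-block-at-a-time scheme is easy to quantify. The sketch below is a back-of-the-envelope model, with assumed figures (a 2,000-bit block, a 9,600-bps line, round-trip times of roughly 20 ms terrestrially and 540 ms via a geosynchronous satellite) that are illustrative rather than taken from the book.

```python
def stop_and_wait_throughput(block_bits, line_bps, rtt_s):
    """Effective throughput (bps) of an Ack/Nak protocol with one
    block outstanding: each block costs its serialization time plus a
    full round trip before the next block may be sent.  Errors and
    acknowledgment size are ignored for simplicity (toy model)."""
    cycle_s = block_bits / line_bps + rtt_s
    return block_bits / cycle_s

# Terrestrial leased line: ~20 ms round trip at 9,600 bps
land = stop_and_wait_throughput(2000, 9600, 0.020)
# Geosynchronous satellite hop: ~540 ms round trip, same line speed
sat = stop_and_wait_throughput(2000, 9600, 0.540)
print(land, sat)  # satellite throughput collapses to a fraction of line rate
```

With these assumptions the terrestrial link delivers most of the 9,600 bps, while the satellite link spends most of each cycle waiting for the acknowledgment and delivers well under a third of it. Protocols such as SNA’s SDLC addressed this by allowing several frames to be outstanding before an acknowledgment was required.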