Arrested Development: How Policy Failure Impairs Internet Progress

RICHARD BENNETT DECEMBER 2015


AMERICAN ENTERPRISE INSTITUTE

© 2015 by the American Enterprise Institute for Public Policy Research. All rights reserved.

The American Enterprise Institute for Public Policy Research (AEI) is a nonpartisan, nonprofit, 501(c)(3) educational organization and does not take institutional positions on any issues. The views expressed here are those of the author(s).

Contents

Executive Summary
The Science of Network Innovation
Moore’s Law
Network Innovation
Device Innovation
Application Innovation
The Technology of Network Convergence
Technical Constraints on Network Innovation
Structural Issues
Overcoming Interconnection Bias
Signs of Progress
Freeing the Untapped Potential
Case Studies
Network Innovation Policy
How We Got Here
Excusing Policy Inequity
Arguments for Differential Regulation
An Example of Differential Regulation
The Firewall Model of Internet Regulation
The Integrated Model of Internet Policy
Conclusions
Notes
About the Author

Executive Summary

The Internet and related networking technologies have fueled unprecedented, disruptive change across the entire global economy. These technologies have enabled entrepreneurs to reinvent the creation and sale of news, entertainment, professional services, shopping, and a host of other cultural and economic activities.

While the Internet has ushered in remarkable changes, it has so far left many activities relatively untouched. Experiments are underway in telemedicine, remote learning, and remote group and individual conferencing, but progress in these fields has been slow despite jaw-dropping increases in network speed and device power brought about by Moore’s Law–driven technology advances. The Internet has yet to upend interpersonal communication in the same way that it has disrupted content distribution.

The Internet technical community has engaged in developing the means for communications applications to connect to richer network services for a very long time; such means were actually incorporated (in a basic way) in the Internet’s original design. In fact, Internet teleconferencing standards play a vital role in today’s LTE mobile networks. Network engineers actually realized advanced networking capabilities were important application enablers long before application developers did.

The technical work that enabled the Internet to support applications traditionally enabled by the telephone, cable television, and mobile communications networks was reasonably mature by the time the Telecommunications Act was enacted in 1996 and has improved since then. The migration of these discrete networks to a common Internet is known as “Internet convergence,” and its essential elements are Internet standards known as Integrated Services and Differentiated Services, as well as Quality of Service mechanisms in the mobile and fixed broadband networks that underpin the Internet.

While the technical elements of convergence have been well developed for nearly 20 years, policy, law, and regulation have failed to keep pace with technology. Unwinding the regulatory apparatus established for the traditional networks—especially the public switched telephone network—has proved to be a more substantial challenge than developing the technology.

The refusal of regulators to embrace the opportunities provided by Internet convergence is a peculiar development. The low-water mark of the regulatory obstruction of convergence is the Federal Communications Commission’s (FCC) 2015 Open Internet order, a remarkable departure from the regulatory consensus that prevailed in the mid-1990s. The collected papers from the 1995 and 1996 Telecommunications Research Policy Conference clearly show regulators, scholars, and policy analysts of all stripes embracing a consensus that the Internet must be a deregulated space in which competition rather than regulation would provide market discipline.

While the emphasis on competition remains a strong feature of intellectual discourse on Internet policy, other voices dominate the wider political and social debate on Internet policy. Progress toward a converged Internet cannot continue until regulators balance the positives that can come from convergence against the worst-case scenarios touted by advocates who seem to prey on public ignorance, fear, and animosity.

The Internet has reached an impasse because of inappropriate regulation. Restoring the Internet’s dynamic character will require innovation on the part of regulators that parallels the innovation produced by the Internet engineering community in the wake of the 1996 Telecommunications Act.

The paper consists of three main sections. The first examines technical drivers of innovation, primarily those related to Moore’s Law.


The second section examines convergence technologies in network engineering through an overview and five case studies. Finally, the third section examines the arc of policy reactions from the innovation-friendly, pro-competition consensus of the 1990s to the interventionist and traditionalist spirit evident in the FCC’s recent order. It concludes with suggestions for restoring a more optimistic spirit to Internet policy.

Some portions of the paper delve more deeply into the inner workings of Internet technology than may be customary in policy discourse. The nature of the subject matter makes technology discussion unavoidable, but those whose interests lie exclusively in politics or law may safely treat the paper’s technology exposition as evidentiary rather than explanatory.

The Science of Network Innovation

Innovation in networked applications and services is most visible at the point of use: when we use Facebook, Google, YouTube, Instagram, Amazon, Skype, Pandora, Twitter, or iTunes, we are immediately aware of the cleverness of the entrepreneurs who created these novel applications. But entrepreneurs do not work in a vacuum, and innovation is not magic. Individual feats of creativity are enabled by the human character and a host of cultural, legal, economic, and technical factors. Technical factors are among the least understood by the general public, but they are not difficult to grasp at a high level. The bedrock of innovation in information technology (IT) is Moore’s Law, an observation made by Intel Cofounder Gordon Moore 50 years ago. Technology advances in networks, devices, and applications may be rightly viewed as side effects of Moore’s Law.

Moore’s Law

Moore’s Law is simply a prediction about the rate of improvement in integrated circuit electronics. Improvement is the essence of innovation, but it does not happen at the same rate in all fields. For example:

• Average yields of corn have increased by 2 percent a year since 1950.

• The generation of electricity from steam improved by 1.5 percent annually in the 20th century.

• Outdoor lighting efficiency has improved by 3.1 percent annually over the past 135 years.

• The speed of intercontinental travel improved by 5.6 percent per year from the 1900 ocean liner to the 1958 Boeing 707 but has been flat since.

• Between 1973 and 2014, passenger-car fuel efficiency has improved by 2.5 percent annually.

• The energy cost of steel declined by 1.7 percent per year between 1950 and 2010.1

In most fields, performance gains and cost reductions range from 1.5 to 3 percent a year, but in electronics these figures have improved by a whopping 50 percent a year for the past 50 years.2 Something special has been going on in the electronics industry, and that phenomenon is known as Moore’s Law.

This law—which is more a conjecture or a prediction based on past experience than an actual scientific theory or law—works in different ways. As Chris Mack explains, in its first generation (the 1960s and 1970s), progress in integrated circuit electronics consisted of adding more components to chips, or “scaling up.” This produced high-capacity, dynamic memory chips and high-powered microprocessors.

More recently, Moore’s Law 2.0 has been about “scaling down,” or decreasing the size and cost of electronic components and improving their power efficiency. As present materials have little room for improvement in scaling down past the next decade, we may be entering a Moore’s Law 3.0 phase in which analog components, such as sensors and cameras, join their digital relatives in a new generation of integrated circuits.3
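A quick compound-growth calculation, offered here purely as an illustration, shows how large the gap between these annual improvement rates becomes over a 50-year span.

```python
# Illustration: cumulative effect of annual improvement rates over 50 years.
# Rates are taken from the examples above; the comparison itself is editorial.

def cumulative_gain(annual_rate: float, years: int) -> float:
    """Total improvement factor after compounding an annual rate."""
    return (1.0 + annual_rate) ** years

for label, rate in [("corn yields (2%)", 0.02),
                    ("outdoor lighting (3.1%)", 0.031),
                    ("electronics (50%)", 0.50)]:
    print(f"{label:25s} -> {cumulative_gain(rate, 50):,.0f}x over 50 years")

# Roughly 2.7x for corn and 4.6x for lighting, versus more than 600 million x
# for electronics: a difference in kind, not merely in degree.
```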


Moore’s Law operates within the sphere of integrated circuits, which leapt from concept to commercial reality on the strength of a pair of inventions in 1958: Jack Kilby’s “flying wire” integrated circuit and Robert Noyce’s tidy interconnection technique. By 1960, the integrated circuit was a commercial reality in the form of the Texas Instruments Type 502 Solid Circuit, a bistable multivibrator (figure 1).

Figure 1. 1960 Integrated Circuit, Texas Instruments Type 502 Solid Circuit
Source: Texas Instruments.

Integrated circuits are manufactured by printing wires and electronic components such as transistors, diodes, resistors, and capacitors on a silicon wafer treated with chemicals to isolate the components and to connect them where needed. They become more efficient as the purity of the silicon wafer improves, as photolithography becomes more precise, and as the size and scale of the components and their interconnections improves.

Generations of integrated circuits in the Moore’s Law 1.0 phase were identified by the capacity of dynamic memories, such as 64 bits, 64 kilobits, or 64 megabits. In the 2.0 phase, they are distinguished by the size of the logic gates or features printed on the wafer; in the current generation, this is 22 nanometers (nm). By contrast, a human hair is 100,000 nm in diameter, a strand of DNA is 2.5 nm, and the previous integrated-circuit feature size was 32 nm. If the electron itself has a size—a debated question in physics—it is at most one-millionth of 22 nm, but controlling electronics in a solid depends on molecules no more than 1,000 times smaller than today’s feature size. Hence, Moore’s Law as we understand it today has limits that will probably arrive within 10 to 20 years, barring new discoveries in materials science. Until then, and perhaps even after, integrated circuits will continue to grow more powerful, more energy efficient, and more economical according to design requirements. IBM announced one such advance in early October 2015: a novel use of carbon nanotubes.4

Moore’s Law as we have known it will stop definitively only when a single molecule can form an electronic device. The first single-molecule diode capable of rectifying a nontrivial load has already been created in a lab (figure 2), but we are far from exploiting any of its capabilities, let alone all of them.5

Figure 2. Single-Molecule Diode
Note: This is an illustration of the molecule used by Columbia Engineering professor Latha Venkataraman to create the first single-molecule diode with a nontrivial rectification ratio overlaid on the raw current versus voltage data. Diodes are fundamental building blocks of integrated circuits; they allow current to flow in only one direction.
Source: Latha Venkataraman, Columbia School of Engineering.


The most recent advance in semiconductor process paves the way to 7 nm gates, half the size of today’s leading-edge 14 nm chips. The 7 nm process relies on an advanced material—silicon germanium—capable of transporting more electricity through tiny gates than pure silicon can, and on a new method of photolithography, the extreme ultraviolet laser (figure 3).6 Each plays a vital role, and each can theoretically enable further advances. Of course, 7 nm chips are far from production, and 10 nm—the intermediate stage between 14 nm and 7 nm—has encountered production problems. But Taiwan Semiconductor Manufacturing Company has said it plans to produce the new 7 nm chips by 2017.7

Figure 3. IBM’s 7 nm Transistors
Source: IBM Research, www.flickr.com//photos/ibm_research_zurich/sets/72157655342048792/show/.

On the networking side, a new approach to signal processing in optical fiber promises a two to four times increase in range, which has been an intractable problem for years, and the next generation of mobile broadband, 5G, promises to increase data rates from 1 gigabit per second (Gbps) to 10 Gbps or more within five years.8 Sometimes networking technology advances more slowly than Moore’s Law, and sometimes it advances more quickly, but it always advances.

Slowing down the rate of improvement in networking slows down the rate of innovation in technologies that depend on networks: network applications. Hence, improper regulation of networks harms innovation overall.

Integrated circuits are the foundational building blocks of networks, computers, and all other forms of commercial electronics. As circuits improve in speed and power, systems improve as well, regardless of their function.

The dynamics of ingenuity, risk, and innovation are largely the same across all electronic-device markets, but they manifest differently in networks and applications for reasons that will be explained shortly. The technology base is built the same way in both spheres:

• Moore’s Law improvements in semiconductor processes enable engineers to design more efficient and powerful computation and networking platforms;

• More productive platforms give rise to more complex, better-targeted service platforms;

• More effective service platforms give rise to more useful applications; and

• Users benefit.


Network innovation requires a higher degree of coordination and cooperation than does application development. This is true because the value of networks is utterly dependent on broad adoption. While every application benefits from broad adoption, the value curve for adoption of many applications tends to be linear, while the corresponding curve for networks is more commonly exponential: value can be derived from a single use of an application, but network value depends on large numbers of users.9 Consequently, visible network innovation is less frequent than application innovation but dramatically more meaningful.

For a more complete examination of Moore’s Law, see Moore’s Law at 50: The Performance and Prospects of the Exponential Economy by AEI Visiting Fellow Brett Swanson.10

Network Innovation

Mobile broadband networks tend to be redesigned approximately every 10 years: the analog voice systems of the 1980s were replaced by second-generation 2G digital systems in the 1990s; 3G came online in the 2000s; 4G/LTE has been deployed since 2010; and 5G will begin its rollout by 2018, if not sooner. Each wireless generation increases data rates by about 10 times, thus enabling a new collection of more powerful applications.

Similar dynamics are afoot with wired networks. In the late 1990s, first-generation broadband networks offered speeds from 350 kilobits per second to a few megabits; within 10 years, speeds increased 20 times. By the early 2010s, vectored very high-speed digital subscriber line (VDSL) was pushing 80 megabits per second (Mbps), and cable modem was up to hundreds of Mbps in many areas. The next round of DSL innovation, G.fast, will push data rates to several hundred Mbps over short distances, and DOCSIS 3.1 can push cable modem speeds to multiple gigabits per second (Gbps). Fiber-optic broadband progresses along a different line, with speeds doubling twice as fast as Moore’s Law.

Device Innovation

The systems that make use of networks progress along a similar path. The first computers that were entirely based on the microprocessor were the personal computers of 1975: the Altair 8800 and IMSAI 8080. These computers used the Intel 8080 microprocessor with 4,500 transistors and a clock speed of 2 megahertz (MHz). Today’s 15-core Intel Xeon processors have as many as 4.31 billion transistors and clock speeds as high as 2.8 gigahertz (GHz), a thousand times faster than the 8080.

The performance rating of the 8080 was 0.29 million instructions per second (MIPS), while the 15-core Xeon is rated at more than 300,000 MIPS. Overall, this is an improvement of a million times in 40 years, just as Moore’s Law predicted. It should be noted that Xeon processors are more likely to be used in data-center servers than in ordinary desktop computers, because such intense processing power is not needed for common tasks such as word processing and web surfing.

Video streams are commonly formatted, compressed, and streamed by Xeon-based systems. Streaming accounts for a large proportion of the traffic on modern broadband networks; compression allows streaming applications to use network capacity more efficiently.

Within network devices, the Moore’s Law dynamics that produced the Xeon processor also provide bandwidth, routing, and network management and, under ideal conditions, would also fully ensure that video streams do not produce harmful side effects on other applications or among each other; this point will be developed further in the sections to follow.

Application Innovation

Compared to innovation in networks and devices, application innovation is easy.


As noted above, network innovations such as packet switching, fiber optics, local area networks (LAN), mobile broadband, and residential broadband cannot take off until they have been accepted by the broad group of stakeholders who participate in network standards bodies, are designed into infrastructure components such as network switches, and are deployed in real networks. Each phase or generation of network evolution requires this high degree of collaboration, so it does not happen overnight.

Similarly, when Intel or ARM designs a new microprocessor, it does not have economic impact until it can be manufactured, the company scores design wins, clients build it into devices, and consumers buy and use the new devices. A key obstacle in this process is the mammoth investment in semiconductor factories, or “fabs,” which are required to reach new semiconductor generations. A 2012 Gartner Group report predicted 2016 minimum capital expenditures of $8–10 billion for logic fabs, $3.5–4.5 billion for dynamic memory (DRAM) fabs, and $6–7 billion for persistent memory (NAND flash) fabs.11 This is not the sort of development that takes place on a whim.

By comparison, developing a new application for a smartphone or laptop computer is almost trivially easy, even for network applications. Pierre Omidyar coded the initial version of eBay all by himself in the summer of 1995 (by some accounts, over the Labor Day weekend).12 Bill Gates and Paul Allen wrote Microsoft’s initial program in their free time on a university computer.13 Mark Zuckerberg effectively wrote the first version of Facebook on a laptop computer in his free time, with a little help from some friends.14

In each case, these landmark applications required negligible capital investment, minimal planning, and essentially no coordination outside the inventors’ circle of friends. The processes of network and device innovation on the one side and application innovation on the other are so different that parties on each side have little to say about the dynamics that characterize the other.

The Technology of Network Convergence

A well-functioning network tends to be invisible to the user. When our networks connect us to the resources of our choice and carry traffic without incident, we take them for granted; nobody ever called her Internet service provider (ISP) to congratulate the company for its excellent service.

But we tend to blame network operators for every failure we experience. If Netflix, Facebook, or Google has a service interruption for any reason, the consumer’s first instinct is often to blame the ISP. Even sophisticated users fall into this trap: PC World Contributing Editor Rick Broida recently described a problem he experienced that involved Wi-Fi drivers, Google Chrome, and a Samsung smartphone, admitting he usually blames his ISP.15

Application developers commit a similar error, sometimes characterizing broadband networks as simply “dumb, fat pipes” incapable of providing bespoke services tailored to application needs.16 Dumb, fat pipes are indeed fine for today’s most popular applications—websites and video-streaming services—but they leave much to be desired for emerging applications such as virtual reality, telemedicine, and high-definition voice.17

Technical Constraints on Network Innovation

Much of the confusion about network innovation stems from the present state of the Internet. From an engineering perspective, networks are organized in layers, corresponding to the scope of data-transfer interactions. The layered architecture of the Internet is depicted below (table 1).

The bottom layers deal with interactions that take place in a small area, such as the representation of information by electromagnetic signals and the communication of information packets across LANs such as Ethernet and Wi-Fi.18 The higher layers deal with interactions across global networks, encompassing the flow and pacing of web pages between servers and browsers or the delivery of motion pictures from video servers to screens. Internet architecture thereby tolerates diversity in both transmission technologies and applications.

Table 1. Layered Architecture and Standards

Layer | Function | Standards Organization
7. Application | Program-to-Program | W3C, Broadband Forum
6. Presentation | Data Formats | W3C/IETF
5. Session | User to Internetwork | IETF/3GPP
4. Transport | End-to-End Interactions | IETF
3. Network | Routing and Addressing | IETF
2. Data Link | Single Network Behavior | IEEE, 3GPP, Cable Labs, Broadband Forum
1. Physical | Coding and Signal Processing | IEEE, 3GPP, Cable Labs, Broadband Forum

Source: Author.


Structural Issues

However, the Internet enables diversity by extracting a horrible toll. While the upper and lower layers of the Internet are diverse, the middle layer is one-dimensional, consisting in many cases solely of an incomplete implementation of the Internet protocol (IP). This uniformity is illustrated by Internet engineer Steve Deering’s famous hourglass diagram (figure 4).

Figure 4. Steve Deering’s Hourglass
Source: Steve Deering, “Watching the Waist of the Protocol Hourglass,” IETF 51 Plenary, August 2001, www.ietf.org/proceedings/51/slides/plenary-1/sld001.htm.

The narrow waist of the hourglass represents the tradeoff between diversity and performance embodied in the IP layer. Networks are capable of supporting a broader range of applications than current IP interconnection norms will allow them to support. These norms reduce IP to a single class of service—best efforts—which is harmful to real-time communications such as Voice over IP. In other words, the narrow waist is harmful to innovation, because diverse applications have diverse requirements from networks. Networks are generally capable of meeting these needs, but IP interconnection norms prevent them from communicating their needs to the underlying network.

The narrow waist can be a barrier to innovation because it forces all applications to operate within the constraints of traditional implementations of the IP layer. But all applications are not alike when it comes to their use of network resources. Consider just four examples:

• Web browsing is an episodic activity. Web browsers consume all available network capacity while loading pages but impose virtually no load while the user reads a fresh page. Video streaming is similar to web browsing in terms of the on/off duty cycle, although it cycles more quickly and moves more data while active.

• Audio conferencing applications such as Skype send information at very regular intervals and cannot tolerate delays of more than one-tenth of a second without suffering noticeable service degradation.

• Video conferencing applications such as Cisco TelePresence are similar to audio conferencing but with greater sensitivity to packet loss and much greater bandwidth requirement.

• Background activities, such as software updates and file system backups, typically take place at night and are not at all sensitive to performance. Because they frequently transmit very large amounts of data, they do tend to be price sensitive.

Forcing these diverse applications to share a common service class imposes a bias in favor of one type of application and against the others. The favored class in today’s Internet is episodic, noninteractive applications like web browsing and video streaming.
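The contrast among these four profiles can be summarized in a small data structure. The sketch below is illustrative only; the delay and loss figures are rough editorial assumptions keyed to the descriptions above, not values taken from any standard.

```python
# Editorial sketch: rough service requirements for the four application
# classes described above. Values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ServiceNeeds:
    pattern: str           # how traffic arrives on the wire
    max_delay_ms: float    # latency the application can tolerate
    loss_tolerant: bool    # can it recover gracefully from lost packets?
    price_sensitive: bool  # will it trade performance for lower cost?

PROFILES = {
    "web browsing":       ServiceNeeds("episodic bursts",         2000, True,  False),
    "audio conferencing": ServiceNeeds("steady, small packets",     100, False, False),
    "video conferencing": ServiceNeeds("steady, large packets",     100, False, False),
    "background backup":  ServiceNeeds("bulk transfer, off-peak", 60000, True,  True),
}

def needs_better_than_best_effort(app: str) -> bool:
    """A single best-effort class suits delay- and loss-tolerant traffic only."""
    p = PROFILES[app]
    return p.max_delay_ms <= 100 or not p.loss_tolerant

for app in PROFILES:
    print(f"{app:20s} better-than-best-effort? {needs_better_than_best_effort(app)}")
```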


Overcoming this bias is key to accelerating progress in other types of applications that require different, more sophisticated network functionality.

Overcoming Interconnection Bias

Networks are generally engineered with the goal of providing each application class with the particular type of service it requires. For example, the Wi-Fi Quality of Service standard, IEEE 802.11e, defines four service classes: Best effort (ordinary service), Voice, Video, and Background (table 2).

Table 2. Wi-Fi Service Classes

ACI | AC | Description
00 | AC_BE | Best Effort
01 | AC_BK | Background
10 | AC_VI | Video
11 | AC_VO | Voice

Source: IEEE Standards Association, “IEEE Standard 802.11e-2005,” 2005.

The versions of IP in wide use today were designed to allow applications to select a service class from the networks in use. The mechanisms to do this are the “Type of Service” identifiers in the IP header as originally designed, as well as subsequent elaborations such as the Differentiated Services (DiffServ) and Integrated Services (IntServ) protocols.19

However, each of the Internet standards for connecting application requirements to network services is flawed. The primary flaw is that Quality of Service (QoS) protocols presuppose a trust relationship between the application that specifies a particular QoS level and the network that carries it out, but in today’s Internet, the necessary trust relationships only exist within the boundaries of particular networks; broadband carriers such as AT&T and CenturyLink use DiffServ within their networks to ensure that voice and video streams are received with the desired quality. But DiffServ is not generally operational across network boundaries, because Open Internet norms and regulations invite the abuse of such mechanisms.

Moreover, this problem is not easily solved. For example, if DiffServ markings were respected at network boundaries without careful restrictions, it would be possible for digital pirates to mark peer-to-peer file-sharing transactions with urgent priority to make them run faster, reducing the ability of service providers to detect transactions involving piracy and creating delays for other applications. Even more importantly, the Internet undergoes hundreds of Denial of Service (DoS) attacks (where a target web site is flooded with data) every day; if attackers were able to access higher-priority transmission classes, they could effectively shield their attacks from correction.20

DiffServ is not unique in this respect, as many classical Internet protocols have trust issues that are subjects of ongoing efforts to improve Internet security, including the Internet’s routing protocol, Border Gateway Protocol (BGP), as well as the name and address resolution protocol, Domain Name System (DNS).21

Signs of Progress

As a result of these shortcomings, there are ongoing efforts in the engineering community to develop improved methods for defining, identifying, and monetizing bespoke network services. These activities largely take place in network standards organizations. Three such endeavors show particular promise:

1. Pre-Congestion Notification, a means of bidding for priority on congested links;

2. BGP Extended Community for QoS Marking, a means of passing application requirements to routers; and

3. DiffServ Interconnection Classes and Practice, a means of passing application requirements to Multiprotocol Label Switching (MPLS) Traffic Engineering.22
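For readers who want to see what "marking" means in practice, the sketch below sets the DiffServ code point on a UDP socket using the long-standing IP_TOS socket option available on most Unix-like systems. The destination address is a placeholder, and nothing in the sketch guarantees that any network beyond the sender's own will honor the mark; that is precisely the trust gap described above.

```python
# Sketch: marking outbound packets with a DiffServ code point (DSCP).
# EF (Expedited Forwarding) is DSCP 46, carried in the upper six bits of
# the former ToS byte. Whether routers beyond the local network honor the
# mark is a policy question, not a protocol one.
import socket

EF_DSCP = 46
TOS_BYTE = EF_DSCP << 2   # DSCP occupies bits 7..2 of the ToS/traffic-class byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Placeholder destination: a hypothetical VoIP gateway on a documentation address.
sock.sendto(b"rtp-payload-goes-here", ("192.0.2.10", 5004))
```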


None of these three efforts is an official Internet standard, but each is the subject of ongoing work. While this work continues, IntServ has been adopted by 3GPP as a key element of LTE, where it serves a vital role in ensuring voice quality on mobile broadband networks.23

Freeing the Untapped Potential

To recap, the Internet is a work in progress and probably always will be, right up to the day it is decommissioned. One of its many shortcomings is a poor ability to connect applications—especially real-time applications such as conferencing—with network services in an optimal way.

Perhaps because this capability is underdeveloped, many advocates mistakenly believe that what they see as the status quo—in which virtually all transmissions on the public Internet are made with the same default service level—is an ideal state. In fact, it is neither ideal nor, as the advocates are wont to suggest, intended by the Internet’s original designers. The contrary, allowing applications to select differing service classes based on their diverse needs, was part of the Internet’s original design for a good reason.

In an ideal world, engineers would be free to continue the work needed to allow future applications to gain maximum benefit from future networks. This might seem like a simple thing to ask, but regulators have expressed fears that opening the door to innovative combinations of applications and network services invites abuse. For example, the FCC has argued that:

Although there are arguments that some forms of paid prioritization could be beneficial, the practical difficulty is this: the threat of harm is overwhelming, case-by-case enforcement can be cumbersome for individual consumers or edge providers, and there is no practical means to measure the extent to which edge innovation and investment would be chilled. And, given the dangers, there is no room for a blanket exception for instances where consumer permission is buried in a service plan—the threats of consumer deception and confusion are simply too great.24

In fact, a blanket exception from the FCC’s ban on paid prioritization (or de-prioritization in return for lower pricing or greater data-volume allowances) could easily be granted for services that serve the needs of real-time applications. Such an exception would incentivize the engineering work that needs to be done to place a greater range of services at the disposal of application and service developers and to encourage network operators to make the necessary investments and agreements to operationalize such capabilities.

As I show in the next section, networks are able to deliver application data in a much more powerful, effective, and efficient way than they have in the past. The capabilities of networks are constantly expanding, and the technology limits that made default treatment attractive in the past are receding.

Case Studies

This section provides examples of mechanisms designed into converged networks that permit them to carry diverse applications in optimal ways. Of necessity, this section contains technical information that may not be of interest to all readers. A historical narrative, diagrams, and illustrations are included to make the section accessible to serious readers who lack technical knowledge.

The main takeaways from these case studies are that network differentiation has been regarded as essential to networking standards since the 1970s and that standards bodies agree substantially regarding its implementation. (This is by no means an exhaustive treatment of network-differentiation facilities; for a more complete treatment, I recommend “Differentiated Treatment of Internet Traffic” by the Broadband Internet Technical Advisory Group.25)

Ethernet. Ethernet began as an experiment performed at the Xerox Corporation’s Palo Alto Research Center (PARC) by a pair of young engineers, Bob Metcalfe and David Boggs. Metcalfe applied the name “Ether Network” to an enhancement to a network under development at PARC to support Xerox’s Alto workstations.26


Ethernet underwent two iterations at PARC, the first a 1 Mbps, shared coaxial cable network and the second a 2.94 Mbps network with an added feature, a working collision detector.27 It was inspired by ALOHANET, a wireless network invented by Norm Abramson in the late 1960s at the University of Hawaii:

ALOHANET consisted of a number of remote terminal sites all connected by radio channels to a host computer at the University of Hawaii. It was a centralized, star topology with no channels having multiple hops. All users transmitted on one frequency and received on another frequency that precluded users from ever communicating with each other—users expected to receive transmissions on a different frequency than the one other users transmitted on. At its peak, ALOHANET supported forty users at several locations on the islands of Oahu and Maui.28

ALOHANET was funded by ARPA Information Processing Techniques Office Director Bob Taylor, who was the assistant director of PARC when the Alto Aloha network project was initiated.29 Taylor, one of two program directors for ARPANET, hired Metcalfe at PARC.

Metcalfe realized that for Ethernet to be successful, it needed support from other firms lest it remain a proprietary system, so he recruited Digital Equipment Corporation and Intel to develop an open standard incorporating broader expertise. Digital had already produced a very powerful, wide-area network known as DECNET, and Intel’s expertise in chip development was clearly established.

The multivendor standard, memorialized in the 1980 Ethernet “Blue Book,” called for a 10 Mbps network considerably more advanced than ALOHANET.30 With one major change to the frame format, Blue Book Ethernet became IEEE 802.3 standard 10BASE5, one of the first LAN standards created by the IEEE Standards Association in 1983. By comparison with the PARC Ethernet, Blue Book Ethernet was faster, used a larger and more resilient cable, and was capable of transmitting longer messages (or “frames”).

10BASE5 and all prior versions of Ethernet lacked an upgrade path, as they were passive systems wedded to a shared cable. In fact, each of the three revisions from the Alto Aloha network to 10BASE5 used a different type of cable, so upgrading meant replacing the entire cable plant and all the associated electronics, such as transceivers and interfaces.

With Moore’s Law driving the advances from 1 Mbps to 2.94 Mbps to 10 Mbps, it was reasonably clear by 1984 that Ethernet needed to be redesigned.31 Hence, the IEEE 802.3 Working Group chartered a task force in 1984 to devise a “low-cost LAN” to deal with the upgrade problem and the high cost of Ethernet installation. This task force ultimately produced the 1BASE5 standard known as “StarLAN” that enabled Ethernet to run over telephone wire in office settings.32

Following the topology of telephone wiring, StarLAN cables terminated at an electronic hub in an office telephone service closet. The network hub was upgradeable to support higher speeds and alternate forms of cabling, such as category 5 unshielded twisted pair (with higher noise immunity than telephone wire) and fiber optics. Switches devised for 100 Mbps and higher speeds are backward compatible with speeds as low as 10 Mbps. Current Ethernet standards scale up to 400 Gbps. The Ethernet standards that followed StarLAN include:

• 10BASE-T: 10 Mbps over category 3 copper cable;
• 10BASE-F, -FP, -FB, and -FL: 10 Mbps over various types of fiber optic cable;
• 100BASE-T and -TX: 100 Mbps over category 5 copper cable;
• 100BASE-FX: 100 Mbps over fiber optic cable;
• 1000BASE-LX: 1,000 Mbps over single-mode and multimode fiber-optic cable;
• 1000BASE-CX and -T: 1,000 Mbps over category 5 copper cable;
• 10GBASE-W and -EPON: 10 Gbps wide-area networks over fiber optics;
• 10GBASE-S, -L, and -E: 10 Gbps local area networks over fiber optics;
• 10GBASE-T and -X: standards for 10 Gbps over twisted pair copper;
• 40GBASE-R: 40 Gbps over fiber optics;
• 100GBASE-R: 100 Gbps over fiber optics; and
• 400GBASE-SRx: 400 Gbps over fiber optics.


Table 3. 802.1p Priority Levels

Priority Code Point | Priority Level | Identifier | Usage
1 | 0 (lowest) | BK | Background
0 | 1 | BE | Best Effort (Default)
2 | 2 | EE | Excellent Effort
3 | 3 | CA | Critical Applications
4 | 4 | VI | Video, Less Than 100 ms Latency
5 | 5 | VO | Voice, Less Than 10 ms Latency
6 | 6 | IC | Internetwork Control
7 | 7 (highest) | NC | Network Control

Source: IEEE Standards Association, “IEEE Standard 802.1D-2004,” 2004.
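The priority code points in table 3 are carried in the 3-bit PCP field of the IEEE 802.1Q VLAN tag. The sketch below, an illustration of the bit layout rather than an excerpt from any standard, packs a tag for a chosen priority and VLAN.

```python
# Illustration: packing an IEEE 802.1Q tag with a priority code point (PCP).
# The 16-bit Tag Control Information field is PCP (3 bits) | DEI (1 bit) |
# VLAN ID (12 bits), preceded on the wire by the 0x8100 TPID.
import struct

TPID = 0x8100

def vlan_tag(pcp: int, vlan_id: int, dei: int = 0) -> bytes:
    """Return the 4-byte 802.1Q tag for the given priority and VLAN."""
    if not (0 <= pcp <= 7 and 0 <= vlan_id <= 0xFFF and dei in (0, 1)):
        raise ValueError("field out of range")
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

# Example: tag voice traffic (PCP 5, per table 3) on VLAN 100.
print(vlan_tag(pcp=5, vlan_id=100).hex())   # -> '8100a064'
```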

The next generation of Ethernet will allow 1 terabit per second over fiber optics.

Along the way to higher speeds, Ethernet standards also developed the ability to prioritize selected packets. While the various generations of Ethernet offer higher speeds, they do this through enhancements at the physical layer, layer one in the standards hierarchy. As layer two, the data-link layer, is undisturbed by these upgrades, it is capable of separate development. The IEEE 802.1p task group added QoS enhancements to the Ethernet data-link layer’s “Virtual LAN” feature in the late 1990s. These enhancements consisted of eight service classes (table 3).33

These service classes are operationalized by the Ethernet Media Access Control (MAC) sublayer of the data-link layer and also by the MAC sublayers of other IEEE 802 standards such as Wi-Fi. Increased capacity does not guarantee low latency, and low latency is an absolute requirement for some applications. The Ethernet QoS enhancements were designed to be compatible with the Internet Engineering Task Force (IETF) DiffServ standards developed at the same time.34

DOCSIS: Internet Service for the Cable TV Network. Cable companies formed a research and development consortium known as Cable Labs in 1988 to develop standards for new telecommunications capabilities for their networks. In 1994, Cable Labs issued a $2 billion Request for Proposals (RFP) for telephone service equipment for cable networks, which ultimately led to cable’s standing as a major provider of residential phone service. Less than a year later, it issued an RFP for devices and operational support for high-speed data services, which ultimately led to the creation of the Data over Cable Service Interface Specification (DOCSIS), better known as “cable modem.”

By 1996, five modem companies were bidding for the cable modem business.35 In March 1997, Cable Labs announced the standards for the first version of DOCSIS, and cable companies were free to replace proprietary cable modems with standards-compliant upgrades.36 The transformation of the cable TV network from a one-way system only capable of broadcasting analog television programs to a two-way digital system capable of providing Internet service took place at lightning speed, considering the scope of the enterprise.

From the outset, cable modem was capable of providing QoS despite the preliminary state of Internet work in the subject. This happened because the first bidirectional application, telephone service, required it. Also, contributors to the DOCSIS standard were aware of, and in some cases involved in, the development of IETF’s first real QoS standard, Integrated Services, which began in 1994.37

DOCSIS 1.0 was an isochronous MAC protocol running over a 36 Mbps physical signaling sublayer.38


The system was shared among dozens or hundreds of users, but its mechanisms allowed each application used by each subscriber to obtain the type of service it required despite what others were doing.

Isochronous MAC Protocols. While the original Ethernet was an asynchronous system—one designed around datagrams that appeared at random intervals—modern MAC protocols for Radio Frequency (RF) environments tend to be isochronous systems, managed to support network streams that produce traffic at regular intervals and random streams.39 Cable TV was originally a shared antenna for TV reception, and it retains RF features. To support video on demand and telephony, DOCSIS followed an isochronous path similar to Wi-Max, Wi-Fi Scheduled Access, and LTE.

Asynchrony and Isochrony are correctly viewed as service requirements of network applications: Person-to-person applications such as telephony and video-conferencing are inherently isochronous applications that produce long-lasting streams of datagrams spaced at regular intervals. Some human-to-machine applications, such as video streaming, are also isochronous, while more interactive ones, such as web surfing, are more asynchronous. Machine-to-machine communication spans a wide range of temporal requirements and can’t be neatly categorized.

Wired LANs can accommodate a wide range of applications with relatively simple service capabilities because of their special properties: The user populations on many LANs are extremely small (on home networks especially), packet loss due to noise is negligible, end-to-end latency is minimal, bandwidth is extremely cheap and abundant, and users have dedicated channels to a shared, intelligent device (the network switch) that has the ability to exercise considerable control over packet streams from a privileged viewpoint: the Ethernet switch can see the bandwidth requirements of all active applications and manage them after the fact, and cable’s DOCSIS has a unique mechanism for making “unsolicited grants” of bandwidth to streams that exhibit regular patterns. None of these things is true of the typical RF system, however, so engineering for RF networks employs isochrony in order to maximize system utility.40

DOCSIS Implementation. DOCSIS suffers from the burden of retaining backward compatibility with earlier usage models for the cable network while adding new features. DOCSIS was designed to coexist with the digital cable MPEG Transport (MPT) system that is still very deeply designed into the cable system. The overriding design assumption in cable networks is still MPEG oriented with limited convergence between the IP-based Packet-Streaming Protocol (PSP) and MPT.

In its original form, the DOCSIS switch, known as the Modular Cable Modem Termination System (M-CMTS), encapsulates IP into MPT packets, time stamps them twice, and moves them to the user’s cable modem as MPT, where they are de-encapsulated and reconstructed into native IP packets. This round-about path adds cost and decreases throughput by creating a dependency on specialized MPT equipment for generic IP datagrams, but it preserves compatibility and makes for happy customers.

The overall scenario reflects the history of the cable system and neither malice nor engineering ineptitude. We can find similar inefficiencies in mobile cellular networks that were designed before the onset of IP hegemony and in Wi-Fi. Network technology is the product of a long design cycle and accommodates major paradigm shifts only incrementally, as previously mentioned.

As the current role of DOCSIS is to carry IP datagrams from both the Internet and proprietary video servers to the end user, some analysts have long proposed bypassing the traditional DOCSIS M-CMTS with a more direct path to the Internet through an Integrated CMTS (I-CMTS).41 This has happened in DOCSIS 3.1, the most recent standard, with an alternate path.
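The "unsolicited grant" idea can be pictured as a simple frame scheduler: flows that have declared a regular pattern receive reserved transmission opportunities in every frame, and everything else contends for what remains. The sketch below is a drastic editorial simplification of real DOCSIS upstream scheduling; all names and numbers are invented.

```python
# Editorial sketch of unsolicited-grant-style scheduling: flows with a
# declared regular interval get reserved slots in every scheduling frame;
# everything else shares the remaining capacity.

FRAME_SLOTS = 100  # transmission opportunities per scheduling frame

def build_frame(isochronous_flows, best_effort_queue):
    """Return a list of slot assignments for one upstream frame."""
    schedule = []
    for flow_id, slots_needed in isochronous_flows:
        # Unsolicited grant: reserved without a per-frame request.
        schedule.extend([flow_id] * slots_needed)
    remaining = FRAME_SLOTS - len(schedule)
    # Best-effort flows share whatever is left, round-robin.
    for i in range(remaining):
        if not best_effort_queue:
            break
        schedule.append(best_effort_queue[i % len(best_effort_queue)])
    return schedule

frame = build_frame(
    isochronous_flows=[("voice-call-1", 10), ("video-conf-2", 30)],
    best_effort_queue=["web-user-a", "backup-b"],
)
print(len(frame), frame[:12])
```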


The development of DOCSIS has featured incremental increases in speed:

• DOCSIS 1.0: 36 Mbps
• DOCSIS 2.0: 40 Mbps
• DOCSIS 3.0: 160–1,080 Mbps
• DOCSIS 3.1: 10 Gbps

Speeds above 10 Gbps will require the full replacement of coaxial cable with Fiber to the Home (FTTH).

In summary, the addition of video on demand and telephone and data service to cable TV networks required a massive reconfiguration of the cable network. This redesign was essentially the equivalent of transforming a propeller-driven airplane into a jet while it was in flight. The new services are continuing to grow in importance and bandwidth, while linear TV is declining. The DOCSIS data service continues to offer QoS capabilities compatible with IETF standards.

Mobile Broadband. Cellular networks began as a Bell Labs idea for car phone service in the 1940s that was constrained by the FCC’s refusal to make significant spectrum assignments; the agency’s experts saw no value in mobile telephony during the 1950s and 1960s. In the late 1960s, AT&T Bell Labs developed the 1G Advanced Mobile Phone System (AMPS), which became the first US standard for cellular telephony. Motorola’s pioneering work with cell phones in the late 1960s and early 1970s led to the creation of handsets for the AT&T AMPS network.42

While AMPS was an analog system, it was able to provide limited data-transfer service by acting as a modem. This system was known as Cellular Digital Packet Data (CDPD), and it provided a maximum speed of 19.2 kilobits per second (Kbps), a respectable speed for a modem in the early 1990s. CDPD development took place in parallel with the initial work on Wi-Fi, and the two fields had cross-pollination.

The next step forward, the Global System for Mobile communication (GSM) 2G cellular service, was an all-digital system, and its initial data option was a QoS-controlled, circuit-switched offering meant to provide fax, Short Message System (SMS) text messaging, and general data service. This implementation was followed by packet-mode service in the mid-1990s. SMS was created by accident, as it used a portion of the network initially reserved for network-management messages. After it was discovered that the network rarely used it for its intended purpose, the facility was made available for consumer use.

2G GSM’s General Packet Radio Service (GPRS) was the first general-purpose packet data service offered over cellular. It provided download speeds as high as 80 Kbps and upload speeds of 20 Kbps, which made it a considerable advance from CDPD. While GPRS operated in packet mode, its implementation simply allocated portions of the network’s time division multiple access (TDMA) circuits on demand. It was thus a hybrid of statistical multiplexing over a QoS-controlled network, just as commercial data service over T1 lines was. In other words, QoS-controlled isochronous networks are capable of providing asynchronous access while purely asynchronous networks can provide only limited isochronous services.

While GPRS provided limited access to the Internet, speeds were too low to make smartphones a very interesting proposition. Subsequently, the 3G upgrade to both CDMA and GSM networks provided considerably higher data rates, well beyond the speeds associated with dial-up modems. Before 3G was completed, hybrid 2.5G systems such as Enhanced Data Rates for GSM Evolution (EDGE or EGPRS) and CDMA2000 were placed into service. EDGE provided data rates up to 240 Kbps (60 Kbps per slot) on the download side, and CDMA2000 1X went up to 153 Kbps. 2.5G data service was implemented over isochronous circuits compatible with 2G norms; it was regarded as a bolt-on upgrade.

3G was another generational upgrade that did not fundamentally redesign the cellular circuit switched network, even though it did require wholesale replacement of existing switches and handsets. The distinction between 2G and 3G is less clear than the analog/digital distinction between 1G and 2G.

3G networks exist in two forms: an evolutionary mode that uses the same spectrum as 2G and a revolutionary form that requires additional spectrum.43 Evolutionary 3G was the same as 2.5G: CDMA2000 and EDGE.


But revolutionary 3G (or “real 3G”) utilized new standards for wider (5 MHz) channels: CDMA2000 1xEV-DO Release 0 and UMTS (also known as “TD-SCDMA/UTRA TDD”).44 These new standards and spectrum assignments supported data rates from 2.5 to 3 Mbps with wide channels under ideal conditions.

CDMA2000 in particular began the transition from circuit switching to packet switching at the network’s back-end, where the wireless network hands packets off to wired backhaul. The notion of bearers defined for particular service types emerged in 3G and became extremely important in 4G. For our purposes, a bearer is a QoS class, effectively a replacement for the QoS circuit switching features dropped as the cellular network became purely packet oriented in the transition from 3G to LTE.

Current cellular-data networks follow the LTE standard formerly known as Long Term Evolution (LTE). LTE is either 4G or 3.9G, depending on marketing whim. It is significant for two major reasons: LTE converges the previously separate CDMA and GSM standards, and it fully replaces circuit switching with IP as an architectural design choice. As LTE includes support for wider channels, it can scale up to 1 Gbps with sufficient spectrum, or 40–100 Mbps in more realistic configurations.

QoS plays a crucial and central role in modern LTE networks. Circuit-switched QoS is replaced with bearers in LTE. Bearers are best thought of as bundles of IETF Integrated Service (IntServ) parameters. IntServ is a very rich system ranging from default, nonguaranteed services to guaranteed delivery of specified volumes of data with specified delay and loss rates. Such systems are too complex to be practical in the whole, hence 3GPP has reduced the complexity to a manageable level by predefining nine bearer classes. Each LTE connection begins on a default bearer without a guaranteed bit rate (GBR) and graduates from there depending on application needs. The precise levels of delay and loss for these nine bearers are listed in figure 5.

The acronym QCI stands for “QoS Class Identifier.” Note the similarity between 3GPP QCIs and the IEEE 802.1p Priority Code Points shown in table 4. In both schemes, the highest priority is reserved for network control or signaling, the next highest is for voice, and the next highest after voice is effectively for gaming. Video calls get higher priority than video streaming, and common Internet applications contend for resources at the lowest or second-lowest priority. Hence, LTE mobile broadband relies on IETF QoS standards created 20 years ago.

Wi-Fi. Initial design work for Wi-Fi began in 1989, at roughly the same time that CDPD was designed. The first version of the Wi-Fi standards, IEEE 802.11 (with no qualifier), was not approved until 2007, because so many problems had to be solved. But the overall system architecture was in place by 1992: it called for devices to be connected to a central access point, similar in function to the StarLAN hub, which connected to devices over shared RF spectrum or pervasive infrared light.

The access point connected to a wired Ethernet backhaul, and the system was initially known as wireless Ethernet. Veterans of StarLAN participated in the early definition of Wi-Fi, which is evident in the similarities of the StarLAN and Wi-Fi designs.

IEEE 802.11 was a 1–2 Mbps system with access to 75 MHz of spectrum in most countries, a considerably more generous allotment than the initial allocations for individual cellular networks. Each Wi-Fi radio channel was 25 MHz wide at a time in which cellular channels spanned a mere 1.25 MHz. Wi-Fi was specified for coverage areas of hundreds of feet, while cellular covered miles. It is unsurprising that Wi-Fi is a faster system in most settings.

The most recent iteration of Wi-Fi, 802.11ac, supports data rates up to 1.3 Gbps. The increase from 1–2 Mbps to 1.3 Gbps is highly dependent on wider radio channels, but some significant engineering enhancements have occurred along the way, such as the frame-aggregation feature added in 802.11n and more efficient means of modulation and bit coding, such as Orthogonal Frequency Division Multiplexing.

Initial Wi-Fi prototypes were developed on Ethernet controller chips such as the Intel 82593. These controller chips allowed for programmable inter-frame spacing, which inspired a QoS mechanism for Wi-Fi that was formally standardized in IEEE 802.11e in 2005 (after 10 years of work).


Figure 5. Taxonomy of LTE Bearers

Source: Adnan Basir, “Quality of Service (QoS) in LTE,” 3GPP Long Term Evolution (LTE), January 31, 2013, http://4g-lte-world.blogspot. com/2013/01/quality-of-service-qos-in-lte.html.

Table 4. Priority, Delay, and Loss for LTE Bearers

Source: Adnan Basir, “Quality of Service (QoS) in LTE,” 3GPP Long Term Evolution (LTE), January 31, 2013, http://4g-lte-world.blogspot. com/2013/01/quality-of-service-qos-in-lte.html.
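The bearer idea can be restated in code: each QoS Class Identifier (QCI) bundles a resource type, priority, delay budget, and loss target, and every connection starts on the default bearer. The sketch below uses a few values commonly cited for 3GPP's standardized QCIs; the authoritative list is in 3GPP TS 23.203, and the mapping function is purely illustrative.

```python
# Sketch of the LTE bearer idea: each QCI bundles a resource type, priority,
# delay budget, and loss target. The three rows shown use commonly cited
# values; consult 3GPP TS 23.203 for the authoritative table.
from dataclasses import dataclass

@dataclass(frozen=True)
class Bearer:
    qci: int
    guaranteed_bit_rate: bool
    priority: int          # lower number = higher priority
    delay_budget_ms: int
    loss_rate: float

QCIS = {
    1: Bearer(1, True,  2, 100, 1e-2),   # conversational voice
    5: Bearer(5, False, 1, 100, 1e-6),   # IMS signaling
    9: Bearer(9, False, 9, 300, 1e-6),   # default bearer (ordinary Internet use)
}

def choose_bearer(app: str) -> Bearer:
    """Toy mapping from application type to bearer; unknown apps stay on QCI 9."""
    return {"voice": QCIS[1], "signaling": QCIS[5]}.get(app, QCIS[9])

print(choose_bearer("voice"))
print(choose_bearer("web"))
```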


Table 5. Mapping 802.1D to 802.11e Access Classes

Priority | UP (Same as 802.1D User Priority) | 802.1D Designation | AC | Designation (Informative)
Lowest | 1 | BK | AC_BK | Background
- | 2 | - | AC_BK | Background
- | 0 | BE | AC_BE | Best Effort
- | 3 | EE | AC_BE | Best Effort
- | 4 | CL | AC_VI | Video
- | 5 | VI | AC_VI | Video
- | 6 | VO | AC_VO | Voice
Highest | 7 | NC | AC_VO | Voice

Source: IEEE Standards Association, “IEEE Standard 802.11e-2005,” 2005.
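Table 5's collapse of eight 802.1D user priorities into four Wi-Fi access categories is, in effect, a small lookup table; the sketch below simply transcribes it.

```python
# The collapse of 802.1D's eight user priorities into Wi-Fi's four access
# categories, transcribed from table 5 above.
UP_TO_ACCESS_CATEGORY = {
    1: "AC_BK",  # Background
    2: "AC_BK",  # Background
    0: "AC_BE",  # Best Effort (default)
    3: "AC_BE",  # Best Effort
    4: "AC_VI",  # Video
    5: "AC_VI",  # Video
    6: "AC_VO",  # Voice
    7: "AC_VO",  # Voice
}

def wifi_access_category(user_priority: int) -> str:
    """Map an 802.1D user priority (0-7) to its 802.11e access category."""
    return UP_TO_ACCESS_CATEGORY[user_priority]

assert wifi_access_category(5) == "AC_VI"
assert wifi_access_category(0) == "AC_BE"
```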

802.11e supports two QoS modes: HCF Controlled Channel Access (HCCA) is based on IntServ and strongly resembles LTE-Unlicensed; the more common Enhanced Distributed Channel Access (EDCA) is based on DiffServ and simply provides priority access for network control, voice, and video over common Internet use. EDCA provides four service levels, which Wi-Fi harmonizes with the IEEE 802.1p priority levels by collapsing 802.1p’s eight levels into Wi-Fi’s four (table 5).

EDCA includes an admission control feature in which stations desiring access to higher-priority levels must ask the access point for permission. Stations send an Add Transmission Specification (ADDTS) request to access points, which the access point may approve, deny, or ignore. The Transmission Specification (TSPEC) informs the access point of desired characteristics per the following format (table 6). In the table, S means specified, X means unspecified, and DC means “do not care.”

Consequently, the notion that Wi-Fi is a permissionless system is only partially true. Access to the default priority is permissionless, as it is in LTE, but access to higher priorities for bidirectional flows requires explicit approval by the access point. Wi-Fi QoS levels are important to Wi-Fi ISPs (WISP).

Internet Architecture and QoS. The original specification for IP lacked the robust QoS mechanisms later specified for IntServ, DiffServ, and MPLS.45 The Type of Service capability simply passed the preferences of its higher-layer user to the data-link layer without prejudice.46

The most widely used data-link layer for the original implementation of IP was ARPANET, a system that honored two UP levels—one suitable for facilitating transfers and the other for interrupting them. Similar priority capabilities existed in the other data-link layers supported by IP: PRNET and SATNET.47

IntServ was added to the Internet canon in 1994 and is widely used in LTE. DiffServ was added in 1998, in part to overcome IntServ’s deployment complexities. DiffServ provides incremental benefits from incremental deployments. It is widely used inside management domains, for Internet Protocol Television (IPTV) and VoIP, and occasionally between them.

MPLS is an IP sublayer that provides traffic engineering and expedited routing by shortcutting the route lookups that are otherwise performed packet by packet. Once an IP/MPLS router has determined the route to a given destination IP address, there is no reason to search the database (currently more than 300,000 entries) again and again for subsequent packets going to the same network. Like DiffServ, MPLS is primarily used within routing domains but could be extended in principle across consenting domains with appropriate fail-safes and security mechanisms.


Table 6. IEEE 802.11e TSPEC

Source: IEEE Standards Association, “IEEE Standard 802.11e-2005,” 2005.

Another aspect of Internet QoS innovation is the development of different forms of interconnection. While the Internet core once consisted of the National Science Foundation Network (NSFNET) backbone and a set of tributaries to it, it now consists of a mesh of commercial networks that exchange traffic between one another according to commercial agreements. These agreements are generally of three kinds: settlement-free peering, paid peering, and transit.


Settlement-free peering agreements, also known as settlement-free interconnection, are agreements to exchange traffic between networks of comparable size, scope, capacity, and traffic mix without charge. Paid peering agreements, which date back to the heyday of America On-Line (AOL), provide direct connection between two networks for a fee based on traffic imbalance, where the network that transmits more than it receives pays.

Transit agreements are subscriber/service-provider agreements between small and large networks where the smaller network pays. Transit agreements typically include Service Level Agreements (SLA), theoretically binding service providers to certain performance parameters and subscribers to volume parameters. The typical SLA is volume based at the peak hour of usage, the so-called 95th percentile.

Pathways through the Internet mesh have always been less uniform and more controlled than is commonly believed. They are distinguished from one another by both SLAs and peering agreements. SLAs are carried out in practice by provisioning links relative to usage: traffic with stringent latency and packet-loss requirements (as specified by contract) is routed through links that are more lightly loaded.

In some cases, high-QoS core pathways are implemented as overlay networks that guarantee QoS by admission control. For example, video-conferencing overlay networks are currently sold by specialized providers such as WebEx, Avaya, and Cisco. In some European internet exchange points (IXP), inter-domain QoS is provided by carriers who honor each other's QoS requirements by recognizing agreed-upon DiffServ markings or BGP Community Attributes.
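The 95th-percentile convention mentioned in the transit paragraph above is easy to make concrete. The sketch below is a minimal illustration under assumed numbers: five-minute utilization samples, an invented price per Mbps, and a made-up traffic trace; real transit contracts vary in their sampling and billing details.

# Minimal sketch of 95th-percentile transit billing (assumed numbers).
# The provider samples link utilization (Mbps) every five minutes for a month,
# discards the top 5 percent of samples, and bills on the highest remaining one.
import math
import random

random.seed(1)
SAMPLES_PER_MONTH = 30 * 24 * 12        # five-minute samples
PRICE_PER_MBPS = 0.50                   # assumed $/Mbps, illustrative only

# Invented traffic trace: mostly moderate usage, plus one short burst per day.
samples = [random.uniform(80, 200) for _ in range(SAMPLES_PER_MONTH)]
for i in range(0, SAMPLES_PER_MONTH, 288):
    samples[i] = random.uniform(400, 600)

def ninety_fifth_percentile(values):
    ordered = sorted(values)
    index = math.ceil(0.95 * len(ordered)) - 1
    return ordered[index]

billable = ninety_fifth_percentile(samples)
# The 30 daily bursts fall inside the discarded top 5 percent of samples,
# so brief spikes do not raise the bill.
print(f"95th-percentile rate: {billable:.1f} Mbps")
print(f"Monthly transit bill: ${billable * PRICE_PER_MBPS:,.2f}")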
Korea Telecom has built a premium backbone network for the firm's IPTV products and has engaged in negotiations to open it to third-party IPTV services for a fee.

While it was once regarded as a truism that congestion does not occur on the Internet core, the deployment of extremely high-speed edge networks such as Korea Telecom's system makes congestion more evenly distributed than it has been in the past. Congestion can now occur in last-mile and middle-mile networks before or after they connect to congestion-free core networks. Consequently, extensive traffic management is as important as ever. The uptake of high-bandwidth applications such as 3D Ultra HDTV video streaming, latency-sensitive applications such as immersive video conferencing, and gaming by residential and business users with “fat pipes” in and out of the Internet core creates a need for end-to-end QoS management. It also creates a need for a set of robust and well-integrated protocols from the physical layer to the application layer that are capable of making the Internet's core TCP/UDP/IP protocols work better across a wide range of applications.

Congestion is also quite common at the boundary of directly connected networks with no intermediate backbone. Content delivery networks (CDN) routinely congest ISP networks because they shift traffic according to server load independent of geographic efficiency at the receiving end. A CDN with excess network capacity but constrained server capacity may serve an end user in New York from a server in California, leaving the ISP to carry packets over long distances. In such instances, ISP users across the country may experience delay induced by congestion.49

QoS has applications beyond congestion mitigation, such as service definition. The network-engineering community would not have invested so much time and effort in it if it were not incredibly valuable.

Summary. Data-link layer standards Ethernet, cable modem, mobile broadband, and Wi-Fi incorporate QoS mechanisms that can be operationalized at the network layer by the IETF QoS mechanisms IntServ and DiffServ. In one sense, this is remarkable because of the diverse origins and purposes of the data-link layer standards: some are wired and others are wireless; some were created in public fora and others by closed membership groups; some rely on private resources and others on commons; but all share compatibility with IETF standards that were created in advance of or in concert with data-link layer QoS standards.
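One concrete way an application "operationalizes" network-layer QoS is by setting the DiffServ code point on its own packets, which switches and access points can then map to a data-link priority such as an 802.1p level or a Wi-Fi access class. The sketch below marks a UDP socket with the Expedited Forwarding code point; the socket option is the standard way to set the IP ToS/DSCP field on Unix-like systems, but the choice of code point, address, and port are illustrative assumptions.

# Minimal sketch: mark a UDP voice flow with the DiffServ Expedited Forwarding
# (EF) code point so downstream equipment can give it elevated treatment.
# The DSCP value, address, and port are illustrative; deployments differ.
import socket

DSCP_EF = 46                 # Expedited Forwarding, commonly used for voice
TOS_VALUE = DSCP_EF << 2     # DSCP occupies the upper six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Any datagram sent on this socket now carries DSCP 46 in its IP header;
# a DiffServ-aware switch may queue it ahead of best-effort traffic.
sock.sendto(b"voice frame", ("192.0.2.10", 5004))
sock.close()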


Despite these differences, designers came to substantially the same conclusions about how to organize network resources to meet application needs. This is no accident: both the Internet and data-link taxonomies reflect the conclusion that efficient network design in a world with diverse applications requires some form of QoS.

In a network of diverse applications, “first come, first served” is not the appropriate rule; “the greatest good for the greatest number” is superior. This insight is the fundamental basis of network quality of service.

Network Innovation Policy

Despite the intricate interplay of Moore's Law, network technology, and applications, it has become common in regulatory circles to regard networks as hostile to innovation. The FCC's 2015 Open Internet order is a profoundly divisive document that puts a white hat on edge services firms and a black hat on network service providers. It attributes nefarious motives to network service providers by stressing their incentives, both real and imaginary, for raising prices and blocking access to edge services without considering their counterincentives to keep prices low and utility high to attract and retain customers.50

The FCC's order posits a looming monopoly in wired networks without considering the shift that has already taken place from wired to wireless networks.51 We will soon transport more edge data over wireless technologies than wired ones, if we do not already.52 All in all, the Open Internet order displays a deep suspicion of network innovation.

The disparate treatment of networks and applications threatens network convergence. If a variety of applications is going to operate on a common network infrastructure, the network needs to adapt to a changing mix of applications just as applications leverage new and more powerful network services. And just as the relationship of networks and applications in the real world is cooperative, the relationship of regulation to both networks and applications needs to be more even-handed and consistent than it is today.

How We Got Here

Network policy has not always been this way. Selected papers delivered at the Telecommunications Policy Research Conference (TPRC), the leading international networking policy conference, reveal that a strong consensus prevailed in the mid-1990s around the proposition that competition was a better way to discipline broadband markets than regulation. This was not simply a political consensus but the collective view of a very distinguished group of scholars, including academics, regulators, and industry figures.

The 1995 TPRC proceedings recognized the Internet's significance in both technology and policy:

The Internet is of great importance: It represents a new integrated approach to the telecommunications industry that raises fundamental policy problems. Steady sustained technological advances in computers and electronics have caused a shift from traditional analog methods of providing communications to digital techniques with extensive computer intelligence. The Internet was created as a system of interconnected networks of communications lines controlling computers. Formally, the Internet is an “enhanced service under the Federal Communications Commission (FCC) categories and is exempted from regulation.”53

Thus, the TPRC scholars recognized that the challenges the Internet brings to telecom policy arise from its integrated and competitive nature:

First, the Internet is created by the integration of multiple networks provided by independent entities with no overall control other than a standard for interconnection protocols. The Internet represents the fullest expression to date of the unregulated “network of networks” and is widely expected to be a model for future communications. . . .
Second, the Internet concludes the integration of multiple types of service with substantially different technical characteristics onto a single network. The Internet is used to transmit short e-mail messages, graphics, large data files, audio files, and (very slowly) video files. . . .


Third, the Internet and commercial networks connected to it include the integration of the provision of transmission capacity with various degrees of the provision of information. Past telecommunication policies have sharply distinguished between providers of communication capacity and providers of information content. . . . In the Internet, and in the expected communications industry, those dividing lines are blurred as individual companies provide capacity to transmit communications for others and also provide their own content. . . .
Under the [Telecommunications Act of 1996] it is likely that the entire telecommunications industry will move more toward the competitive network of integrated services that are already observed in the Internet.54

The 1996 TPRC proceedings reflected a continuation of the view of the Internet as an integrated, unregulated marketplace for diverse network and content services. One section of the conference examined bilateral payments to support network infrastructure. One paper in this section proposed a “zone-based cost-sharing plan” in which both senders and receivers contribute to the cost of transmission.55 Others proposed alternate charging plans.

A paper by Camp and Riley addressed the impact of the Internet on First Amendment jurisprudence, which has long recognized four distinct media types (broadcaster, publisher, distributor, and common carrier). The paper argues that network newsgroups combine functions of all four media types and therefore require a novel approach to policy and regulation.56

The notion that the Internet requires a different policy response than the single-purpose, geographic telecom monopolies of the past continued through the early 2000s, but the theme hinted at in the 1996 TPRC—that the Internet required a new policy framework—gathered strength. But even then, policy choices ranged from radical deregulation of the Internet to a preference for light regulation that recognized the Internet's unique character. The idea of subjecting the Internet to a modified form of telephone network or cable network regulation was not seriously considered in the 1990s.

At least one paper in the 1996 conference forecasted the regulatory arbitrage to come. A group of telephone companies petitioned the FCC to regulate Internet telephony in the same terms as traditional telephony, so this threat to Internet freedom in the name of the preservation of a legacy industry had to be addressed.57

In the early 2000s, the deregulatory consensus began to fracture. The 2002 TPRC proceedings emphasized institutional adaptation to the Internet and Internet adaptation to institutions. As the introduction to the proceedings points out: “Because technologies are embedded in social systems and are understood in this context, responses to new technologies may be as varied and complex as the social systems that incorporate (or reject) them.”58

Rather than accepting the opportunities the Internet offered in terms of communication and publishing, institutional guardians were prepared to pick and choose, to name winners and losers, and to accept the Internet with conditions. The advocates for an unregulated or lightly regulated Internet still existed, but they were opposed by regulatory hawks who seemed to fear the impact the Internet might have on vested interests threatened by disruption, including those of telecom regulators. Tim Wu devised his network neutrality notion in 2002 against the backdrop of institutional worries about the Internet's social impact.59

While the policy status quo of the mid-1990s was closely aligned with technical developments in the Internet engineering space such as DiffServ and IntServ, net neutrality advocated a retreat to a more primitive Internet that was less powerful, less socially disruptive, and, most especially, less challenging to regulate. The policy retreat was motivated by a host of factors, which would require an additional paper to demonstrate fully. The following sections touch on the highlights of the retreat.


Excusing Policy Inequity

The belief that networks are less innovative than applications drives a divide-and-conquer strategy, which gives regulators extraordinary control over the business models and practices of network service providers in return for lax oversight of similar models and practices by application providers. One example of such inconsistent regulatory approaches is differential privacy regulation, in which carriers' harvesting of information about users' web habits is severely restricted but similar practices by edge providers such as Google are not. This view is severely biased.

Networks and applications are different, of course, just as there are differences within the networks and applications categories. Wired and mobile networks are significantly different and so are content-oriented applications such as video streaming and communication-oriented applications such as video conferencing. Some applications provide services to other applications through Application Programming Interfaces (APIs), and other applications use these APIs to serve end users. Facebook and Google Maps are both end-user applications and API platforms for other end-user applications, such as Angry Birds and Zillow.

But the dynamics of ingenuity, risk, and innovation are largely the same inside networks as they are in the application platforms and services that rely on networks. As previously noted, the technology base is built in substantially the same way in both spheres: technology advances in one sector drive advances in all other sectors.

The divide-and-conquer strategy has harmful implications for innovation. If networks and applications are fundamentally different, it makes sense to create legal firewalls between them, such as the open Internet regulations that are premised on the fear of tacit collusion (or at least parallel behavior) among networks. But if networks and applications are fundamentally complementary and similar, each can influence the other to develop in a more cooperative and constructive fashion as long as they are not unreasonably restricted.
Upon discovering an orchid with a foot-long flower in 1862, biologist Charles Darwin predicted a gigantic moth would be found with the ability to pollinate it; the prediction was finally confirmed in 1992. Just as moths and orchids coevolve, so do networks and applications.60

But network/application coevolution is not one-dimensional. Networks become not only faster but also more reliable, pervasive, and adaptable to particular application needs such as low cost and controlled latency. The coevolution perspective allows us to view the development of new network-service models as potentially beneficial, while the firewalled perspective sees them as only harmful.

Analysis of the operation of Moore's Law provides policy insight. Because Moore's Law drives innovation in networks and applications, these policy insights are equally valid in both spheres.

Gordon Moore decomposed the magic of Moore's Law into three elements: “decreasing component size, increasing chip area, and ‘device cleverness,’ which referred to how much engineers could reduce the unused area between transistors.” The analogies in networks are increasing the number of bits the network can carry each second (decreasing bit size), increasing wire capacity by channel bonding or fiber upgrades, and harvesting bandwidth that would otherwise go to waste.

By comparison with the circuit-switched telephone network, the packet-switched Internet excels at cleverness. Instead of allowing unused capacity to go to waste, it allows each application to use the network's entire capacity each time it transmits a unit of information, known as a packet.
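The statistical-multiplexing "cleverness" described in the preceding paragraph can be shown with simple arithmetic. The link speed, user count, and burst size below are invented for illustration; the point is only that a packet network lets a momentarily idle shared link be used at full speed, while a circuit network confines each user to a fixed slice.

# Back-of-the-envelope comparison of circuit switching and packet switching
# on a shared link. All numbers are invented for illustration.
LINK_MBPS = 100          # shared access link
USERS = 10               # households sharing it
BURST_MB = 50            # one burst of data (say, a page full of images), megabytes

burst_megabits = BURST_MB * 8

# Circuit model: each user is pinned to a fixed 1/N slice whether or not
# the other users are active.
circuit_rate = LINK_MBPS / USERS
circuit_seconds = burst_megabits / circuit_rate

# Packet model: when the other users happen to be idle, one user's packets
# can occupy the entire link for the duration of the burst.
packet_rate = LINK_MBPS
packet_seconds = burst_megabits / packet_rate

print(f"Circuit-switched transfer time: {circuit_seconds:.0f} s at {circuit_rate:.0f} Mbps")
print(f"Packet-switched transfer time:  {packet_seconds:.0f} s at {packet_rate:.0f} Mbps")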


This is an important feature that enables Internet technology to serve a wide variety of applications better than a circuit-switched network can. The telephone network is better in some respects than the Internet for carrying telephone calls, but the Internet is better at everything else—from email to web browsing to video streaming. This property motivated the development of IETF QoS. As the initial design document for DiffServ suggests, differential pricing may be necessary to fully implement QoS services: “Service differentiation is desired to accommodate heterogeneous application requirements and user expectations, and to permit differentiated pricing of Internet service.”61 It is difficult to see how the goals of DiffServ can be achieved under current FCC regulations, however.

The application analogy to “decreased component size” is the windfall to computation requirements that comes from faster processors. “Increased chip area” corresponds to large memory, and “cleverness” comes about from the ability of applications to marshal extraordinary computation capabilities from internal and external resources. Policy is wisely unconcerned about these application features.

The ultimate rationale for the FCC's ban on QoS in the Open Internet order is the agency's fear that a QoS market would inherently advantage networks and disadvantage edge services, especially small ones. The agency has no fear of diverse applications, but it refuses to tolerate Internet standards and industry practices that would monetize QoS. To understand these fears, it is necessary to examine arguments for differential regulation of networks and applications generally.

Arguments for Differential Regulation

A number of arguments have been made to justify heavy-handed regulation of the networking sector and light-touch regulation of the applications sector. This section is a high-level survey of these claims.

The Monopoly Argument. Proponents of restricting the ability of ISPs to allocate resources more efficiently generally argue that the near-monopoly condition that prevails in wired residential broadband-service markets justifies aggressive regulatory intervention. Law professor Susan Crawford claims the broadband market is effectively a monopoly:

It may be time for yet another label to enter the lists: “the looming cable monopoly.” It is gaining strength, and it is not terribly interested in the future of the Internet. This is the central crisis of our communications era.62

Indeed, cable companies control 60–65 percent of the market for residential broadband services (if wireless is excluded), while traditional telecom firms and new entrants control the other 35–40 percent.63 The FCC exaggerated this market-share disparity by cleverly redefining “broadband” to mean speeds at or above the 25 Mbps level. This move converted a 60/40 market into one in which 61 percent of consumers have no choices or one choice for true broadband service.64

However, substantial market shares are nothing special in the Internet ecosystem. The 60/40 split between cable companies and telecoms is more equal than the market-share division for smartphones, in which 83 percent use Google's Android, 14 percent use Apple's iOS, and the remainder are split among several options.65 Still, no one is talking of a dangerous smartphone monopoly.

Similarly, even if the FCC's new definition is valid and only 39 percent of Americans have broadband choice, the broadband market is substantially less concentrated than the market for desktop operating systems: Microsoft has an 88 percent share, and the nearest competitor, Apple, has only 9 percent.66 This is not a perfect comparison, because every American with the money could buy an Apple computer if she wanted; but the fact is she does not.

In any event, it takes more than a temporary market-share disparity to make a dangerous monopoly that harms consumers and innovation. The reason is that in dynamic and highly innovative markets, market share does not automatically translate to market power. Take Microsoft, for example. Microsoft dominates the market for common office applications, but it has not been able to leverage its desktop dominance into control of browsers or mobile operating systems.
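One way to sharpen the comparison of the market-share figures above is to compute a Herfindahl-Hirschman index for each market. The sketch below uses the shares cited in the text and fills in the small residual shares with assumed values, so the exact numbers are illustrative rather than authoritative.

# Herfindahl-Hirschman index (HHI): sum of squared market shares in percent.
# Shares marked "assumed" are residuals filled in for illustration.
def hhi(shares):
    return sum(s * s for s in shares)

markets = {
    "Residential broadband (cable vs. telecom)": [62.5, 37.5],  # midpoints of 60-65 / 35-40
    "Smartphone OS": [83, 14, 3],                                # 3 percent "other" assumed
    "Desktop OS": [88, 9, 3],                                    # 3 percent "other" assumed
}

for name, shares in markets.items():
    print(f"{name}: HHI = {hhi(shares):,.0f}")
# The two-firm broadband split yields a lower HHI than either the smartphone
# or the desktop OS market, which is the comparison the text is drawing.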


In reality, Apple is gaining desktop and laptop share, and Microsoft is opening its operating systems in unprecedented ways:

Microsoft's software empire rests on Windows, the computer operating system that runs so many of the world's desktop PCs, laptops, phones, and servers. Along with the Office franchise, it generates the majority of the company's revenues. But one day, the company could “open source” the code that underpins the OS—giving it away for free. So says Mark Russinovich, one of the company's top engineers.67

Abusive monopolists do not generally give products away for free. Network services firms utilize this strategy as well: Google offers free use of its networks at 5 Mbps, and T-Mobile provides free use of its mobile broadband service for up to 200 megabytes of data per month.68 These strategies maintain market share at the expense of market power, because they do not increase profit.

Technologies and platforms are fiercely competitive. Cable's high share of the 25 Mbps networking segment has not prevented all major telecoms (AT&T, Verizon, and CenturyLink) from upgrading DSL networks to speeds four times faster than the old ones or extending fiber service at speeds 40 or 50 times faster than the FCC's 25 Mbps benchmark. Nor is cable standing still: Comcast has announced Atlanta will soon have the option of purchasing 2 Gbps residential broadband services, the fastest in the world.69

The wired broadband market also faces potential competition from advanced wireless networks that fall just short of the FCC's arbitrary standard today but that are certain to exceed it in the near future if the FCC permits operators access to the necessary spectrum. Many groups globally are actually trending toward the exclusive use of mobile networks, and 15 percent of Americans today access the Internet predominately from mobile networks at home.70

While it is certainly true that the small screens on smartphones do not provide convenient access to the whole web, most can serve as mobile hot spots to connect full-size computers. Data limits prevent hot spots from serving as portals to streaming video services, but the web has much more than TV reruns, and data limits are becoming more expansive.71 In the near future, we might very well see wireless networks challenge the dominance of today's wired networks at very high data rates.

So, as much as regulation advocates love to invoke the “cable monopoly” narrative, it is neither true in the present nor likely to be true in the future.

The Falling-Behind Argument. ISP critics also argue that US broadband service is falling behind the international standard for speed and quality. This argument does not square with regulatory tactics that not only discourage investment but also fail to encourage competition; its appeal seems to be a general indictment of the traditional light-touch regulatory consensus.

The falling-behind argument also does not square with the facts. As I pointed out in a previous paper, broadband service is faster and more heavily used in the United States than in comparable nations.72 Bret Swanson, Christopher Yoo, Roger Entner, Roslyn Layton, and Michael Horney have made similar observations, because the data is unequivocal.73

Some, including the New America Foundation's Open Technology Institute (OTI), have chosen to look at very different facts, however. Its Cost of Connectivity report series ignores national data on actual speeds and prices in favor of advertising claims by small ISPs who serve limited portions of large cities. These data lead OTI to conclude that Americans are getting a raw deal: “Overall, the data that we have collected in the past three years demonstrates [sic] that the majority of U.S. cities surveyed lag behind their international peers, paying more money for slower Internet access.”74

This assertion is transparently false. Ookla's Net Index has consistently reported that average download speeds in the US are higher than the averages for the G8, OECD, European Union, and APEC (table 7).75

Because broad surveys do not support the falling-behind claim, some advocates of heavy regulation have turned to price or adoption figures to support their claim. As I discussed in G7 Broadband Dynamics, the quest for gloom among these other datasets also fails.76


Table 7. Net Index from Ookla Download Speeds, July 14, 2015

Region            Average Download Speed, Mbps
United States     37.4
G8                33.4
European Union    31.8
OECD              31.4
APEC              27.6

Source: Net Index from Ookla.

The Incentives Argument. As previously noted, the FCC offers a very general argument to the effect that network service providers have incentives to abuse customers and edge providers, but this argument fails in most cases because it ignores the effects of competition and emerging technologies—and because there is simply no evidence of such behavior occurring in the real world. It is true but trivial that competitive firms have the incentive, and in the abstract may also have the ability, to extract rents from rivals and partners alike. The question is whether they are constrained by competitive forces from doing so. All the evidence indicates the answer to that question is “yes.”

An Example of Differential Regulation

The FCC's Open Internet order bans paid prioritization, a means by which real-time applications can reliably work around the congestion caused by other applications on broadband networks. Regardless of the capacity (or speed) of broadband networks, moments of congestion in the last mile and middle mile are utterly unavoidable.

It is not obvious to policy analysts why this should be the case because many assume that adding capacity makes congestion permanently disappear: “The net neutrality debate is technically a choice about how to respond to congestion and packet loss. One solution is to increase capacity in the network to accommodate an increase in traffic flow.”77

But this is a misunderstanding of congestion, which can occur moment to moment even on networks that maintain low average levels of packet loss and delay. One reason for momentary congestion is the bandwidth-seeking behavior of the Internet's TCP protocol, but there are others.
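TCP's bandwidth-seeking behavior can be sketched with a toy additive-increase, multiplicative-decrease loop. Every parameter below (link capacity, buffer size, window increments) is invented to make the pattern visible: the sender keeps probing for more bandwidth until the bottleneck buffer overflows, backs off, and probes again, so brief congestion episodes recur even on a link whose average load looks healthy.

# Toy AIMD (additive increase, multiplicative decrease) sender filling a
# bottleneck link. Parameters are invented; the point is the sawtooth pattern
# of repeated, momentary queue overflows.
LINK_CAPACITY = 100      # packets the link can drain per round trip
QUEUE_LIMIT = 50         # bottleneck buffer, in packets

congestion_window = 10   # packets sent per round trip
queue_depth = 0
for rtt in range(1, 41):
    queue_depth = max(0, queue_depth + congestion_window - LINK_CAPACITY)
    if queue_depth > QUEUE_LIMIT:             # buffer overflows: packet loss
        print(f"RTT {rtt:2d}: momentary congestion at window {congestion_window}")
        congestion_window //= 2               # multiplicative decrease
        queue_depth = QUEUE_LIMIT
    else:
        congestion_window += 5                # additive increase (probe for bandwidth)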

Expert analysis by the Broadband Internet Technical Advisory Group has shown that video streaming services typically deliver information to networks in clumps, where packets are delivered back-to-back for 10 seconds or so.78 Such periods of activity allow servers to optimize disk access, but they produce moments of congestion for networks. Following packet clumps, video servers are silent for a period that can be longer than the clump.

Each clump of packets is stored in memory on the end-user device—a PC, Xbox, Amazon Fire TV, or TiVo—until it is needed. Video rendering, the process of converting network packets to video images, is a strictly time-controlled process. Rendering devices typically display a sequence of still pictures 30 or 60 times a second to create the illusion of motion, no more and no less. These systems work well as long as they have the next picture on hand while displaying the current one. While having more pictures on hand provides an insurance margin, the user does not see the difference.

Clumping makes hundreds of pictures available to the rendering device before they are actually needed, and it does this at the expense of other activities taking place on the network connection. A Skype video call, for example, also relies on the rendering of pictures in a series, but video-call pictures are not transmitted in clumps for a very good reason: video-call pictures literally do not exist until a fraction of a second before they are transmitted.

Movies are stored on servers all around the web, but video-call pictures are created by a camera and transmitted as soon as they are captured. Video-call pictures need to synchronize with the sounds captured by the microphone. They also need to be received by the other party within one-tenth to one-fifth of a second of when they are created.
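The contrast between the two kinds of video can be made concrete with a short buffering calculation. The frame rate, bit rates, clump length, and latency budget below are assumptions chosen to echo the rough figures in the text; the point is that one clump leaves the player with hundreds of frames in hand, while a call frame has only a fraction of a second between capture and display.

# Compare buffered streaming with a real-time video call. All figures are
# assumptions for illustration.
FRAME_RATE = 30              # pictures per second rendered by the device
VIDEO_MBPS = 5               # assumed encoding rate of the movie
LINK_MBPS = 25               # assumed download speed during a clump
CLUMP_SECONDS = 10           # packets arrive back-to-back for about ten seconds
SILENCE_SECONDS = 15         # the server then goes quiet
CALL_DEADLINE_MS = 150       # a call picture must arrive within ~1/10 to 1/5 s

# A ten-second clump at link speed carries far more than ten seconds of video.
seconds_of_video_buffered = CLUMP_SECONDS * LINK_MBPS / VIDEO_MBPS
frames_buffered = int(seconds_of_video_buffered * FRAME_RATE)
survives = seconds_of_video_buffered > SILENCE_SECONDS

print(f"Clump buffers {frames_buffered} frames ({seconds_of_video_buffered:.0f} s of video)")
print(f"Playback survives a {SILENCE_SECONDS} s silent period: {survives}")

# A call frame does not exist until it is captured, so there is nothing to
# buffer ahead of time; it simply has to cross the network within its budget.
print(f"Video-call frame budget: {CALL_DEADLINE_MS} ms from capture to display")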


When a household's broadband connection carries a video stream and a video call at the same time, the video stream inevitably degrades the video call. This problem can be overcome by the ISP without degrading the video stream, provided the ISP scans the packets that make up the stream and the call and reorders them appropriately. In other words, the video call should have higher priority than the video stream. Network devices have the power to do this today, because they include circuits that can scan and reorder packets in real time.
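The reordering described above is, at bottom, priority queuing. The sketch below is a minimal strict-priority scheduler with invented packet labels: when call and stream packets are waiting at the same moment, the call packets go out first, and the stream packets still get the rest of the link.

# Minimal strict-priority scheduler: latency-sensitive call packets are served
# before bulk streaming packets whenever both are queued. Labels are invented.
from collections import deque

queues = {
    "video_call": deque(),      # high priority
    "video_stream": deque(),    # low priority
}

def enqueue(flow: str, packet: str):
    queues[flow].append(packet)

def dequeue():
    """Send the next packet, preferring the call queue when it is non-empty."""
    for flow in ("video_call", "video_stream"):
        if queues[flow]:
            return flow, queues[flow].popleft()
    return None

# Packets from both flows arrive interleaved during a moment of congestion.
for i in range(3):
    enqueue("video_stream", f"stream-{i}")
    enqueue("video_call", f"call-{i}")

while (item := dequeue()) is not None:
    print(*item)    # all call packets leave before any stream packet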
Recognizing and prioritizing activities also takes place within end-user computer systems: Microsoft Windows and Apple OS X routinely perform these activities on behalf of the various applications that multitask within our computers. It is not controversial when operating systems allocate resources to applications according to their requirements, but it is tremendously controversial (indeed, it is forbidden under the Open Internet order) when ISPs use the same tools to perform the same tasks within network elements on behalf of network applications. This peculiar disconnect is at the heart of the net neutrality controversy.

The Firewall Model of Internet Regulation

Net neutrality is an example of the firewall model of network regulation; the primary proponents of this theory are Harvard Law School Professor Larry Lessig and his protégés and followers. Lessig's seminal statement of the ideal state of affairs was spelled out in a now-famous passage in his first book on the laws of cyberspace:

Like a daydreaming postal worker, the network simply moves the data and leaves interpretation of the data to the applications at either end. This minimalism in design is intentional. It reflects both a political decision about disabling control and a technological decision about the optimal network design.79

Hence, Lessig's aspiration was to enforce a single law on ISPs that would later be named “network neutrality” by his student Tim Wu.80 The major shortcoming of Lessig's law is highlighted in Wu's seminal net neutrality paper:

Proponents of open access have generally overlooked the fact that, to the extent an open access rule inhibits vertical relationships, it can help maintain the Internet's greatest deviation from network neutrality. That deviation is favoritism of data applications, as a class, over latency-sensitive applications involving voice or video. There is also reason to believe that open access alone can be an insufficient remedy for many of the likely instances of network discrimination.81

Wu is saying that treating all packets of information the same harms latency-sensitive applications such as gaming and conferencing and helps volume-intensive applications such as large file transfer, video streaming, and web browsing. Hence, a pure neutrality rule only succeeds in promoting application development and use for the “right kind of applications,” those that move large amounts of data and are insensitive to the delay and loss of individual packets.

But Wu's issue was addressed by the designers of the Internet protocols (Vinton Cerf, Robert Kahn, Jon Postel, Bob Metcalfe, Yogen Dalal, Gérard Le Lann, and Alex McKenzie) who realized application bias should be avoided. This realization was the reason why IP permits applications to specify a desired “Type of Service” in every packet header.82 The original Internet was a collection of three networks—ARPANET, SATNET, and PRNET—each of which permitted applications to identify their desired level of service from the network; this fact is memorialized in the early Internet RFCs.83 The IP specification spelled out eight Type of Service precedence options from most urgent to least, for example.84

As explained much more fully above, the design of IP also includes options for delay, throughput, and reliability. These options were subsequently redefined by the DiffServ protocol but never abandoned.85 Consequently, Lessig's law and Wu's reservations about application bias are indicative of faulty understandings of the Internet architecture.
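The Type of Service byte mentioned above is simple enough to construct by hand. The sketch below packs the RFC 791 fields, a three-bit precedence plus the delay, throughput, and reliability flags, using the eight precedence names listed in the paper's notes; the helper function and the example flows are mine.

# Build an IPv4 Type of Service byte as defined in RFC 791: three precedence
# bits followed by low-delay, high-throughput, and high-reliability flags.
# The helper and the example requests are illustrative.
PRECEDENCE = {                   # the eight levels, most urgent first
    "Network Control": 7,
    "Internetwork Control": 6,
    "CRITIC/ECP": 5,
    "Flash Override": 4,
    "Flash": 3,
    "Immediate": 2,
    "Priority": 1,
    "Routine": 0,
}

def type_of_service(precedence: str, low_delay=False,
                    high_throughput=False, high_reliability=False) -> int:
    tos = PRECEDENCE[precedence] << 5
    tos |= (low_delay << 4) | (high_throughput << 3) | (high_reliability << 2)
    return tos

# A conferencing application might ask for urgent, low-delay handling ...
print(f"voice: 0b{type_of_service('Immediate', low_delay=True):08b}")
# ... while a bulk transfer asks only for throughput at routine precedence.
print(f"bulk:  0b{type_of_service('Routine', high_throughput=True):08b}")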


The Integrated Model of Internet Policy

Lessig's error also reflects a simple but profound misunderstanding of the functions of layers in network architecture and of the means by which the interactions among those layers traditionally have been defined. As depicted in table 1, the task of creating standards for the Internet has traditionally been divided among committees that operate in parallel. Parallel design supports standards modularity, which is generally beneficial to standards upgrades and application diversity.

Modular standards permit subgroups to work on discrete problems in parallel, speeding up the standards development process. The Internet includes one transport layer standard for elastic applications, TCP, and another for real-time applications, RTP.86 It also includes one network layer standard for internetworking with small addresses, IPv4, and another for large addresses, IPv6.

Below the IP layer are many data-link protocols for wired, wireless, and satellite networks. Above the transport layer we find the myriad applications such as the Web, video streaming, and conferencing.

While these layers are distinct, they are meant to communicate with each other. RTP, for example, is a provider of services to applications but a consumer of the services provided at lower layers. Consumers in layered protocol models specify options and request services, and providers do their best to perform accordingly. This is not unlike the way the real postal network behaves: consumers request a service level and pay for it, and the system does its best to deliver the package (without the daydreaming Lessig imagines).

Hence, the integrated model of Internet policy does not concern itself with simply permitting or banning particular services without regard to their utility. Rather, it examines the conditions of sale of network services and the veracity of provider claims, as regulatory bodies do in most markets. This policy model is more consistent with the Internet's actual architecture and history than the “mother-may-I” firewall model.

Conclusions

All technology is dynamic, and none changes faster than information technology. Moore's Law drives advances in networks, devices, and applications, and innovators develop clever combinations of IT systems and software into novel applications. The Internet and its constituent networks are a work in progress and always will be.

Throughout the 1990s, the shared view among network technologists, policy analysts, and regulators promoted Internet convergence: the migration of diverse applications such as the World Wide Web, telephone service, cable TV, and mobile applications to a common communication platform, the deregulated Internet. This consensus shattered because of market concentration fears and the difficulty of creating novel regulatory paradigms for the convergence economy. The current legal status quo, as promulgated in the Open Internet order, maintains deregulation for applications but regulates communication networks with traditional telephone network rules and constructs.

The new status quo is a counterproductive expression of a failure to comprehend the Internet's dynamic character. The technical struggle to develop QoS models for the multiservice Internet has been ongoing for 40 years and will probably never be finished.

QoS models such as DiffServ are vital parts of service provider networks today, and IntServ is an essential component of LTE mobile networks. The refusal of regulators to permit the full implementation of Internet standards is a barrier to innovation.

The FCC's 2015 Open Internet order is a policy blunder of epic proportions, echoing the agency's refusal to allocate spectrum to cellular networks in the 1950s and 1960s. Rather than reaching back to its history as an imperious telephone regulator, the FCC would be wise to adopt a more humble role that is more respectful of the Internet's own history and architecture. An expert analysis of Internet standards cannot support the FCC's rash action.

The Internet is a vital element of the economy and of modern liberal democracy generally. It cannot develop its full potential to enhance social welfare and quality of life if it remains crippled by arbitrary constraints. The integrated model of Internet policy recognizes the revolutionary nature of the Internet and will facilitate Internet convergence. The task of conceptualizing goals and expectations for the Internet is more challenging than simply force-fitting Internet services into legacy models, but it will ultimately yield greater rewards.

We need not fear the new applications made possible by evolving models of Internet service; to the contrary, the greatest risk to the continued development of the Internet economy is stagnation. If the Internet is not free, all of us shall suffer from lost opportunities for innovation because we will have romanticized the past and grown too comfortable with the status quo.

Notes

1. Vaclav Smil, “Moore’s Curse,” IEEE Spectrum 52, no. 4 (April 2015): 26. 2. Ibid. 3. Chris Mack, “The Multiple Lives of Moore’s Law,” IEEE Spectrum 52, no. 4 (April 2015): 30–33. 4. Cade Metz, “IBM’s New Carbon Nanotubes Could Move Chips beyond Silicon,” Wired Magazine, October 10, 2015, www.wired.com/2015/10/ibm-gives-moores-law-new-hope-carbon-nanotube-transistors/. 5. “Researchers First to Create a Single-Molecule Diode,” Phys.org, May 25, 2015, http://phys.org/news/2015-05-single- molecule-diode.html. 6. Sebastian Anthony, “Beyond Silicon: IBM Unveils World’s First 7nm Chip,” Ars Technica UK, July 9, 2015, http:// arstechnica.co.uk/gadgets/2015/07/ibm-unveils-industrys-first-7nm-chip-moving-beyond-silicon/. 7. Martin Blanc, “Taiwan Semiconductor Confirms Release of 7nm Processors by 2017,” Bidness Etc, May 31, 2015, www. bidnessetc.com/44100-taiwan-semiconductor-confirms-release-of-7nm-processors-by-2017/. 8. Robert F. Service, “Breaking the Light Barrier,” Science 348, no. 6242 (June 26, 2015): 1409–10, www.sciencemag.org/ content/348/6242/1409?related-urls=yes&legid=sci;348/6242/1409; and Anne Morris, “ITU Outlines 5G Roadmap towards ‘IMT-2020,’” FierceWireless:Europe, June 22, 2015, www.fiercewireless.com/europe/story/itu-outlines-5g-roadmap-towards-imt- 2020/2015-06-22. 9. This observation has exceptions: some applications, such as search, domain name service, the web, and social networks, are both application and infrastructure because they offer services to other applications through application program interfaces (API). Such “infrastructure applications” are enablers of additional applications: Periscope would not exist without a Twitter to run on, for example. 10. Brett Swanson, Moore’s Law at 50: The Performance and Prospects of the Exponential Economy, American Enterprise Insti- tute, November 2015, www.aei.org/publication/moores-law-at-50-the-performance-and-prospects-of-the-exponential-economy/. 11. Nicolas Mokhoff, “Semi Industry Fab Costs Limit Industry Growth,” EE Times, October 3, 2012, www.eetimes.com/ document.asp?doc_id=1264577. 12. Adam Cohen, The Perfect Store: Inside eBay (Boston: Little, Brown and Co., 2002); and Evan Carmichael, “Business Ideas: 3 Business Lessons from Pierre Omidyar,” www.evancarmichael.com/Business-Coach/4492/Business-Ideas--3-Business-Lessons- From-Pierre-Omidyar.html. 13. Katharine Gammon, “What We’ll Miss about Bill Gates — A Very Long Good-Bye,” Wired Magazine, May 19, 2008, www. wired.com/2008/05/st-billgates/. 14. Nicholas Carlson, “At Last—The Full Story of How Facebook Was Founded,” Business Insider, March 5, 2010, www. businessinsider.com/how-facebook-was-founded-2010-3#we-can-talk-about-that-after-i-get-all-the-basic-functionality-up- tomorrow-night-1. 15. Rick Broida, “Use SpeedTest to Help Diagnose Internet Problems,” PC World, April 23, 2013, www.pcworld.com/article/ 2036299/use-speedtest-to-help-diagnose-internet-problems.html. 16. Trevor Gilbert, “The Problem with Dumb Pipes,” Pando Daily, February 27, 2012, http://pando.com/2012/02/27/the- problem-with-dumb-pipes/. 17. Hal Singer, Three Ways the FCC’s Open Internet Order Will Harm Innovation, Progressive Policy Institute, May 19, 2015, www.progressivepolicy.org/publications/policy-memo/three-ways-the-fccs-open-internet-order-will-harm-innovation/.


18. Richard Bennett, Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate, Infor- mation Technology and Innovation Foundation, September 2009, https://itif.org/publications/2009/09/25/designed-change-end- end-arguments-internet-innovation-and-net-neutrality; and Richard Bennett, Remaking the Internet: Taking Network Architecture to the Next Level, Research Program on Digital Communications, Time Warner Cable, Summer 2011, www.twcresearchprogram. com/pdf/TWC_Bennett_v3.pdf. 19. Jon Postel, “RFC 791: Internet Protocol,” DARPA Internet Program, September 1981, https://tools.ietf.org/html/rfc791; S. Blake et al., “RFC 2475: An Architecture for Differentiated Services,” Internet RFC, December 1998, http://tools.ietf.org/rfc/ rfc2475.txt; and R. Braden, D. Clark, and S. Shenker, “RFC 1633: Integrated Services in the Internet Architecture: An Overview,” Internet RFC, June 1994, http://tools.ietf.org/rfc/rfc1633.txt. 20. Akami Technologies, Akamai’s State of the Internet: Security Report, May 19, 2015, www.stateoftheinternet.com/resources- web-security-2015-q1-internet-security-report.html. 21. S. Bellovin, “Security Requirements for BGP Path Validation,” RFC Editor, August 2014, www.rfc-editor.org/rfc/rfc7353.txt; and P. Mockapetris, “RFC 882: Domain Names: Concepts and Facilities,” Network Working Group, November 1983, http://tools. ietf.org/rfc/rfc882.txt. 22. P. Eardley, “RFC 5559: Pre-Congestion Notification (PCN) Architecture,” Network Working Group, June 2009, www. rfc-editor.org/rfc/rfc5559.txt; Th. Knoll, “BGP Extended Community for QoS Marking,” Inter-Domain Routing Working Group, January 22, 2015, https://tools.ietf.org/html/draft-knoll-idr-qos-attribute-15; and R. Geib and D. Black, “DiffServ Interconnection Classes and Practice,” TSVWG, March 9, 2015, https://tools.ietf.org/html/draft-ietf-tsvwg-diffserv-intercon-01. 23. Syed Ahson and Mohammad Ilyas, IP Multimedia Subsystem (IMS) Handbook (Boca Raton: CRC Press, 2009), www.crcnetbase.com/isbn/9781420064612. 24. Federal Communications Commission, Report and Order on Remand, Declaratory Ruling, and Order in the Matter of Pro- tecting and Promoting the Open Internet, February 26, 2015, https://apps.fcc.gov/edocs_public/attachmatch/FCC-15-24A1.pdf. 25. Broadband Internet Technical Advisory Group Inc., “Differentiated Treatment of Internet Traffic,” October 2015, www.bitag.org/documents/BITAG_-_Differentiated_Treatment_of_Internet_Traffic.pdf. 26. Bob Metcalfe, “Ether Acquisition” (memo, Palo Alto, California, May 22, 1973), http://ethernethistory.typepad.com/papers/ ethernetbobmemo.pdf. 27. Ross McIlroy, “History,” Ethernet, 2003, www.dcs.gla.ac.uk/~ross/Ethernet/history.htm. 28. James Pelkey, “4.10 ALOHANET and Norm Abramson: 1966–1972,” in Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968–1988, www.historyofcomputercommunications.info/Book/4/4.10- ALOHANETNormAbramson-66-72.html. 29. Butler W. Lampson. A History of Personal Workstations, ed. Adele Goldberg (Addison-Wesley, 1988), 291–344. 30. Digital Equipment Corporation, Intel Corporation, and Xerox Corporation, “The Ethernet: A ; Data Link Layer and Physical Layer Specifcations,” 1980, http://research.microsoft.com/en-us/um/people/gbell/Ethernet_Blue_ Book_1980.pdf. 31. I was the vice chair of the StarLAN task force in 1984–85. 32. Urs von Burg, The Triumph of Ethernet: Technological Communities and the Battle for the LAN Standard (Stanford, Califor- nia: Stanford University Press, 2001). 33. 
Institute of Electrical and Electronics Engineers et al., 802.1D: 2004: IEEE Standard for Local and Metropolitan Area Net- works Media Access Control (MAC) Bridges, Institute of Electrical and Electronics Engineers, 2004, http://ieeexplore.ieee.org/ servlet/opac?punumber=9155. 34. K. Nichols et al., “RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers,” Internet RFC, December 1998, http://tools.ietf.org/rfc/rfc2474.txt. 35. Cable Television Laboratories Inc., “Five Modem Makers’ Systems Considered for Cable Data Specifications,” September 23, 1996, https://web.archive.org/web/20021021205140/http://www.cablelabs.com/news/pr/1996/1996_09_23.html. 36. Cable Television Laboratories Inc., “Cable RF Specification for High-Speed Data Finalized,” March 16, 1997, https://web.


archive.org/web/20021021193617/http://www.cablelabs.com/news/pr/1997/1997_03_16.html. 37. Braden, Clark, and Shenker, “RFC 1633: Integrated Services in the Internet Architecture: An Overview.” 38. Cable Television Laboratories Inc., “Data-Over-Cable Service Interface Specifications: Converged Cable Access Platform: Converged Cable Access Platform Architecture Technical Report,” June 14, 2011, http://cablelabs.com/specifications/CM-TR- CCAP-V02-110614.pdf. 39. “Isochronous” roughly means “at the same time.” The name addresses applications with regular and predictable traffic loads while running that may commence or terminate at any time; telephone calls have this property. 40. Bennett, Remaking the Internet. 41. Michael Cookish, “Video over DOCSIS,” Communications Technology, November 1, 2008, www.cable360.net/ct/video/ Video-over-DOCSIS_32357.html. 42. AT&T, “Testing the First Public Cell Phone Network,” AT&T Archives, June 13, 2011, http://techchannel.att.com/play- video.cfm/2011/6/13/AT&T-Archives-AMPS:-coming-of-age. 43. International Telecommunication Union, “What Really Is a Third Generation (3G) Mobile Technology,” www.itu.int/ ITU-D/tech/FORMER_PAGE_IMT2000/DocumentsIMT2000/What_really_3G.pdf. 44. Ibid. 45. Braden, Clark, and Shenker, “RFC 1633: Integrated Services in the Internet Architecture: An Overview”; Blake et al., “RFC 2475: An Architecture for Differentiated Services”; and E. Rosen, A. Viswanathan, and R. Callon, “RFC 3031: Multiprotocol Label Switching Architecture,” Internet RFC, January 2001, http://tools.ietf.org/rfc/rfc3031.txt. 46. J. Postel, “RFC 795: Service Mappings,” Internet RFC, September 1981, http://tools.ietf.org/html/rfc795; and Nichols et al., “RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers.” 47. Postel, “RFC 795: Service Mappings.” 48. See, for example, Zuqing Zhu et al., “RF Photonics Signal Processing in Subcarrier Multiplexed Optical-Label Switching Communication Systems,” Journal of Lightwave Technology 21, no. 12 (December 2003): 3155–66, http://ieeexplore.ieee.org/ xpl/login.jsp?tp=&arnumber=1263734&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all. jsp%3Farnumber%3D1263734. 49. This section repeated from Bennett, Remaking the Internet. 50. Federal Communications Commission, “2015 Open Internet Order.” 51. Ibid. 52. Cisco Systems, “The Zettabyte Era—Trends and Analysis,” May 2015, www.cisco.com/c/en/us/solutions/collateral/ service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html. 53. Gregory L. Rosston and David Waterman, eds., The Internet and Telecommunications Policy: Selected Papers from the 1995 Telecommunications Policy Research Conference (Mahwah, New Jersey: Lawrence Erlbaum Associates, 1996), 1. 54. Ibid., 3–4. 55. David D. Clark, “Combining Sender and Receiver Payments in the Internet,” in Gregory L. Rosston and David Waterman, eds., Interconnection and the Internet: Selected Papers from the 1996 Telecommunications Policy Research Conference (Mahwah, New Jersey: L. Erlbaum Associates, Publishers, 1997). 56. L. Jean Camp and Donna M. Riley, “Bedrooms, Barrooms, and Boardrooms on the Internet,” in Gregory L. Rosston and David Waterman, eds., Interconnection and the Internet: Selected Papers from the 1996 Telecommunications Policy Research Confer- ence (Mahwah, New Jersey: L. Erlbaum Associates, Publishers, 1997). 57. Robert M. Frieden, “Can and Should the FCC Regulate Internet Telephony?,” in Gregory L. 
Rosston and David Waterman, eds., Interconnection and the Internet: Selected Papers from the 1996 Telecommunications Policy Research Conference (Mahwah, New Jersey: L. Erlbaum Associates, Publishers, 1997). 58. Lorrie Faith Cranor and Steven S. Wildman, eds., Rethinking Rights and Regulations: Institutional Responses to New Com- munication Technologies (Cambridge, Massachusetts: MIT Press, 2003). 59. Tim Wu, “Network Neutrality, Broadband Discrimination,” Journal of Telecommunications and High Technology Law 2


(2003): 141, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=388863. 60. Edward J. Valauskas, “Darwin’s Orchids, Moths, and Evolutionary Serendipity,” Chicago Botanic Garden, February 2014, www.chicagobotanic.org/library/stories/darwins_orchids. 61. Blake et al., “RFC 2475 - An Architecture for Differentiated Services.” 62. Susan P. Crawford, “The Looming Cable Monopoly,” Yale Law & Policy Review, Inter Alia, June 1, 2010, http://ylpr.yale. edu/inter_alia/looming-cable-monopoly. 63. Leichtman Research Group, “About 385,000 Add Broadband in the Second Quarter of 2014,” August 15, 2014, www. leichtmanresearch.com/press/081514release.html. 64. Federal Communications Commission, “2015 Broadband Progress Report and Notice of Inquiry on Immediate Action to Accelerate Deployment,” February 4, 2015, www.fcc.gov/encyclopedia/archive-released-broadband-progress-notices-inquiry. 65. IDC, “Smartphone OS Market Share 2015, 2014, 2013, and 2012,” www.idc.com/prodserv/smartphone-os-market-share. jsp. 66. StatCounter, “Top 7 Desktop OSs from Oct 2014 to Oct 2015,” StatCounter Global Stats, October 2015, http://gs. statcounter.com/#desktop-os-ww-monthly-201410-201510. 67. Cade Metz, “Microsoft: An Open Source Windows Is ‘Definitely Possible,’”Wired , April 3, 2015, www.wired.com/2015/04/ microsoft-open-source-windows-definitely-possible/. 68. Emil Protalinski, “Google Reveals Fiber Plans for Provo, $30 Construction Fee,” Next Web, August 15, 2013, http:// thenextweb.com/google/2013/08/15/google-reveals-fiber-plans-for-provo-starting-with-free-5-mbps-internet-for-all-after-30- construction-fee/; and Cameron Summerson, “T-Mobile Announces Uncarrier for Tablets, 200MB of Free Monthly LTE Data for All Compatible Devices,” Android Police, October 23, 2013, www.androidpolice.com/2013/10/23/t-mobile-announces-uncarrier- for-tablets-200mb-of-free-monthly-lte-data-for-all-compatible-devices/. 69. Comcast, “Comcast Begins Rollout of Residential 2 Gig Service in Atlanta Metro Area,” April 2, 2015, http://corporate. comcast.com/news-information/news-feed/comcast-begins-rollout-of-residential-2-gig-service-in-atlanta-metro-area. 70. Aaron Smith, “U.S. Smartphone Use in 2015,” Pew Research Center, April 1, 2015, www.pewinternet.org/2015/04/01/ us-smartphone-use-in-2015/. 71. Sprint, “Cell Phones, Mobile Phones & Wireless Calling Plans from Sprint,” accessed July 15, 2015, www.sprint.com/ ?ECID=vanity:landings/unlimitedfamily/index.html#!/. 72. Richard Bennett, G7 Broadband Dynamics: How Policy Affects Broadband Quality in Powerhouse Nations, American Enter- prise Institute, November 2014, www.aei.org/wp-content/uploads/2014/11/G7-Broadband-Dynamics-Final.pdf. 73. Bret Swanson, Internet Traffic as a Basic Measure of Broadband Health, American Enterprise Institute, November 20, 2014, www.aei.org/publication/internet-traffic-basic-measure-broadband-health/; Christopher S. Yoo,U.S. vs. European Broadband Deployment: What Do the Data Say?, Center for Technology, Innovation and Competition, June 2014; Roger Entner, Spectrum Fuels Speed and Prosperity, Recon Analytics LLC, September 25, 2014, http://reconanalytics.com/wp-content/uploads/2014/09/ Spectrum-Fuels-Speed-and-Prosperity.pdf; and Roslyn Layton and Michael Horney, Innovation, Investment, and Competition in Broadband and the Impact on America’s Digital Economy, Mercatus Center, August 12, 2014, http://mercatus.org/publication/ innovation-investment-and-competition-broadband-and-impact-america-s-digital-economy. 74. 
Danielle Kehl et al., The Cost of Connectivity 2014, New America Foundation, October 30, 2014, www.newamerica.org/oti/ the-cost-of-connectivity-2014/. 75. Ookla, “Global Download Speed,” July 2015, www.netindex.com/download/. 76. Bennett, G7 Broadband Dynamics, 7. 77. Ben Scott, Stefan Heumann, and Jan-Peter Kleinhans, Landmark EU and US Net Neutrality Decisions: How Might Pending Decisions Impact Internet Fragmentation?, Global Commission on Internet Governance, July 2015, www.cigionline.org/ publications/landmark-eu-and-us-net-neutrality-decisions-how-might-pending-decisions-impact-internet. 78. Broadband Internet Technical Advisory Group, Differentiated Treatment of Internet Traffic, October 2015, www.bitag.org/ documents/BITAG_-_Differentiated_Treatment_of_Internet_Traffic.pdf.


79. Lawrence Lessig, Code: And Other Laws of Cyberspace (New York: Basic Books, 1999). 80. Wu, “Network Neutrality, Broadband Discrimination.” 81. Ibid. 82. Alex McKenzie, “INWG and the Conception of the Internet: An Eyewitness Account,” IEEE Annals of the History of Com- puting 33, no. 1 (January 2011): 66–71; Louis Pouzin, The Cyclades Computer Network: Towards Layered Network Architectures (New York: North-Holland, 1982); and James Pelkey, Entrepreneurial Capitalism & Innovation: A History of Computer Commu- nications 1968–1988, 2007, www.historyofcomputercommunications.info/index.html. 83. Postel, “RFC 795: Service Mappings”; and Postel, “RFC 791: Internet Protocol.” The exemplary mappings are Network Control, Internetwork Control, CRITIC/ECP, Flash Override, Flash, Immediate, Priority, and Routine. 84. Postel, “RFC 795: Service Mappings.” 85. Blake et al., “RFC 2475: An Architecture for Differentiated Services.” 86. H. Schulzrinne et al., “RFC 3550: RTP: A Transport Protocol for Real-Time Applications,” Network Working Group, July 2003, https://tools.ietf.org/html/rfc3550#section-6.4.

About the Author

Richard Bennett is a visiting fellow in the American Enterprise Institute (AEI) Center for Internet, Communications, and Technology Policy, where he studies and specializes in public policies affecting networks, network regulation, and innovation. He has developed networking products and standards for 35 years.
