
Craig McTaggart

A thesis submitted in conformity with the requirements for the degree of Master of Laws Graduate Department of the Faculty of Law University of Toronto

© Copyright by Craig John McTaggart 1999

National Library of Canada, Acquisitions and Bibliographic Services, 395 Wellington Street, Ottawa ON K1A 0N4, Canada

The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats.

The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.

Governance of the Internet's Infrastructure: Network Policy for the Global Public Network

Craig McTaggart, B.A. (Hons.) (Queen's), LL.B. (Western Ontario), of the Ontario Bar

Master of Laws, Faculty of Law, University of Toronto, 1999

Abstract: The Internet's unprecedented characteristics are products of the unique compound of governance forces to which its underlying infrastructure is subject. This infrastructure presents issues significantly different to those relating to the content passing over it. In the evolution of the Internet into a global public communications network, its physical infrastructure and content layers have changed dramatically, while its technical infrastructure has not. Three scenarios for the continued evolution of the Internet suggest that the governance forces which support this infrastructure may no longer be capable of maintaining the Internet as an open public network. The Internet should continue to be governed in the public interest according to the principles of universal interoperability and interconnection, non-proprietary standards, protocols and networks, and unity. Trends in the Internet and experience in telecommunications suggest that future governance structures will need to be globally authoritative in order to effectively protect and advance these fundamental values.

I would first like to thank Professor Hudson Janisch for his enthusiastic and dedicated supervision over the past year. His constant support and encouragement made the transition from practice to academics most enjoyable. Professor Arnold Weinrib graciously served as second reader.

I also wish to acknowledge the generous financial support of Rogers Communications Inc., as recipient of the E.S. Rogers Graduate Scholarship in Communications Law at the University of Toronto, and The University of Western Ontario Faculty of Law and the Ontario Law Foundation, as recipient of their Graduate Fellowship.

Graduate Secretary Julia Hall made the whole process run smoothly from start to finish and deserves gratitude for her caring shepherding of the LL.M. Class of '99.

Finally, for the abiding love and support of Kim Rogers, Ph.D., I am grateful and indebted, in more ways than one. Thank you for understanding (and tolerating) the demands of the practice and study of law.

TABLE OF CONTENTS

II. GOVERNANCE
III. GOVERNANCE OF THE INTERNET'S INFRASTRUCTURE
  A. IDENTIFIERS AND ROUTING
    1. IP Numbers
    2. Domain Names
    3. Management of the Name and Number Spaces
    4. and the Internet Assigned Numbers Authority
    5. The Internet Corporation for Assigned Names and Numbers
  B. PROTOCOLS AND STANDARDS
    1. TCP/IP
    2. The Internet Engineering Task Force
  C. PEERING AND INTERCONNECTION
    1. Internet "Backbones"
    2. Peering and Transiting
  D. TRADITIONAL GOVERNANCE FORCES
IV. THE GLOBAL PUBLIC NETWORK
  A. THE INTERNET
  B. IS IT GLOBAL?
  C. IS IT A NETWORK?
  D. IS IT PUBLIC?
    1. Public and Private Networks
    2. The Society for Worldwide Interbank Financial Telecommunication
    3. The Public Switched Telephone Network
    4. The Internet
  E. THE GLOBAL PUBLIC NETWORK

V. PUBLIC GOVERNANCE
  A. THE EVOLUTION OF THE INTERNET'S GOVERNANCE STRUCTURES
  B. PUBLIC VALUES
  C. DOES THE INTERNET NEED 'GOVERNANCE' AT ALL?
  D. CONTENT vs. INFRASTRUCTURE
  E. IMPLICIT AND EXPLICIT NORMS
VI. THREE SCENARIOS FOR THE INTERNET'S FUTURE
  GENERAL TRENDS
  OPEN SCENARIO
    Market Evolution
    Identifiers and Routing
    Protocols and Standards
    Peering and Interconnection
  FRAGMENTED SCENARIO
    Market Evolution
    Identifiers and Routing
    Protocols and Standards
    Peering and Interconnection
  CLOSED SCENARIO
    Market Evolution
    Identifiers and Routing
    3. Protocols and Standards
    4. Peering and Interconnection
VII. WHAT PRINCIPLES SHOULD INFORM THE GOVERNANCE OF THE INTERNET'S INFRASTRUCTURE?
  "OPS"
    Universally Interoperable
    Universally Interconnected
    Non-proprietary (Standards and Protocols)
    Non-proprietary (Networks)
    Unified
  THE PUBLIC INTEREST
  IMPLICATIONS OF THE SCENARIOS FOR INTERNET GOVERNANCE

  CLOSED
  GOVERNANCE ALTERNATIVES
    Market Forces and Social Goals
    The Role of Competition Law
    Global Network Policy
    The Need For Global Authority
    Coherent Policy and Control
IX. CONCLUSION

LIST OF ACRONYMS AND GLOSSARY
BIBLIOGRAPHY

Whether the [electric] light is being used for brain surgery or night baseball is a matter of indifference. It could be argued that these activities are in some way the "content" of the electric light, since they could not exist without the electric light. This fact merely underlines the point that "the medium is the message" because it is the medium that shapes and controls the scale and form of human association and action. The content or uses of such media are as diverse as they are ineffectual in shaping the form of human association. Indeed, it is only too typical that the "content" of any medium blinds us to the character of the medium.1

This thesis considers the Internet as a communications medium. The Internet carries many different types of content, but it is its underlying infrastructure which "shapes and controls the scale and form of human association and action" possible on the Internet. Yet just as McLuhan warned over thirty years ago, the character of the content on the Internet can blind us to the character of the medium itself. Unfortunately, most of the popular and legal commentary about the Internet to date focuses on content and, to a greater or lesser degree, assumes the medium. This tendency has prevented the development of a complete understanding of the Internet and the legal issues which it presents.

Underneath the diverse and apparently uncontrollable layers of Internet content and applications is one technical infrastructure. While the physical infrastructure and content and application layers have matured and become almost completely commercialized, the technical infrastructure which binds them together has remained virtually unchanged since the Internet's non-commercial era. These systems have simply been "scaled up" to accommodate the tremendous expansion of the Internet in the past ten years. As the content and application layers expand, the significance of the underlying infrastructure increases. Yet that infrastructure remains remarkably obscure among the public and even Internet users. Very little is known about how the Internet works or who makes it work. There is a popular misconception that the Internet is just "there," and that it is not run by anyone. As we shall see, this is both right and wrong on different levels. While no single entity runs the Internet, many different individuals and institutions do. While many accounts stop there, they do not tell the whole story. What is missing is an understanding of the governance forces and structures which bind those autonomous actors together.

1 M. McLuhan, Understanding Media: The Extensions of Man (Toronto: Signet, 1964) 24.

Just as the Internet does not look like any other communications medium that we have seen before, it is governed by a complex compound of governance forces which is equally unfamiliar. Unfortunately, these forces are poorly understood and often dismissed as inconsequential or subservient to the technical characteristics of the network itself, or worse, fused with them. This study aims to provide a more complete understanding of the compound of governance forces which sustain the Internet. It is often forgotten that computers only do what humans tell them to do. The Internet functions the way it does for certain identifiable reasons, not by chance. Its governance structures are neither incidental nor neutral. Not only is the medium the message (or the interesting story) of the Internet, but its governance is the message of the medium. These forces determine what the Internet looks like, what can be done with it, and by whom.

Given the tremendous importance which the Internet already appears to have for global communications and commerce, it is no longer sufficient to simply assume the Internet's infrastructure and ignore its governance. We must understand who makes the decisions about how the Internet works, and appreciate the values which have heretofore guided those decisions. We must do so because the Internet is changing. This academic network, designed for non-commercial communication, is currently being transformed into a platform for global commerce.

Before it became a marketing channel, a virtual casino/stock market, and an electronic post office for the industrialized world, the Internet was used for much more constrained purposes by researchers at universities and scientific research facilities. This relatively narrow community's computers were linked by dedicated research networks, initially the ARPANET and later NSFNET,2 both United States government funded and directed research networks. While the very earliest users of ARPANET were primarily involved in defence-related scientific research, the NSFNET community was much more broadly representative of the (primarily American) scientific, and later more general, academic communities.

2 ARPA stands for the United States Department of Defense Advanced Research Projects Agency. The name of this agency was changed to DARPA around 1970, but the network continued to be known as ARPANET. The "NSF" in NSFNET refers to the United States National Science Foundation.

The goals of NSFNET were described in 1986 by some of the primary individuals behind its development, as follows:

Our vision of this network is of a vast network of networks interconnecting the scientist's local advanced graphics workstation environment to other local and national resources. Through the network, a researcher will be able to ... collect and analyze data without concern for where the tools, programs or models reside. [...] Our vision is of a network integrating the computer resources available and presenting these resources to the user as a single interactive system.3

These features are precisely what have made the Internet so valuable and cherished by users of all kinds today. The Internet is indeed a vast network of networks which functions (to the user's eye) as a single interactive system. Linking computers together seems like the natural order of things today, but it was not always thus.

In a remarkably short time, networking has become what computing is all about, and the Internet has become the world's computer. Sun Microsystems, one of the first manufacturers of "mini" and "micro" computers, uses the simple yet powerful motto, "the network is the computer." The idea that the value in computers comes from linking them together, not from the computers themselves, further bears out the maxim that the medium is the message. Computers themselves are not particularly interesting or valuable on their own any more. Rather, it is what they can do together that creates exponential value and attracts our interest. Further, while there are many different networks, of all shapes and sizes, all over the world, the Internet is thought of as the one network to which they all connect.

The Internet is now frequently proclaimed to be the most significant technological advance of the late twentieth century, surpassing even electricity and the steam engine. It is presented as a single global platform, the vehicle for massive creation of wealth for all. Wired magazine is one of the most breathless promoters of this view. As Kevin Kelly exulted in the September 1999 issue:

How many times in the history of mankind have we wired the planet to create a single marketplace? How often have entirely new channels of commerce been created by digital technology? When has money itself been transformed into thousands of instruments of investment? It may be that at this particular moment in our history, the convergence of a demographic peak, a new global marketplace, vast technological opportunities, and financial revolution will unleash two uninterrupted decades of growth.4 (emphasis in original)

3 D.M. Jennings, L.H. Landweber, I.H. Fuchs, D.J. Farber & W.R. Adrion, "Computer Networking for Scientists," Science 231 (22 February 1986) 943, quoted in D. Jennings, "Foreword" in N. Randall, The Soul of the Internet: Net Gods, Netizens and The Wiring of The World (London: International Thomson Computer Press, 1997) at ix.

While the power of connecting the entire world is undeniable, the Internet has one significant limitation. It works extremely well as a communications medium, but not so well as a commercial medium, unless, of course, all that is asked of it is unrestricted communication. The liberal flow of information is generally a positive feature in markets, but the Internet just might offer too much information. It reduces information barriers for buyers, but those barriers are often what maintain the profits of sellers. Indeed, it has apparently proven difficult for businesses to make money on the Internet. A June 1999 newspaper advertisement by Hewlett Packard expresses the frustration of the business community with the Internet's wide-open nature:

The Internet, which has transformed business, is now undergoing its own transformation. It's not just about websites. It's not just about selling products. Now it's about selling actions. These actions, or e-services, are being created by companies just like yours. And they're not being given away. In other words, there is money to be made. Beginning now.5

It should not be surprising that there has been little money to be made on the Internet until now. The Internet was designed on principles diametrically opposite to those which suit profit-oriented networks, such as cable television. The statement that the Internet "is now undergoing its own transformation" confuses content (in this case transactions) with the medium itself, but nonetheless succinctly demonstrates the urgent need to understand what makes the Internet what it is. As the commercialization of the Internet continues, its infrastructure is being transformed, yet its governance structures are not maturing at nearly the same pace. The significance of these structures is often underestimated, with the result that the potential impact of their loss is not fully appreciated.

4 K. Kelly, "The Roaring Zeros" Wired (September 1999) 151.

5 Financial Post (15 June 1999) C8-C9.

The Internet's governance structures have produced a fundamentally open and accessible communications environment. However, trends in the evolution of the Internet suggest that its virtuous technical characteristics are neither immutable nor determinative of the mode of its governance. Rather, these characteristics are being compromised on many fronts, in ways which the traditional governance bodies are powerless to stop. The dominant conception of the Internet as a private phenomenon prevents many Internet participants from appreciating the fundamentally cooperative nature of the Internet, and the collective stake which we all have in the way it develops.

We need to conceive of the Internet as an essentially public network and begin to develop consistent network policy to guide its evolution. The shared technical infrastructure of the Internet makes it a global public network. Its traditional governance structures have constituted a unique form of public governance, and served the public interest in the development and operation of the Internet. If the Internet evolves in certain directions, its governance structures will also have to evolve to continue to be able to protect the public interest. Certain trends can already be discerned. While the technical infrastructure employed by commercial Internet participants has thus far been the same one used since the Internet's earliest days, tremendous investment is currently being put into efforts to change that technical infrastructure to make it more suitable for profit-oriented activity. These trends run counter to the basic design philosophy of the Internet, and signal dramatic changes in the contours of its governance. If we do not have a principled basis on which to insist that the values which have guided the Internet to date continue to be expressed in its underlying infrastructure, there may no longer be an Internet as we presently know it.

In the pages which follow, some of the traditional forces and structures of Internet governance will be examined to gain an appreciation for the values which have underlain them. We will then consider three different scenarios for the future development of the Internet, based on presently-observable trends in the Internet industry. The basic principles of network policy which existing governance forces and structures have used to guide the network's evolution will be identified and catalogued. It will be asserted that for the public interest to continue to be served in the Internet environment, these basic principles must continue to guide decisions about the underlying infrastructure, whoever might make these decisions. These principles will then be applied to the hypothetical scenarios in an attempt to understand the range of issues which may arise, and the subtle ways in which they could engage the public interest, which itself is quickly becoming a matter of global concern.

Instead of treating the Internet as an entirely unique phenomenon, this study aims to place it in the context of other communications media, not for the purpose of trying to apply their governance structures to it, but to appreciate the basic issues which must be addressed in any public communications network environment. Indeed, the example of the new generation of local telecommunications networks which prevails in Canada and the United States (and soon much of the world, thanks to multilateral trade agreements) is instructive. The new values of telecommunications governance are very different from those which many people (and academic writers) still associate with telecommunications regulation. Closed networks are everywhere being replaced by open networks. Open networks are generally governed by market discipline, yet also require cooperation and oversight at the levels of technical infrastructure and the interconnection of physical infrastructure. The dominant themes of new public network governance are very similar to those which have implicitly guided the Internet to date.

Ironically, as the telecommunications network's governance becomes more like that of the Internet, its physical infrastructure is being converted from its traditional circuit-switching model to the Internet's packet-switching model. Led by voice-over-the-Internet, there is a tremendous effort underway to make the Internet capable of carrying any kind of communications service. However, its current technical characteristics are not suited to provision of the kind of reliable, controlled service which industrial applications require. Internet industry research and development is now aimed at making the Internet more like a telecommunications network, that is, more controllable.

As the trend towards introducing technical control continues, the degree to which the Internet is controlled, and not uncontrollable, will become more and more apparent. The issue of what principles guide the exercise of this control in its many forms will similarly rise in importance. As the Internet and existing telecommunications systems merge, the complex access-related issues faced in the telecommunications context will become more and more relevant in the Internet environment. There is no reason to believe that the Internet will be any easier to maintain as a public network than the telephone network. Some combination of the principles which have always underlain the Internet's governance and those of the new public telecommunications network will likely serve the future Internet best.

Closely related to the need for a recognition of the Internet as a public network is the need to more explicitly integrate public authority into its governance. In fact, public authority may be the only viable substitute for the unique public governance which has maintained the Internet in the public interest to date. The global nature of the Internet's infrastructure suggests the need for effective governance on a global scale. Just as governance does not always equal government, this global governance need not take the form of traditional inter-governmental bodies. However, the support of territorial sovereigns will be necessary to ground any kind of global governance effort. Indeed, to ensure that whatever governance bodies develop are capable of effectively asserting and advancing the public interest in the Internet environment, the involvement of public authorities may be essential.

This thesis by necessity includes a significant amount of description of the technical and governance features of the Internet. While the profile of the Internet continues to rise in the public eye, what lies "behind" it remains obscure. For this reason, a brief explanation of how the Internet works at its most basic level is provided. This area, like the computer field generally, is very jargon-intensive. Internet matters seem to be dominated by acronyms, and this tendency makes the subject even less accessible for the uninitiated. For this reason, a list of acronyms and glossary is provided at the end of this volume. The reader may find frequent reference to these aids necessary, particularly given the number of commonly-used acronyms which begin with the letter "I". There are, of course, many other important aspects of the Internet's infrastructure and governance which cannot be covered in this study. While reference to some of these other matters is made in the footnotes, the reader should be aware that only selected topics and bodies are described herein. In many cases, suggestions for further reading are not provided because there simply has not yet been any non-technical research undertaken.

Not only is the Internet itself relatively young, but academic research relating to it, particularly in the legal field, is in the very early stages of development. To the extent that the Internet will continue to present fascinating research opportunities, including entire areas which cannot yet even be predicted, this thesis starts at the "bottom" and seeks to understand the Internet and its governance at their most basic level. We will therefore begin with a definition of governance and then proceed to describe the ways in which the Internet is currently governed.

In this thesis, "governance" is used to describe the people, institutions, rules, and principles which made the Internet what it is and influence its evolution. McLuhan's maxim, "the medium is the message," impels us to ask who controls the medium, what "shapes and controls its scale and form." Rather than a need to find such control, this exploration of that question is motivated by a desire to understand the unique form of infrastructure management found in the Internet context, an environment which otherwise gives the illusion of being uncontrolled and uncontrollable.

"Governance" has several meanings. The Oxford English Dictionary6 definition of the noun "governance" is, in part, "controlling, directing, or regulating influence; control, sway, mastery." The transitive verb "to govern" is defined (in part) thus:

To rule with authority, especially with the authority of a sovereign; to direct and control the actions and affairs of (a people, a state or its members), whether despotically or constitutionally; to rule or regulate the affairs of (a body of men, corporation)...

Government has come to be associated with the state, but the Greek root kubernao, to steer or to act as a pilot or helmsman, demonstrates both governance and government's origin in the generic idea of guiding. Governance has particular meanings in law, many of which have less coercive and more communitarian nuances.

The term governance is perhaps most familiar to lawyers in the context of corporate governance. Corporation statutes in Canadian and American jurisdictions define the nature of a corporation, and the respective minimum rights and obligations of the people associated with it, and, importantly, of the corporation itself. Perhaps the second and third-most basic elements of Ontario corporate law (beyond the fundamental decree that corporations are persons) are that a corporation is managed by its directors in the best interests of the corporation itself. Corporate governance is therefore not an example of self-governance because the corporation is distinct from its shareholders, its directors, and its officers and employees. A shareholder group might govern itself by way of a shareholder agreement, however, and an employee might belong to a trade union, which would have its own form of governance.

6 The Oxford English Dictionary (2d), Vol. VI (Oxford: Clarendon Press, 1989).

Bodies like corporations and trade unions have codifications of governance in the form of constitutions and constating documents. Corporation statutes are very much a part of the constitution of corporations, despite corporations being generally considered purely private phenomena. Universities present something of a contrast. While Canadian universities are creatures of the state, their internal governance presupposes significant independence from it. Within universities, faculties have their own governance structures and codes. Other institutions, such as churches and hospitals, and associations, such as political parties and professional groups, all have their own particular internal governance structures and codes. Institutions and people within them have particular rights and obligations, which are in turn guided by particular rules and principles. The touchstone of governance is some degree of authoritative influence. Governance is the way a community manages relationships among its members.

Governance is also used in other contexts. International organizations are said to constitute international governance, bodies whose members are states (and, in some cases, what are popularly referred to as "international corporations").7 The United Nations and its agencies are examples. International trade regimes like the World Trade Organization (WTO) are also described as a form of international governance. Treaties govern the ocean bed and the space surrounding Earth. There is no government of space or the ocean bed, but there is governance.

Another use of the word is in the phrase "market governance." Economists refer to markets as a form of governance, and in the context of deregulation, as a substitute for governance in the form of state regulation.8 The discipline of the market leads actors to behave in certain ways based on information relating to the market. While the actions of individual participants usually have little direct influence on a market, the collective actions of many autonomous participants can greatly "influence" a market. Common practices, or norms, can take on the character of governance, if they are lasting and widely adhered to. Norms can act as conditions of entry to markets and communities, and can define the limits and experience of markets and communities. We will return to the role of norms in the governance of the Internet, but for now it is important to note that governance need not proceed from one source, but rather can be effected by the uncoordinated yet common actions of many autonomous members of a community.

7 See, for example, R. Mansell, "Network Governance: Designing New Regimes," in R. Mansell & R. Silverstone, eds., Communication By Design: The Politics of Information and Communication Technologies (Oxford: Oxford University Press, 1996) 187 and M. Zacher, Governing Global Networks (Cambridge: Cambridge University Press, 1996).

Yet another use of governance is in the context of "self-governance," which itself takes on many forms. In Canada and the United States there are long traditions of self-governance of professions, such as engineering and law. Investment dealers are similarly bound by a code of practice which has the force of law. These examples of self-regulation, which all have roots in statute and are subject to continuing public oversight, have been aptly characterized as "supervised self-regulation."9 Other forms are industry codes of conduct and company-specific codes, neither of which have effect outside of their particular communities. As we shall see, self-governance and self-regulation are phrases used very commonly in the Internet policy arena, and each use can be thought of as occupying a place along a continuum of more and less state involvement, particularly with respect to authority and enforceability. We will therefore return to this idea of self-governance as well.

Governance thus encompasses a range of forces, from statute to custom, from "public" to "private." Governance does not mean government, although government is a form of governance. Governance can be a substitute for government, or it can be effected by many governments acting together. Governance can be public, in the sense of state action based on statutes or regulation, and it can be private, such as the rules of a private club, for

8 See, for example, C. Weare, "Organizing Interoperability: Economic Institutions and the Development of Interoperability," in G.W. Brock & G.L. Rosston, eds., The Internet and Telecommunications Policy: Selected Papers from the 1995 Telecommunications Policy Research Conference (Mahwah, NJ: Lawrence Erlbaum Associates, 1996) 141 at 161.

9 M. Priest, "The Privatization of Regulation: Five Models of Self-regulation" (1997-98) 19 Ottawa Law Review 233.

example. Governance can also be a blend of public and private forces, as with corporate governance. While the affairs of a corporation are in the hands of its directors, they are also governed by statutes which impose restrictions and obligations on the corporation with respect to certain matters which have an impact on individuals outside of the narrow community of the corporation itself, such as securities regulation. The operation of a not-for-profit club attracts little interest from the state, while the operation of a municipality is subject to far more detailed rules, and its "directors" are subject to a particular kind of "market governance," the democratic process. This thesis will seek to understand what is "public" and what is "private" about the Internet, and specifically whether the actions of a particular community of autonomous "private" individuals can take on the character of governance, and become a "public" matter within that community and beyond.

Section III describes some of the people, institutions, rules, and principles which have contributed to the governance of the Internet's infrastructure. After gaining a basic understanding of the existing forces of governance which have an impact on the Internet itself, we will proceed to ask: What people, institutions, rules, and principles should contribute to the governance of the Internet's infrastructure? According to what principles and on what authority? How might the Internet's governance structures need to evolve to keep pace with the evolution of the Internet itself?

GOVERNANCE OF THE INTERNET'S INFRASTRUCTURE

The Internet requires universally-adopted systems of identifiers, standards, protocols, and interconnection to achieve the remarkable feat of near-instantaneous global communication. Without these crucial elements of technical and physical infrastructure, there would only be disparate, unconnected networks, or even solitary computers -- the very state of affairs which prevailed before the internetworking revolution. Connecting millions of computers around the world, which "speak" different languages, is an astonishing technical achievement, and now that it is commonplace, is little appreciated. Tremendous research, experimentation and coordinated action were invested by many different entities to make the Internet possible. Far from arising organically or randomly, the systems which shaped the Internet were invented by identifiable people to serve specific goals, and the communities in which they were developed were always subject to a complex compound of governance forces.

A. IDENTIFIERS AND ROUTING

Internet communication is dependent on individually-unique identifiers, and globally-comprehensive lists of those identifiers, to operate. If a computer or network is not on the appropriate lists, it does not exist on the Internet. While any list of Internet addresses can theoretically be used by any particular network, certain lists are preferred because they are comprehensive and ensure end-to-end routing among all networks. For this reason, authoritative master lists are maintained which can be referred to by those network administrators who wish that their networks both reach, and be reachable by, as many other networks as possible. The management of these lists is a tremendously influential role. One of the many paradoxes of the Internet is that it is extremely decentralized, yet requires expert management of common resources to operate.

1. IP Numbers

The Internet's enabling protocol10 requires that each computer connected to a particular network employing it have a unique "address" to identify it and distinguish it from all other computers on the same network. The "IP" part of TCP/IP, Internet Protocol, defines a field of numerical addresses which identify each computer directly connected to the Internet. These numbers are referred to as "IP addresses" or "IP numbers." While the logic of the IP number space is not relevant for present purposes,11 it is important to note that without an IP number, or a connection to a host which can temporarily assign one, it is not possible to participate in the Internet.
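The relationship between the familiar "dotted-quad" notation and the underlying 32-bit number can be illustrated with a short sketch (the example address is invented for illustration and does not appear in the thesis):

```python
# Illustrative only: an IPv4 address is a single 32-bit number, conventionally
# written as four 8-bit fields ("dotted-quad" notation) separated by periods.

def ip_to_int(dotted: str) -> int:
    """Convert a dotted-quad IPv4 address to its underlying 32-bit integer."""
    octets = [int(part) for part in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    value = 0
    for octet in octets:
        value = (value << 8) | octet   # each field occupies 8 of the 32 bits
    return value

def int_to_ip(value: int) -> str:
    """Convert a 32-bit integer back to dotted-quad notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(ip_to_int("128.100.0.1"))                 # -> 2154037249
print(int_to_ip(ip_to_int("128.100.0.1")))      # -> 128.100.0.1
```

The dotted notation is purely a convenience for human administrators; routers operate on the 32-bit value itself.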

The allocation of portions of the IP number space for the Internet has almost always been coordinated by the Internet Assigned Numbers Authority (IANA),12 but the

10 See "TCP/IP," Section III.B.1. below.

11 A detailed explanation of the IP address system can be found in C. Semeria, Understanding IP Addressing: Everything You Ever Wanted to Know. Note: All Internet addresses in this thesis are current as of September 17, 1999.

12 See Section III.A.4. below.

actual assignment of specific blocks of numbers has been delegated to regional Internet registries (RIRs).13 Due to the fundamental importance which IP addresses have for the routing of Internet transmissions, and the fact of their being a theoretically finite resource, their assignment and management must be centrally coordinated for the benefit of all Internet users.

IP number management has been relatively uncontroversial throughout the Internet's history.14 Until recent years, the store of numbers available in the 32-bit IP number space was considered more than the Internet would ever need. The stunning growth in both individual use of the Internet and the number of electronic devices arrayed in IP networks, however, has made the exhaustion of the 32-bit IP number space a much more likely possibility. The next incarnation of the number space is now being implemented with the gradual transition to the IPv6 protocol (from the current IPv4) and its 128-bit address space.15 However, IP numbers are only of particular interest to network administrators. Most users do not navigate their way around the Internet or send e-mail using IP addresses. This is because layered on top of IP addresses are domain names.
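The scale of the difference between the two address spaces can be seen in a simple back-of-the-envelope calculation (an illustration added here, not drawn from the thesis):

```python
# The number of distinct addresses in the 32-bit IPv4 space versus the
# 128-bit IPv6 space: each bit of field width doubles the space.

ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"IPv4: {ipv4_space:,} addresses")   # about 4.3 billion
print(f"IPv6: {ipv6_space:.3e} addresses") # about 3.4 x 10**38
print(f"IPv6 holds 2**96 = {ipv6_space // ipv4_space:,} IPv4-sized spaces")
```

Roughly 4.3 billion addresses seemed inexhaustible to the early designers; the IPv6 space is larger by a factor of 2^96.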

2. Domain Names

When the size of the IP address field was expanded from its original 8 bits, the numbers became awkward to use and hard to remember. Network administrators began to use ASCII characters (roman characters as recognized by computers) to represent

13 The registries are: the American Registry for Internet Numbers (ARIN), Réseaux IP Européens Network Coordination Centre (RIPE NCC), and the Asia Pacific Network Information Centre (APNIC). RIPE NCC and APNIC predate ARIN by several years. Until April 1996, the assignment of IP addresses in Canada was performed by the University of Toronto.

14 The exception is the imposition of route aggregation rules to keep routing tables from becoming unmanageably 'wide,' by means of the Classless Inter-Domain Routing (CIDR) protocol. See: .

15 The initial assignments of numbers in the IPv6 number space to the RIRs were announced by IANA on July 14, 1999. See

The Internet address "law.utoronto.ca" describes three distinct levels of domain names. Reading from right to left (from the general to the specific), ".ca" is the top-level domain ("TLD"), "utoronto" is the second-level domain ("SLD") and "law" is the third-level domain ("3LD"). In addition to .ca, there are approximately 250 other TLDs. The vast majority of these TLDs are, like .ca, "country code" TLDs ("ccTLDs"), which generally correspond to a list of two-letter abbreviations for nations of the world maintained by the International Organization for Standardization (ISO).17 By far the most recognized TLDs, however, are ".com," ".net" and ".org." These are known as the generic TLDs (or "gTLDs")

and were created by IANA along with ".edu," ".gov," ".mil," and ".int" in 1983.18 The ccTLDs and gTLDs, and the hierarchical address tree branches below them, comprise the name space used by the Internet's domain name system ("DNS").
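The right-to-left reading described above can be sketched in a few lines (purely illustrative; the function name is invented):

```python
# Reading a domain name from right to left yields successively more
# specific levels: TLD, then SLD, then 3LD, and so on.

def domain_levels(name: str) -> list[str]:
    """Return the labels of a domain name ordered from most general
    (the top-level domain) to most specific."""
    return list(reversed(name.lower().split(".")))

levels = domain_levels("law.utoronto.ca")
print(levels)                    # -> ['ca', 'utoronto', 'law']
tld, sld, third = levels
print(f"TLD={tld}, SLD={sld}, 3LD={third}")
```

Each label is administered by whoever controls the level above it: the operator of .ca delegates "utoronto," and the University of Toronto in turn delegates "law."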

a) Domain Name Resolution

The Internet's technical infrastructure employs in many areas what is referred to as a distributed, hierarchical architecture. Because the complete list of all TLDs, SLDs, 3LDs and so on would comprise billions of unique addresses, the address tree employs separate lists at each level. Thus, when an Internet user types "law.utoronto.ca" into the

16 See K. Hafner & M. Lyon, Where Wizards Stay Up Late: The Origins of the Internet (New York: Simon & Schuster, 1996) at 252-253.

17 RFC 1591, J. Postel, "Domain Name System Structure and Delegation" (March 1994). RFC stands for "Request for Comments," the standard format for Internet technical documentation. The RFC series was edited by Jon Postel until his death in October, 1998. An updated list of ISO Standard No. 3166 country-code abbreviations is available at: <ftp://rs.arin.net/netinfo/iso3166-countrycodes>.

18 RFC 920, J. Postel & J. Reynolds, "Domain Requirements" (October 1984).

If the address is not in the ISP server's own local list (or "cache") of addresses which its clients have requested before, then it will look at the address piece by piece and ask certain other computers on the Internet which house specific lists whether such a location exists and, if so, where. Perhaps the single most important list of domains is the "root zone," which literally describes what TLDs, or initial set of branches off of the "root" of the DNS "tree," exist. The authoritative list of gTLDs and ccTLDs defines which domains exist in the Internet's current root system, and therefore, of course, which do not. Certain reference implementations of the software required to use the DNS are used by most ISPs to automate the look-up process, the most common being the Berkeley Internet Name Domain (BIND), which is available free from a non-profit corporation called the Internet Software Consortium (ISC).19
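The cache-then-walk-the-tree look-up described above can be modelled as a toy sketch. All of the zone data below is invented for illustration (including the IP number); a real resolver queries remote name servers rather than local dictionaries:

```python
# A toy model of hierarchical DNS resolution: check the local cache first,
# then walk the name from the root zone down, one authoritative list per level.

ROOT_ZONE = {"ca": "ca-registry", "com": "com-registry"}   # TLD -> operator
ZONES = {
    ("ca",): {"utoronto": "utoronto-nameserver"},
    ("ca", "utoronto"): {"law": "128.100.159.1"},           # made-up IP number
}
CACHE: dict[str, str] = {}

def resolve(name: str) -> str:
    """Resolve a domain name to an address, caching the answer."""
    if name in CACHE:                              # answered locally; no look-up
        return CACHE[name]
    labels = list(reversed(name.split(".")))       # general -> specific
    if labels[0] not in ROOT_ZONE:
        raise LookupError(f"TLD .{labels[0]} is not in the root zone")
    answer = None
    for depth in range(1, len(labels)):            # descend one level at a time
        zone = ZONES[tuple(labels[:depth])]
        answer = zone[labels[depth]]
    CACHE[name] = answer
    return answer

print(resolve("law.utoronto.ca"))   # walks root -> .ca -> utoronto -> law
print(resolve("law.utoronto.ca"))   # second call is answered from the cache
```

The point of the sketch is structural: no single list holds every address, yet any name can be found by consulting a short chain of lists, starting from the root zone.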

Once the user's ISP's computers confirm the existence of the domain to which the user wants to send mail or at which a desired Web page is located, the domain name is converted to its IP number equivalent, and the packets containing that message or request are "sent out" to be transported to their destination. Each packet has an address "label" on it (or "header") denoting its final destination. However, each packet does not travel in a continuous path to its destination. Rather, it is shunted along from network to network, taking an unpredictable path from one to the next. This process is known as routing and forwarding. Concern over the size of tables which contain all possible destinations dictates that smaller lists be distributed throughout the network and that they be consulted in a logical, hierarchical order, to establish the complete end-to-end path which a particular packet must travel.20 Complex routing protocols are employed by the software running in the

19 See <http://www.isc.org/view.cgi?/products/BIND/index.phtml>. We will return to the ISC in Section IV.D.4 below.

20 There are several "traceroute" services on the Internet which display the many "hops" which a packet might take between a remote server and one's own. The results are often surprising, and demonstrate the degree to which Internet communications are independent of geographical and political boundaries. See, for example, the list of traceroute services linked at .

routers to maintain the integrity of the routing tables both with respect to each other (to avoid packets being sent in endless loops) and with respect to the network as a whole (to avoid the failure of individual packet transmissions).
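Hop-by-hop forwarding, and the loop problem the routing tables guard against, can be sketched with an invented three-router topology (the router names and table entries are illustrative only):

```python
# A simplified model of hop-by-hop forwarding: each router consults only its
# own table for the next hop, and a hop limit guards against endless loops.

NEXT_HOP = {
    # router         destination -> neighbour to hand the packet to
    "toronto":    {"auckland": "chicago"},
    "chicago":    {"auckland": "losangeles"},
    "losangeles": {"auckland": "auckland"},
}

def forward(src: str, dst: str, hop_limit: int = 16) -> list[str]:
    """Return the path a packet takes; no node knows the whole route."""
    path = [src]
    node = src
    while node != dst:
        if len(path) > hop_limit:
            raise RuntimeError("hop limit exceeded (possible routing loop)")
        node = NEXT_HOP[node][dst]   # each hop is a purely local decision
        path.append(node)
    return path

print(forward("toronto", "auckland"))
# -> ['toronto', 'chicago', 'losangeles', 'auckland']
```

Toronto never learns the full path to Auckland; it only knows to hand the packet to Chicago, which makes its own local decision, and so on until the destination is reached.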

Since packets are bounced across the network from one router to another until their final destinations are reached, the cooperation of many different, unrelated networks is involved in the end-to-end routing and forwarding process. This is one of the most remarkable features of the Internet. The practice of networks carrying packets from unrelated networks, bound for still other networks, flies in the face of traditional network industries. In the transmission of natural gas or electricity, for instance, the loaning of "packets" onto networks is strictly controlled and compensated for. While individual electrons or gas particles are not itemized and tracked, the quantity and location of inputs and outputs are very closely monitored. The reason, of course, is to facilitate accurate billing for both the value of goods delivered and the delivery service itself. Contracts and pricing schedules govern every stage of the transmission process, at every level of distribution. Further, firms in these industries tend to be closely regulated, both with respect to pricing and terms, and also physical interconnection, for safety and competitive reasons. This state regulation constitutes one of the layers of the governance of these networks.

Telephone networks, of course, provide another example of very formalized routing and forwarding arrangements. Contracts and public regulation govern the transmission of telephone calls at almost every level, the exception, of course, being private branch exchanges within large institutions and corporations. Complex financial arrangements between carriers govern the interconnection of networks and the exchange of traffic among them. Calls are routed logically (compared to the seemingly random routing of IP packets) and billed according to length and "distance." The current Internet architecture, however, is defined by the seemingly chaotic practice of everybody routing and forwarding everybody else's packets, without any quality control or performance promises at all.

While there is no formal arrangement between networks (at the routing level21) to perform this "public service," what binds them together is the common use of certain software protocols and reference to certain authoritative lists. As noted above, no

21 As there is at the physical peering and interconnection level. See Section III.C below.

individual network knows where all other networks are. It only knows where to start getting that information. This is a good example of the "distributed" nature of the Internet. No one entity is in a position to control the transmission of any particular packet across the Internet. Within private local area networks (LANs), the network administrator is in complete control of intra-LAN transmissions, from the IP addresses of all connected resources to the topology of the machines themselves. On the Internet, however, a particular LAN is just one of many, each of which is similarly locally controlled but incapable of doing anything other than sending packets to another network for it to (hopefully) send on to the next. The routing practices of the Internet are fundamentally cooperative phenomena. Instead of detailed hierarchical and bilateral legal and financial arrangements such as those found in other network industries, the traditional routing arrangements of the Internet are anonymous and voluntary. The reason is clear when one considers the origins of the Internet.

The ARPANET was funded and directed by the United States Defence Department as both a research project in itself and a way of reducing program costs by allowing researchers at funded facilities to share computing resources located at other funded facilities. There was absolutely nothing commercial about the ARPANET, nor its primary successor, the NSFNET. There was no sense that routing should be closely monitored and billed for, not just because it was very difficult to do with precision, but because the purpose of the network was communication, not profit. Thus what might be termed traditional economic concerns simply did not enter into the design of the ARPANET and NSFNET's routing systems, nor almost any other aspect of the early networks.

The initial research facilities whose networks comprised the ARPANET and NSFNET routed and forwarded each other's packets because they conceived of their local networks as part of something larger, something which they considered beneficial to be part of. As the Internet grew and the hardware required to continue participating in these networks (and subsequently the Internet) became bigger and more expensive, these institutions kept pace because of the network effect: the value of being connected was steadily rising as more and more other networks joined. Value, of course, meant access to academic resources, and membership for a university's faculty in an ever-growing online world, where e-mail lists and electronic journals were beginning to redefine academic publishing, particularly in the sciences. These networks offered comprehensive connectivity to the entire Internet, and continued to do so as the Internet became commercialized, despite rising costs and complexity of management.

The value of connectivity to the entire Internet remains sufficient to motivate the operators of autonomous commercial networks to route each others' packets, since they implicitly expect that all other networks will route their packets, too. There are no wide-scale formal arrangements or financial relationships which support Internet routing, only the common interests of participating networks in comprehensive connectivity. While detailed identification and monitoring of packets is still quite difficult (and long considered unnecessary because packet transit was considered "too cheap to meter"), it is possible. Should an e-mail from Toronto to Auckland cost more than to Chicago? E-mail has always been considered distance insensitive, like all Internet communications. Once one has paid for a connection, one can "travel" just about anywhere for no additional cost. The implicit agreement of thousands of separate network operators to route each others' packets for free (as a separate matter from physical interconnection) is invisible because it has just always been this way.

It has been this way, it is suggested, because that was the paradigm which governed the Internet's non-commercial predecessor networks, not because it is the most economically efficient. In fact, much of the early work in Internet economics has been aimed at reconciling the economic ideal that prices be matched to costs with the tremendous success of flat-rated and non-usage sensitive pricing in the Internet.22 As we shall see, the existing model is beginning to be tested in many ways, particularly in the context of peering and interconnection, where more economically rational financial arrangements have already begun to be imposed.

Routing relationships among networks are far less formal than interconnection relationships because the many networks involved in routing particular packets are functionally anonymous and both geographically and logically remote. However, since network-to-network routing (as opposed to end-to-end routing) is directly related to network

22 See J.K. MacKie-Mason & H.R. Varian, "Pricing the Internet" in B. Kahin & J. Keller, eds., Public Access to the Internet (Cambridge, MA: MIT Press, 1995) 269 and L.W. McKnight & J.P. Bailey, eds., Internet Economics (Cambridge, MA: MIT Press, 1997), particularly J.K. MacKie-Mason and H.R. Varian, "Economic FAQs About the Internet," at 27, and D.D. Clark, "Internet Cost Allocation and Pricing," at 215.

interconnection, changes in peering and transiting links would also affect the chain of packet routing. Any derogation from the commitment of autonomous network operators to route each others' packets would have significant impact on the way the Internet works. The embrace of interconnectivity for its own sake has created the expectation among users that being "on the Internet" means being able to reach any other part of it, with no additional (or "long distance") charge.

Even though they are divided and distributed as described above, the various lists of Internet locations (below the root zone level) are still quite large. To ease the load which would fall upon one authoritative list of domain names, there are thirteen identical name servers located around the world.23 Of the 13, the "A" root server is by far the most important. The other root servers contain only copies of its zone files, which they regularly download to remain current.

The "A" root server was maintained by Network Solutions, Inc. (NSI) under its Cooperative Agreement with the United States government for six years.24 However, control over the "A" root server was transferred to the Internet Corporation for Assigned Names and Numbers (ICANN) in summer 1999. Control over the "A" root server, which at its most basic level can be thought of as simply a data file with 250 or so entries, constitutes nominal control over the Internet. However, several factors militate against the indiscriminate exercise of this power. If those who controlled the "A" root server were to add or remove TLD zones at whim, the administrators of the largest commercial networks

23 See E. Rony & P. Rony, The Domain Name Handbook: High Stakes and Strategies in Cyberspace (Lawrence, Kansas: R&D Books, 1998) at 66ff. and J. Murai, Presentation to ICANN Root Server System Advisory Committee, May 26, 1999, Berlin, Germany, slide no. 5: <http://cyber.law.harvard.edu/icann/berlin/archive/jun-murai-pres/sld005.htm>.

24 See Network Information Services Manager(s) for NSFNET and the NREN: INTERNIC Registration Services Cooperative Agreement No. NCR-9218742 between National Science Foundation and Network Solutions, Incorporated [sic], dated January 1, 1993 (hereinafter "Cooperative Agreement"), <http://www.networksolutions.com/nsf/agreement/index.html>. The agreement was initially made by the U.S. National Science Foundation, but responsibility for its administration was transferred to the U.S. National Telecommunications and Information Administration (NTIA), in the Department of Commerce, in 1997.

would likely simply refer to other name servers running archived copies of the old root zone, should they not agree that the changes should be made.

Changes to the root therefore require the recognition of many independent network operators, many of whom do not consider themselves obligated to refer to any particular source for root zone information. That being said, the service provided by NSI has generally been considered the "official" version of the root zone, to which network operators have almost universally pointed their name servers. As part of the ICANN process discussed below, there are efforts underway to formalize the relationships between the various name server operators, most of whom remain volunteers at government and academic institutions, carryovers from the Internet's non-commercial era.25

A very high level of implicit agreement about the way the Internet should work, bolstered by regular e-mail "meetings," essentially serves as the unwritten constitution of the root server system. All but three are located in the United States (the others are in Sweden, Japan, and the United Kingdom), such that routing of most of the world's Internet traffic depends on the operation of root servers in the United States. The fact that the root server operators have continued to perform their tasks in the commercial era just as they did in the non-commercial era gives the impression that the root server system is innocuous, even unimportant. The high level of cooperation and shared values among the surprisingly small group of individuals who operate the root server system has kept disagreement and controversy to a minimum. However, just as the importance of this largely voluntary work should not be discounted, the uncontentious operation of the root server system should not be assumed to be immutable.

The most significant aspect of control over the root zone in the short term has been control over the introduction of new top-level domains. Opinions vary widely on whether new TLDs are needed, and if so, what they should be, who should control them, who should be able to register names in them, and on what terms. The longer that NSI's gTLDs

25 In RFC 2010, B. Manning & P. Vixie, "Operational Criteria for Root Name Servers" (October 1996), the authors write: "Historically, the name servers responsible for the root (".") zone have also been responsible for all international top-level domains (iTLDs, for example: COM, EDU, INT, ARPA). These name servers have been operated by a cadre of highly capable volunteers, and their administration has been loosely coordinated by the NIC (first SRI-NIC and now InterNIC). Ultimate responsibility for the correct operation of these servers and for the content of the DNS zones they served has always rested with the IANA."

are the only gTLDs (although several ccTLDs are commercially operated like gTLDs),26 the more entrenched its dominant position in the "market" becomes. The past three years of Internet governance debate have been largely focused on this issue of adding new TLDs to the "official" root zone. IANA and NSI have been sued,27 global organizations have been formed,28 agreements almost made,29 and then all subsequently overtaken by events. The United States government's assertion of control in June 1998 changed the reform process from one centered in the Internet community to one directed by the United States Department of Commerce.

The management of the DNS has attracted far more attention than that of the IP numbers which they overlay, due to their greater mnemonic and commercial value. When the network consisted of only universities, research centers and defence contractors, disputes over the names of computers were apparently either rare or solvable by personal means. Even after what we now know as the gTLDs were introduced, there were few conflicts because so many possible names were available. Unfortunately, just as the initial designers of ARPANET thought that an 8-bit address space would provide all the numbers the network would ever need, the designers of the DNS could not have imagined the stress which would ultimately be placed on the name space. Formal efforts to reform the DNS have been underway since at least June 1996, when Jon Postel (who will be fully introduced shortly) circulated the first draft of his proposal for the institutionalization of his function and the addition of new TLDs.30 The "DNS mess," or the "DNS wars," as this long, fractious process is known in the Internet community, has brought the questions "who runs the

26 See, for example, The .TV Corporation (of Toronto, Ontario), which registers names in the ".tv" ccTLD under an agreement with the government of Tuvalu, and TJ Network Services (of Fresno, California), which administers the ".tj" ccTLD with the permission of the former Soviet republic of Tajikistan.

27 See Rony & Rony, supra note 23 at 546ff., "Image Online Design: Rival Registry Sues IANA."

28 The primary one being the International Ad Hoc Committee (IAHC).

29 Generic Top Level Domain Memorandum of Understanding (gTLD-MoU).

30 See Rony & Rony, supra note 23 at 522ff.

Internet?" and "by what authority?" into the public consciousness, albeit a very narrow portion of the public. The root cause of the DNS wars was a seemingly innocuous outsourcing contract made before the word "Internet" was even part of the public vocabulary.

On January 1, 1993, the mundane task of maintaining the root zone and the lists of gTLDs was assigned to Network Solutions, Inc.31 The NSF retained NSI to perform registration services for the research networks which it funded, and, among other things, gave it a five-year monopoly over the registration of SLDs in the international or generic TLDs (known earlier as "iTLDs" but now universally as "gTLDs").32 It was only later that the Cooperative Agreement was amended to allow NSI to charge a fee for the registration and renewal of names in the gTLDs.33 From there, NSI took its government-granted monopoly and ran. In the summer of 1997, just as the United States government was announcing that it did not intend to renew NSI's monopoly, but rather to see to it that NSI have competitors,34 NSI made an initial public offering of a small minority of its shares.35 NSI reported profit of USD$4.8 million in the first quarter of 1999, on net revenues of USD$38.1 million.36 The mundane task of registering domain names is now very big business, and many others have been clamoring for a piece of the action since roughly 1995. After considering the historical governance structure of the domain name system, we will return to the reform process which this phenomenon spurred.

31 There were other contractors which are not relevant for present purposes. The reader is referred to the excellent history of the DNS and of the Internet's infrastructure generally in Rony & Rony, supra note 23 at Chapter 4, "Follow the RFCs: The Development of the Domain Name System (DNS)."

32 See Cooperative Agreement, supra note 24.

33 See NSF Cooperative Agreement No. NCR-9218742, Amendment 4, <http://www.networksolutions.com/nsf/agreement/amendment4.html>.

34 The White House, A Framework for Global Electronic Commerce (July 1, 1997), .

35 NSI is a subsidiary of American defence contractor Science Applications International Corporation (SAIC). See SAIC, "SAIC Subsidiary Profile: Network Solutions, Inc.," <http://www.saic.com/company/subsidiaries/>.

36 See Network Solutions, Inc., News Release, "Network Solutions Announces Record 1999 First Quarter Revenue and Earnings" (April 22, 1999), .

4. Jon Postel and the Internet Assigned Numbers Authority

Considering the broad constituencies which rely on them, the Internet name and number spaces have been remarkably stable since they were first used as the basis of what we now know as a global communications network. An often underestimated reason for this stability is a remarkable continuity in the personalities leading the network's informal governance structures. For the entire history of the Internet, including the eras of the ARPANET and NSFNET, the DNS and several other key coordinating functions were overseen by Dr. Jonathan Postel.

Jon Postel began his service to the ARPANET in 1968 as a 25-year-old University of California at Los Angeles (UCLA) graduate student. Whereas Vinton ("Vint") Cerf will likely be known to Internet history as a co-inventor of TCP/IP and tireless evangelist for commercial internetworking, Jon Postel will be remembered as the Internet's "beneficent dictator," who placed the interests of the network above all other considerations. By his constant presence and the trust which the Internet community placed in him, Postel held unique power over the Internet, power which he maintained by not abusing. David Clark, an MIT professor and former Chair of the Internet Architecture Board (IAB), described Postel and his function as follows in a 1997 interview:

...the person who was making all these judgmental administrative decisions for the Internet since its beginning is Jon Postel, who is the embodiment of something more formally called the Internet Assigned Numbers Authority, IANA. People are sometimes amused - "Oh, it must be this grand thing called IANA" - when they want to get a number, and in fact Jon is this guy with long hair and a beard and sandals who lives overlooking Marina del Rey, and you call him on a telephone and he says, "Well, you want some numbers, yeah." That's the Internet [Assigned] Numbers Authority. It is as public a service and as non-profit as you can imagine, it's just this guy.37

The personal influence of Postel over the Internet, and particularly the DNS, should not be underestimated. Not only was he involved in the creation of the all-important authoritative lists of IP numbers, domains, and addresses, but he maintained them with almost unchallenged moral authority. His employer for most of his career, the Information Sciences

37 Transcript of Dialogue between J. Zittrain and D. Clark, Harvard Law School, Cambridge, Massachusetts (1 October 1997).

Postel's stewardship of the master list of master lists, the root zone, put him squarely at the centre of the controversy over whether to add new TLDs to it. This fascinating exchange from Jonathan Zittrain's October, 1997 in-class interview with David Clark at the Harvard Law School demonstrates Postel's formal authority over such matters and the principles which guided his approach to them:

Clark: ... So the top-level domain table is maintained by Jon Postel. So if you want to do .bus -
Zittrain: I've got to talk Jon Postel into it.
Clark: You have two choices: one, you can just start a whole new naming structure which has nothing to do with mine, but then the question is how would your computer ever find it. The other is you go to Jon Postel and you say, "Can I have .bus?"
Zittrain: And he says -
Clark: No. {laughter}
Zittrain: I say, "I'll give you $1 million."
Clark: And he says, "I'm not moved by money. It doesn't make sense for the good of the Internet. I think the answer is no."
Zittrain: Now this is extraordinary. How many people knew this before today's class?
[audience]: Well, what if the man dies? {laughter}
Zittrain: Does he have heirs ... (inaudible)? {laughter}
Clark: You see, this is why the contractual structure becomes so interesting, because it is in a very technical sense a memorandum of understanding which has possibly now expired between an institution of the government, the National Science Foundation, and a private sector, non-profit institution, the Information Sciences Institute. Technically speaking, the responsibility for carrying out the IANA is an institutional responsibility.

38 A.M. Rutkowski, "US DOD [Internet] Assigned Numbers [Authority], Network Information Centers (NICs), Contractors, and Activities: known detailed history" [sic] (1996).

Zittrain: Whoever fills his sandals at the institute...
Clark: That's right. But we all know that - I mean, quite apart from the contractual structure, we are tremendously dependent on what has been twenty years of incredibly good judgment from Jon, and yes, if he fell in a hole or decided he wanted to take up Zen or something, I don't know what we would do. [...] now with the world being so commercial, if we lost the personal stature he brings to this, I honestly don't know what we would do. I mean, we would have to do something that's much more either convoluted, Machiavellian or public, and I don't know which one that would be.39
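The "top-level domain table" at issue in this exchange can be pictured, in grossly simplified form, as a single lookup table mapping each TLD to the nameservers delegated to serve it: a name whose TLD is absent from the table simply cannot be found. The sketch below is purely illustrative; the entries and server names are hypothetical, not drawn from the actual root zone file, which uses DNS zone-file syntax with NS and glue records.

```python
# Grossly simplified model of the DNS root zone: one authoritative
# mapping from top-level domain to the nameservers responsible for it.
# (Hypothetical entries for illustration only.)
ROOT_ZONE = {
    "com": ["a.gtld-servers.net"],
    "net": ["a.gtld-servers.net"],
    "org": ["a.gtld-servers.net"],
    "ca":  ["ca-tld-server.example"],  # hypothetical entry
}

def resolve_tld(name: str) -> list[str]:
    """Find which nameservers to ask about a domain name's TLD."""
    tld = name.rstrip(".").rsplit(".", 1)[-1]
    servers = ROOT_ZONE.get(tld)
    if servers is None:
        raise LookupError(f".{tld} is not in the root zone")
    return servers

servers = resolve_tld("example.com")  # .com is delegated, so this succeeds
# resolve_tld("example.bus")  # raises LookupError: nobody has added .bus
```

Adding a new TLD such as .bus amounts to adding one entry to this table, which is why control over who may edit it was so fiercely contested.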

Jon Postel's death following heart surgery on October 16, 1998, just over one year after this interview, forced precisely the kind of choices which Clark alluded to in his last sentence.40 His description of the options as being "something that's much more either convoluted, Machiavellian or public" turned out to be an extremely apt characterization of the different approaches to life after Postel, and of competing visions for Internet governance more generally. Since Postel's death, the focus of those competing visions has been the Internet Corporation for Assigned Names and Numbers (ICANN).

5. The Internet Corporation for Assigned Names and Numbers

ICANN was created at the direction of the United States Department of Commerce, National Telecommunications and Information Administration (NTIA) in October, 1998. ICANN was intended to be the vehicle by means of which the "Internet community" would take over the United States government's responsibility for the management of Internet names and addresses.41

39 Zittrain/Clark Transcript, supra note 37.

40 See K.N. Cukier, "The Internet Loses Its Head" Wall Street Journal (October 22, 1998) A22. See also RFC 2468, V. Cerf, "I Remember IANA" (October 1998).

41 The nature of, and basis for, the United States' "responsibility" or jurisdiction over the Internet's technical infrastructure is itself a contested issue. We will return to this point in Section VIII.D.4, below.

The White Paper

When it appeared in September, 1996 that his proposal for DNS reform would not achieve the degree of consensus which would be required to implement it, Jon Postel requested the Internet Society (ISOC)42 to organize a more broad-based committee to attack the issue. That process produced a proposal which, once again, the "Internet community" rejected.43 This time, critics charged that the proposal was tainted by the involvement of governmental institutions; did not give enough weight to commercial interests; and constituted a "theft" of the Internet by one or more of: big government, big business, or non-Americans. In late 1997, the NTIA asserted control of the process.

The NTIA, the United States Department of Commerce agency explicitly directed by the White House to "privatize" the DNS, released its initial policy proposal (known as the "Green Paper") in January, 1998, and solicited comments.44 The NTIA, led by Senior White House Advisor Ira Magaziner, produced its definitive statement on DNS policy in June, 1998, in what is known as the "White Paper."45 The White Paper enthusiastically embraced the ideal of self-regulation for the Internet's infrastructure. Those interested in creating the international non-profit corporation which the White Paper called for formed another ad hoc organization, the International Forum on the White Paper (IFWP).46 The IFWP held several meetings in the summer of 1998 at which contributors debated details of the proposed entity's structure and governance. However, it appears that just as many IFWP members felt that they were on the verge of consensus on just the kind of new entity which

42 See

43 See gTLD-MoU, supra note 29.

44 United States Department of Commerce, A Proposal to Improve Technical Management of Internet Names and Addresses: Discussion Draft 1/30/98 (30 January 1998) (hereinafter "Green Paper"), <http://www.ntia.doc.gov/ntiahome/domainname/dnsdrft.htm>.

45 United States Department of Commerce, Management of Internet Names and Addresses (5 June 1998) (hereinafter "White Paper").

Amendment 11 and the Recognition of ICANN

While Postel's Washington lawyer was drafting the corporate documentation to create the new-IANA, the NTIA was beginning to lay the legal groundwork for competition in gTLD name registration. NSI's monopoly and its contract to operate the primary root server were extended to September 30, 2000 to allow "NewCo," as new-IANA was obliquely referred to at the time, to prepare to take over the United States' responsibilities.47 Amendment 11 to the Cooperative Agreement also provided for the "development, deployment and licensing by NSI of a mechanism that allows multiple registrars to accept registrations for the generic top level domains (gTLDs) for which NSI acts as a registry."48 NSI was to develop a "Shared Registry System" (SRS) on a given timeline and license it to "Accredited Registrars" which NewCo would choose at a later date.

The new-IANA proposal for NewCo, which it proposed be a California non-profit corporation called the Internet Corporation for Assigned Names and Numbers,50 prevailed, and ICANN was endorsed by the Department of Commerce on November 25, 1998.51 That memorandum of understanding called for the parties to work together on a "DNS Project":

47 See G. Cook, "A Shadow Government: Clinton Administration To Establish Public Authority (New IANA Corp) To Run Internet" The COOK Report on Internet (November, 1998 - Extra Edition), <http://www.cookreport.com/sellout.htm>.

48 See NSF Cooperative Agreement No. NCR-9218742, Amendment 11 (hereinafter "Amendment 11"), <http://www.networksolutions.com/nsf/agreement/amendment11.html>.

50 See .

51 See Memorandum of Understanding Between The U.S. Department of Commerce and Internet Corporation for Assigned Names and Numbers (November 25, 1998) (hereinafter "NTIA-ICANN MOU"), <http://www.ntia.doc.gov/ntiahome/domainname/icann-memorandum.htm>.

In the DNS Project, the Parties will jointly design, develop, and test the mechanisms, methods, and procedures that should be in place and the steps necessary to transition management responsibility for DNS functions now performed by, or on behalf of, the U.S. Government to a private-sector not-for-profit entity.52

ICANN was formally recognized as that private-sector not-for-profit entity on February 26, 1999.53 ICANN's purposes, set out in its Articles of Incorporation, are:

(i) coordinating the assignment of Internet technical parameters as needed to maintain universal connectivity on the Internet;
(ii) performing and overseeing functions related to the coordination of the Internet Protocol ("IP") address space;
(iii) performing and overseeing functions related to the coordination of the Internet domain name system ("DNS"), including the development of policies for determining the circumstances under which new top-level domains are added to the DNS root system; and
(iv) overseeing operation of the authoritative Internet DNS root server system.54

However, ICANN's first task involved opening up the .com, .net and .org TLD registries to competition. So important was it for the NTIA to speed the end of NSI's monopoly that this task was begun while ICANN had only an interim board of directors and no representative membership structure. Efforts to identify such a structure continue.

The Shared Registry System (SRS)

ICANN and NTIA announced the names of the first five competitive registrars on April 21, 1999.55 The five "testbed registrars" were to perform a sixty-day test of NSI's Shared Registry System, before a further group of twenty-nine Accredited Registrars were to join them. As far as the stock market was concerned, NSI's future

52 See ibid., section B: "Purpose."

53 See Letter from J. Beckwith Burr, United States Department of Commerce, to David Graves, Network Solutions, Incorporated [sic] (February 26, 1999).

54 Section 3, Articles of Incorporation of Internet Corporation for Assigned Names and Numbers (As Revised November 21, 1998).

55 See ICANN, News Release, "ICANN Names Competitive Domain-Name Registrars" (April 21, 1999).

earnings were not significantly threatened by the introduction of competition. NSI's shares surged 52% on the news that NSI would be allowed to charge prospective competitors USD$10,000 for access to the SRS, USD$18 for each new registration and USD$9 per year for each renewal.56

The SRS scheme was immediately criticized for failing to introduce effective competition and for imposing a burdensome contract with NSI on aspiring alternative registrars.57 A group of Internet professionals called the Boston Working Group, which had submitted a competing NewCo proposal to the NTIA, offered this view in an April 23, 1999 news release:

[T]he so-called "new competition" is nothing but an agreement to resell entries in NSI's registration database at a price regulated by the US government. Prior to the ICANN plan, hundreds of registrars were already reselling NSI names, at a higher price.58

Boston Working Group member Milton Mueller of the School of Information Studies at Syracuse University continued:

[The NTIA] seems to be imposing a cost-plus, utility regulation model upon the core functions of the Internet. I don't understand why NTIA is opting for price regulation when it could simply open the market to new players and allow customers to have real alternatives. Besides, NTIA lacks the experience, the competence, and the legal authority to engage in economic regulation of Internet name services.59

The SRS testbed period was subsequently delayed by a month until July 24, 1999, but even as of the date on which it was originally scheduled to end, only one of thirty-seven companies accredited at the time had actually begun registering gTLDs in competition

56 "NSI skyrockets on fee collection decision" CNET News.com (April 21, 1999).

57 See Registrar License and Agreement, <http://www.ntia.doc.gov/ntiahome/domainn~199.htm>.

58 See Boston Working Group, News Release, "BWG Disputes ICANN's Domain Name Competition Claims" (April 23, 1999).

with NSI.60 Concurrent with the woes of the SRS, ICANN has been assailed from all sides as going too far, not going far enough, overstepping its authority and violating its own by-laws. ICANN's announcement of its intention to levy a USD$1 charge on each SRS registration to fund its operation was attacked as "taxation without representation."61 On June 15, 1999 the pitched battle with NSI over control of the gTLD registry came out in the open as ICANN's chair, Esther Dyson, publicly accused NSI of dragging its feet to avoid competition.62 Chairman of the United States House of Representatives Commerce Committee, Tom Bliley (of Virginia - NSI's home state), spearheaded a campaign against ICANN in Congress. His Committee held public hearings into ICANN on July 22, 1999.63 Making matters even worse was the fact that by mid-July, 1999, ICANN had run out of the start-up money which major Internet companies had donated to it,64 and was forced to "pass the hat" once again.65

At time of writing, ICANN's future is unclear. Set up at the behest of the NTIA and propped up both publicly and privately by NTIA official J. Beckwith (Becky) Burr, ICANN was to fulfill the lofty designs of the White Paper for open, transparent, and accountable private-sector management of the DNS. Not surprisingly, ICANN is finding that managing a transition from monopoly to competition is no easy task. The only supporter which ICANN seems to have left is the United States government, the same entity which started the Green Paper process specifically so it could get out of Internet governance. The

60 K. Kaplan, "Web Power Struggle Delays New Domain Name System" LATimes.com (24 June 1999).

61 NSI most vocally characterized the fee as a tax, so as to attract the attention of the United States Congress. See C. Macavinta, "ICANN running out of money" CNET News.com (7 July 1999), <http://news.cnet.com/news/0-1005-200-344529.html>.

62 ICANN, "Esther Dyson's Response to Questions," letter from E. Dyson to R. Nader and J. Love, Consumer Project on Technology (15 June 1999).

63 See House Committee on Commerce, News Release, "Bliley Blasts ICANN Management of Domain Names: Questions Authority To Levy Domain Name Tax" (22 June 1999), and E. Wassermann, "Congress Ready to Weigh In on ICANN" The Industry Standard (15 July 1999). The hearing was titled "Domain Name Privatization: Is ICANN Out of Control?".

64 C. Duffy Marsan, "New Net domain name authority out of cash" Computerworld (15 July 1999), <http://www.computerworld.com/home/news.nsf/all/9907154icann>.

65 J. Simons, "Internet-Address Firm Receives 7 Loans But Says It Still Needs $1 Million More" Wall Street Journal (23 August 1999) B5.

process has been officially described as being "consensus-based," but it is now clear that there is anything but consensus about ICANN's existence, let alone any of its policies.

The worldwide debate over the addition of new TLDs has been raging for several years and pits trade-mark holders, entrepreneurs, individual users, domain name rights coalitions, and NSI against each other. ICANN has been placed by the United States government in the middle of this fray and is expected to be the ultimate authority over the DNS and other key elements of the Internet's technical infrastructure after September 30, 2000, when the NTIA-NSI Cooperative Agreement expires. "Opening up" the existing gTLDs will appear easy compared to the task of introducing new TLDs. At least with ICANN's first task, everybody (except NSI) agreed that NSI's monopoly had to end. Given the degree of consensus which had existed around that issue, it is remarkable that ICANN appears to have so few supporters.

In view of the breadth of its proposed responsibilities, and the relatively narrow portion on which it has attempted to act thus far, things will probably not get any easier for ICANN. By making substantive decisions before it has established its membership structure or held elections for its board of directors, ICANN may have already used up any goodwill which the Internet community may have been willing to give it. That community, mainly administrators of networks of all sizes, has always been highly distrustful of people or bodies claiming authority over them, especially bodies related to government. Skepticism about ICANN's ability to prevail over this rancorous group was expressed by Robert Shaw, the International Telecommunication Union's (ITU) Internet policy advisor and a veteran of the International Ad Hoc Committee (IAHC) and IFWP processes, in a presentation to the ICANN interim board of directors in Brussels in November, 1998. The points on Shaw's last slide from that presentation need no elaboration:

Enjoy these days - they are your best!

- Prepare for firestorm once substantive decisions are made
- Don't start at ground zero, most of the hard work has been done by others with arrows in their backs...
- Wish you luck...66

66 R. Shaw, "Public Policy Issues," Presentation to ICANN interim board of directors, Brussels, Belgium (25 November 1998), <http://people.itu.int/~shaw/presentations/brussels-icann.ppt>.

ICANN has not yet attempted to exercise powers based on more than one of its four enumerated areas of responsibility. Of these four, the first warrants particular attention. "Coordination of the assignment of Internet technical parameters as needed to maintain universal connectivity on the Internet" may sound fairly dry and technical, but these words are actually a very strong statement of network policy. Unfortunately, they may not have been intended that way. There may be a gap between the traditional Internet community conception of what Postel and IANA did, and the gravity of the responsibility of maintaining universal connectivity on the Internet. While the assignment of Internet technical parameters may have been relatively manageable for Postel to coordinate, the vastly enlarged group of stakeholders in the commercial Internet suggests that this task may become more and more difficult.

As we shall see, we can no longer assume that universal connectivity will always be a feature of the Internet, so charging any entity with responsibility for "coordinating the assignment of Internet technical parameters as needed to maintain universal connectivity on the Internet" is remarkable. The words likely capture the goals of most Internet stakeholders, and certainly accurately characterize what Postel aimed to do throughout his personal stewardship of the network. However, precisely what those words mean, and specifically whether they ground mandatory or merely hortatory powers, will likely be the root of significant continuing discord among those stakeholders.

The significance of this power may only be appreciated as time passes (if ICANN survives long enough to exercise it). Most of the users and businesses which depend on the Internet either do not think about the infrastructure changing, or assume that if it does change, it will be for the better. The trouble is knowing what "for the better" means. The technical parameters of the Internet determine its capabilities. In a very real sense, they restrict the range of technical abilities of the network and therefore the range of services and activities which can run on top of it. The stability and low profile of the people and institutions which have traditionally coordinated the Internet's identifier and routing systems created the impression that these essential management tasks more or less take care of themselves and are inherently free of controversy. The DNS mess highlights this misconception, and is likely only the first of many technical infrastructure controversies. And, in any event, determining what "for the better" means with respect to the management of the evolving Internet is a fundamentally political process, not merely technical.

6. PROTOCOLS AND STANDARDS

As a computer network, the Internet is defined by the software protocols which its constituent parts use in common. There are hundreds of protocols pertaining to all aspects of the Internet's operation, and many more are under development in attempts to improve the Internet's security, functionality and privacy. The protocols and standards which define the basic level of functionality of the Internet today were developed in an environment in which the science of computers and communications took precedence over commercial or even explicit public service considerations. This environment produced a remarkably open, accessible, and functional system which has now been converted to commercial use. As a brief exploration into the process by which Internet standards and protocols are set will demonstrate, there are identifiable reasons why the Internet displays such unprecedented degrees of openness, accessibility, and functionality. While this process may be "bottom-up," it is anything but unprincipled or private.

No other element of the Internet's infrastructure is as essential to its operation as the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of communications protocols. Developed by Internet "pioneers" Robert Kahn and Vinton Cerf in the mid-1970s, what became known as TCP/IP is a set of simple rules for the exchange of communications transmissions.67 Specifically, it provides for the orderly exchange of "packets" of data. Instead of a constant stream of electrical impulses (or ones and zeros) which comprise a telephone call, Internet transmissions are short chunks of ones and zeros (or bits) called packets. Packet-switched networks do not hold transmission lines open throughout a conversation between two terminals, as do switched telephone networks. Rather, outgoing transmissions are divided into packets, sent out onto the network one by one, where they travel whatever path is most efficient at that particular moment, and are reassembled at the

67 V.G. Cerf & R.E. Kahn, "A Protocol for Packet-Network Intercommunication" IEEE Transactions on Communications (May 1974) 627.

receiving terminal. TCP/IP is the common set of rules which allows this process to work smoothly.
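The packet-switched transmission just described - divide, route independently, reassemble in order - can be sketched in a few lines. This is a deliberately naive illustration under simplifying assumptions (a fixed payload size and a shuffling function standing in for the network); real TCP/IP adds headers, checksums, acknowledgements, and retransmission, none of which appear here.

```python
# Naive sketch of packet-switched transmission (illustration only).
import random

PACKET_SIZE = 8  # bytes of payload per packet (tiny, for illustration)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into numbered packets."""
    return [(seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def network(packets):
    """Packets may arrive in any order, each having taken its own path."""
    shuffled = list(packets)
    random.shuffle(shuffled)
    return shuffled

def reassemble(packets) -> bytes:
    """The receiving terminal restores the original order by sequence number."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets travel independently and are reassembled on arrival."
received = network(packetize(message))
assert reassemble(received) == message
```

Because each packet carries its own sequence number, the sender and receiver need no dedicated circuit between them; any route that delivers the packets will do, which is the property that lets the network "branch anywhere."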

After Cerf and Kahn's initial design for TCP/IP was published, it was adopted and experimented with by university computer science departments, beginning in the United States, but thereafter around the world. The strength of the early TCP/IP protocol suite was in its simplicity, and it quickly became the preferred communication protocol of the international computer science community. Meanwhile, ISO and many national standards bodies were developing another model for "internetworking," known as open systems interconnection (OSI).68 TCP/IP was considered by those supporting the OSI reference model as merely an academic toy and not fit for a real, global data network. Compared to the massive global infrastructure of the telephone system, the ARPANET was considered no more than a quaint experiment. However, the standardization of the ARPANET on TCP/IP on January 1, 1983 (from the antiquated Network Control Protocol (NCP) which had served the ARPANET to that point) set off an explosion of network development which nobody in the commercial or regulatory world foresaw. TCP/IP is still the technical "glue" of the Internet and, as we shall see, is now in the process of revolutionizing the telecommunications industry as a whole.

TCP/IP is not just another technical specification. It has fundamentally different properties from previous communications protocols. It was designed not by commercial telecommunications hardware vendors, but by government-funded academic researchers; not for commercial purposes, but for collaborative scientific purposes. This was a discrete environment and group of individuals whose goal was to develop a particular kind of network, as an end in itself. This passage from the Brief History of the Internet, co-authored by several of the Internet's "founding fathers," highlights the basic design goal of the ARPANET:

The original ARPANET grew into the Internet. [...] The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture". [...] While there were other limited

68 Some information in this paragraph is drawn from Hafner & Lyon, supra note 16 at 246-251.

ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.69 (emphasis in original)

The goal of developing an "internetworking architecture" was not to maximize its profit-generating potential, nor to create nor preserve the hegemony of any particular standard, company, or nation. Nor was it to develop a weapon of war, despite early networks' funding out of defence budgets.70 Their goal was to create a flexible, versatile, and above all else simple architecture for internetworking. This goal has its practical expression in TCP/IP.

While the mythology of the Internet suggests that TCP/IP was chosen by the autonomous operators of the many independent commercial and institutional networks which comprise the modern Internet, history does not bear out this claim. First, commercial networks did not create the Internet. Rather, they interconnected with existing networks and adopted their protocols. Those protocols have been scaled significantly to accommodate the massive influx of commercial networks to the Internet, but they have not been replaced. Second, TCP/IP did not spontaneously become the sole protocol of those early networks. One of those networks, CSNET (for "Computer Science Network"), was funded by the NSF and supported several protocols. Another, and unquestionably the most important, was the ARPANET. Robert Kahn, the co-inventor of TCP/IP, explains how TCP/IP came to be the sole protocol for the ARPANET:

The TCP/IP protocol adopted by DOD a few years earlier was only one of many [networking] standards. Although it was the only one that dealt explicitly with internetworking of packet networks, its use was not yet mandated on the ARPANET. However, on January 1, 1983, TCP/IP became the standard for the ARPANET, replacing the older host protocol known as NCP. This step was in preparation for the ARPANET-MILNET split, which was to occur about a year later. Mandating the use of TCP/IP on the ARPANET encouraged the

69 B.M. Leiner, V.G. Cerf, D.D. Clark, R.E. Kahn, L. Kleinrock, D.C. Lynch, J. Postel, L.G. Roberts & S. Wolff, "A Brief History of the Internet" (Version 3.1), <http://www.isoc.org/internet/history/brief.html>.

70 There is a widely-held but inaccurate belief that the Internet was designed to survive a nuclear attack. Not only is it not true with respect to the Internet, such considerations did not motivate ARPANET's developers either. See Hafner & Lyon, supra note 16 at 10.

addition of local area networks and also accelerated the growth in numbers of users and networks.71 (emphasis added)

Kahn understates the phenomenon: standardization on TCP/IP triggered an explosion of interconnection. Hafner and Lyon describe the event this way:

As milestones go, the transition to TCP/IP was perhaps the most important event that would take place in the development of the Internet for years to come. After TCP/IP was installed, the network could branch anywhere; the protocols made the transmission of data from one network to another a trivial task.72

The network is still branching today. The fundamental simplicity and power of TCP/IP internetworking is one of the primary factors in the Internet's success. Kahn indicates that it was only after the ARPANET was standardized on TCP/IP that the protocol was embraced by hardware vendors, since the mushrooming of IP networks on university campuses had created demand for mass-production TCP/IP-enabled equipment:

By the mid-1980s, industry began offering commercial gateways and routers and started to make available TCP/IP software for some workstations, minicomputers, and mainframes. Before this, these capabilities were unavailable; they had to be handcrafted by the engineers at each site.73 (emphasis added)

Not only did the computer and telecommunications equipment industries not invent TCP/IP, but they were committed to business models which did not follow the design philosophies of universal interoperability and low-level simplicity. Brian Kahin and Bruce McConnell offer another view of the transition to TCP/IP and its significance:

A watershed decision during the mid-1980s was NSF's choice of the TCP/IP protocol rather than a proprietary protocol or X.25. As Mandelbaum and Mandelbaum observe: "It led almost directly to the establishment of the system of specialized private academic networks we have today [1992], rather than to reliance by the academic and research

71 R.E. Kahn, "The Role of Government in the Evolution of the Internet" (1994) Vol. 37, No. 8 Communications of the ACM 15 at 17.

72 Hafner & Lyon, supra note 16 at 249.

73 Kahn, supra note 71 at 18.

community on the public, commercial networks that are the mainstays of the business world."74

Both the design of TCP/IP and its adoption by the non-commercial Internet community were fundamentally non-market phenomena. The Internet's protocols and standards were developed in an institutional context, for explicitly non-commercial purposes, and only later adopted by commercial interests. The operation of a communications network in the best interests of the network itself, for the benefit of its users, is a fundamentally different proposition from the operation of a network in the best interests of its owners, for the purpose of making a profit. An unprecedented collaboration among many people all over the world, without expectation of profit, produced the basic protocols and standards on which the commercial Internet of today is built. It is to the field of Internet standards that we now turn, to add another element to the unique compound of forces and institutions which have contributed to the governance of the Internet.

2. The Internet Engineering Task Force

The primary Internet standards-development body is the Internet Engineering Task Force (IETF). The term "standards-setting" is not an appropriate way to describe what the IETF does. The IETF is a forum through which standards are proposed, experimented with, and then, if they meet the collective approval of the membership, put forward as standards which equipment vendors and networks are free to adopt or not adopt.75 The IETF does not impose standards. In fact, it is not even technically accurate to describe the IETF as an agency or membership organization at all. The IETF only strictly exists when its "attendees" are convened in one of its tri-annual meetings.76 Beyond those meetings are the

74 B. Kahin & B. McConnell, "Towards a Public Metanetwork: Interconnection, Leveraging, and Privatization of Government-Funded Networks in the United States," in E. Noam & A. NiShuilleabhain, eds., Private Networks Public Objectives (Amsterdam: Elsevier Science, 1996) 307 at 315, quoting R. Mandelbaum & P.A. Mandelbaum, "The Strategic Future of the Mid-Level Networks," in B. Kahin, ed., Building the Information Infrastructure (New York: McGraw-Hill, 1992) 62 at note 6.

75 See RFC 2026, S. Bradner, "The Internet Standards Process" (Revision 3) (October 1996).

76 RFC 1718, IETF Secretariat & G. Malkin, "The Tao of the IETF: A Guide for New Attendees of the Internet Engineering Task Force".

ongoing activities of many different working groups, which communicate primarily by e-mail. By the time a proposed standard is discussed at an IETF meeting, it has essentially been "voted on" by the members of the working group, who give their comments, propose different ways of doing things, and ultimately agree to elements of standards through extended e-mail discussions.

While there is a good deal of mythology about the IETF, no doubt owing to its rather remarkable achievement of cooperatively developing standards in a fiercely competitive industry, certain basic characteristics can be observed. The first is that around the time of its founding in 1986, the IETF was essentially a very small group of like-minded engineers who shared a common interest in developing one interoperable TCP/IP network. Its first meeting, in San Diego in January 1986, had fifteen attendees.77 A brief description of the early IETF is found in "The Tao of the IETF":

The Internet Engineering Task Force is a loosely self-organized group of people who make technical and other contributions to the engineering and evolution of the Internet and its technologies. It is the principal body engaged in the development of new Internet standard specifications.78

That general description continued to apply to its attendees until the mid-1990s, when meeting attendance exceeded two thousand people, the vast majority of whom were representatives of hardware manufacturers and software developers in the Internet industry. David Clark, the first Chair of the Internet Activities Board (IAB) (which later became the Internet Architecture Board), one of several obscure "parent" organizations of the IETF, has described the early IETF community as follows:

I once said, we had a war inside the - you know, this is a very personal community - we had a war and lynched our leaders because they made a mistake, and so we threw them all out. We had a real purge, and I was asked after that to give a calming talk, which was a little weird, and so I invented a saying in trying to describe our community, both its strengths and weaknesses, and I said, "We reject kings, presidents and voting. We believe in rough consensus and a running code." Somebody made that into a T-shirt and printed out a thousand copies. It is true that any time anybody stands up and offers

77 Ibid., at "Humble Beginnings."

78 Ibid.

to leave, this particular community kills them first and then asks why, because if you want to leave, you must have self-interest, and therefore we don't trust you.79

The catch-phrase "we reject kings, presidents and voting - we believe in rough consensus and a running code" has come to represent not only the "IETF way" of doing things, but also the "Internet way" more generally.

Yet as deliberative bodies go, one which rejects voting and relies on rough consensus does not sound particularly democratic. The IETF worked in the non-commercial and early commercial eras for at least two reasons. First, its membership was, on an individual level, remarkably homogeneous. The members may have come from universities, computer manufacturers, and software companies, but as electrical engineers or computer scientists they shared a very similar mental discipline and professional values. Engineers have a way of collaborating to achieve workable solutions, without reference to external reasons which might stand in the way, such as their employers being competitors.

Second, the leadership of the IETF has been remarkably stable throughout its existence, even through the commercial era. David Clark chaired the Internet Architecture Board from roughly 1980 or 1983 until 1989,80 (there are conflicting views of just when the IAB began). The current chair of the IAB, Brian Carpenter, has held that role since 1995, with only three other chairs between Clark and himself, one of whom was the ubiquitous Vint Cerf.81 These individuals enjoyed the trust of the Internet community, and derived their "authority" from that trust. Many continue to hold honorific titles as "Internet leaders" or "pioneers" to this day. However, their unique power is entirely dependent on recognition by the Internet technical community.

The IETF of today looks quite different from its early days, and yet still seems to accomplish the same goals using almost the same guiding principles. In his 1996 RFC

79 Zittrain/Clark Transcript, supra note 37.

80 There are conflicting views as to just when the IAB began. See RFC 1160, V. Cerf, "The Internet Activities Board" (May 1990).

81 B. Carpenter, "What Does the IAB Do, Anyway?".

titled "Architectural Principles of the Internet," IAB Chair Brian Carpenter poses the question: "Is there an Internet architecture?" and answers it this way:

Many members of the Internet community would argue that there is no architecture, but only a tradition, which was not written down for the first 25 years (or at least not by the IAB). However, in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network.82 (emphasis added)

The statement that "the goal is connectivity" is uninteresting in the context of the early Internet community, but given that the IETF is now composed almost entirely of representatives of commercial hardware and software companies, this goal takes on more significance. The IETF is now a full-fledged industry standards body, not an academic association sharing the results of obscure experiments. The attendees at IETF meetings are there as representatives of the many companies seeking to profit from the commercial Internet. The success of the modern IETF depends on its members continuing to value connectivity as its own reward, above all other considerations, including the profits of their respective employers. This makes the IETF a remarkable kind of standards body, indeed.

While the personal power of certain individuals still holds the IETF together, a much more important force is the commitment to open, common standards instead of closed, proprietary standards. There are, of course, other models on which to develop standards. In the computer industry, perhaps the most non-consensus-based approach is that by which Microsoft has developed the Windows operating system for personal computers. Microsoft has made its money selling its software, not giving it away. Microsoft has, of course, licensed Windows widely and offered developers access to the points where their software can be made to operate with Windows (known as "handles" or Application Programming Interfaces (APIs)).

The manner in which it has done so, however, has been the subject of considerable comment in the United States Justice Department prosecution of Microsoft for

82 RFC 1958, B. Carpenter, ed., "Architectural Principles of the Internet" (June 1996) at 2.

allegedly abusing its dominant position in the personal computer operating system market.83 There have been allegations that Microsoft has been somewhat less than forthcoming with APIs for developers of competing programs. Even more to the point for the IETF, internal Microsoft documents suggest that Microsoft recognizes the threat which collaborative standards-making organizations like the IETF pose to its business model.84 Clearly, the vitality of the IETF depends on its ability to act as a bulwark against proprietary standards and keep the technical "plumbing" of the Internet in the public domain.

Major projects are currently underway to make the Internet capable of treating different types of traffic differently, reflecting the different response time and bandwidth requirements of the various types of services which the Internet can carry, such as e-mail, voice, and video.85 Perhaps the most important initiative for the development of the Internet into a suitable substitute for existing means of telecommunication is the attempt to assure reliable quality of service (QoS). QoS is based on the segregation of types of traffic and enables usage-based pricing and billing for such services. These are significant departures from the generally flat-rate model which currently prevails. We will return to the QoS movement again, as it well illustrates the kinds of changes which are currently being planned to the Internet's basic architecture. The case of peering and interconnection further demonstrates the potential friction between traditional design philosophies and modern commercial imperatives.
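The traffic segregation described above can be made concrete with a short sketch. The IETF diffserv working group's approach (RFCs 2474, 2597, and 2598) is to mark each packet's IP header with a six-bit Differentiated Services codepoint (DSCP) indicating how routers should treat it. The codepoint values below are the standard ones; the mapping of service names to codepoints is a hypothetical illustration, since in practice each network chooses its own classification policy.

```python
# Standard DSCP values from the diffserv RFCs; the service-name
# mapping is an invented example, not a prescribed assignment.
DSCP_BEST_EFFORT = 0    # default per-hop behaviour (e.g. e-mail)
DSCP_AF41 = 34          # assured forwarding class 4 (e.g. video)
DSCP_EF = 46            # expedited forwarding (e.g. interactive voice)

SERVICE_TO_DSCP = {
    "email": DSCP_BEST_EFFORT,
    "video": DSCP_AF41,
    "voice": DSCP_EF,
}

def mark_packet(tos_byte: int, service: str) -> int:
    """Return a new IPv4 TOS byte with the DSCP field (the top six
    bits) set for the given service class; unknown services fall
    back to best effort."""
    dscp = SERVICE_TO_DSCP.get(service, DSCP_BEST_EFFORT)
    return (dscp << 2) | (tos_byte & 0x03)  # preserve the low two bits

print(mark_packet(0, "voice"))  # 184, i.e. 46 shifted into the DSCP field
```

Because the marking travels in every packet header, each network along the path can apply its own queueing and, potentially, its own usage-based billing to each traffic class, which is precisely what makes QoS a departure from flat-rate interconnection.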

83 An excellent archive of information and documents relating to the United States Department of Justice anti-trust investigation of Microsoft is maintained by Lawrence Lessig and Jonathan Zittrain of the Berkman Center for Internet and Society at Harvard Law School.

84 See B.S. Biggs, "Microsoft's Fears" TechWeb News (19 November 1998), <http://www.techweb.com/internet/cotu>. We will consider these internal Microsoft documents in Section VI.D.3.

85 See, for example, IETF Differentiated Services (diffserv) Working Group Charter.

C. PEERING AND INTERCONNECTION

1. Internet "Backbones"

Internet transmissions beyond the user's connection to a local area network or to the user's ISP are usually depicted as taking place in a "cloud" or inexplicable "ether." While the actual transmission path is certainly mysterious, owing to the unpredictable and flexible nature of packet switching, it is possible to describe the basic facilities which comprise the cloud. Just as the Internet name and number spaces are organized hierarchically, the various networks which form the Internet also communicate with each other in hierarchical fashion. It is useful, although not technically accurate, to picture pipes connecting users to networks, bigger pipes connecting small networks to medium-sized networks, even bigger ones connecting medium-sized networks to larger networks, and then those large networks connecting to each other. However, as opposed to a tree structure, these connections more closely resemble a mesh of interconnecting networks of all different sizes, provided by many different entities, and connecting at various places to each other. For this reason the "backbone" metaphor can be misleading. Many different companies and institutions provide the links between many different networks. Access at the lower levels of the hierarchy is predominantly provided over telephone company lines of varying capacities, while access at the higher levels is provided by specialty high-capacity carriers, some of which are affiliated with telephone companies, such as MCI WorldCom's UUNet and Sprintlink, while others are independent, such as Verio and PSINet.

The broadening of the NSFNET beyond its initial community of American universities led NSF in 1993 to begin to slowly privatize it by encouraging the development of regional networks in the United States (to which national and regional networks in other countries interconnected) and creating privately-operated Network Access Points (NAPs) at which commercial networks would interconnect with NSFNET and the regional networks. Commercial Internet providers had already created the Commercial Internet Exchange (CIX) in 1991 to exchange traffic amongst themselves. By 1995 there was no longer a United States government-funded backbone, as this task had been taken up by commercial carriers providing service on commercial terms.

Beyond the actual connection of physical elements of the various networks comprising the Internet, interconnection of networks is about the exchange of traffic. Several historical features of packet-switched networks make the exchange of traffic over them very different from that on telephone networks. Transmitting packets does not involve opening a specific circuit which can be metered in minutes for the duration of a call, as with a telephone call. Packets are merely sent out to the network (and on to other networks) in short bursts and over connections which are always "on." There is no identifiable "call" and no metering of the packets that are transmitted. It is theoretically possible to meter and analyze packets to see where they are headed and how many of them constitute one discrete transmission, but it has always been considered that that process would be far more costly and time-consuming than simply sending the packets along. For this reason (among others, no doubt), Internet use has not typically been subject to usage-based fees. Similarly, early on, at least, different networks simply accepted each others' packets and terminated them or transited them, without much attention to the resources used to perform the service.
It was worthwhile because everybody did it and the result was that a packet from one network could get to its final destination without incurring any additional charges beyond that of initial network access.

Where networks are roughly equal in size in terms of physical connections with each other and volume of packets sent, terminated, and transited, then they are referred to as "peers." As the commercial Internet developed there began to be a few very large networks and many smaller networks, and the traffic patterns among them no longer fit the pattern of peer-to-peer relationships. This had never been a concern in the ARPANET and NSFNET environments because government funding precluded the need to consider balance of traffic flows. The largest networks were quite willing to exchange roughly equal volumes of traffic with each other, instead of metering and billing each other, and just call it "even." Smaller networks, however, have been forced to pay for the privilege of larger networks terminating and transiting their outbound traffic.
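The "rough equality" test described above can be sketched in a few lines. The 2:1 traffic-ratio threshold used here is a hypothetical illustration; actual peering policies are private contractual matters, and each carrier sets its own criteria (which may also weigh geographic reach and port capacity, not just volume).

```python
# Illustrative sketch only: classifies an interconnection as
# settlement-free "peering" or paid "transit" based on the balance
# of traffic exchanged. The 2:1 threshold is an assumption.
def relationship(sent_gb: float, received_gb: float,
                 max_ratio: float = 2.0) -> str:
    """Networks exchanging roughly equal volumes are peers;
    a badly unbalanced exchange pushes the smaller network
    into a paid transit arrangement."""
    if min(sent_gb, received_gb) == 0:
        return "transit"
    ratio = max(sent_gb, received_gb) / min(sent_gb, received_gb)
    return "peer" if ratio <= max_ratio else "transit"

print(relationship(100, 90))  # peer: volumes roughly equal
print(relationship(100, 10))  # transit: the smaller network pays
```

The point of the sketch is that "peer" status is an economic judgment dressed up as an engineering measurement, which is why, as the text notes, the large networks could simply "call it even" while smaller networks could not.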

There appear to be very competitive markets for "backbone" service (and with it peering and interconnection services) in Canada and the United States at present. However, the power which the larger carriers have over the ISP industry as a whole, by virtue of their control over the highest-capacity transmission lines, is beginning to attract attention to the peering and interconnection segment of the Internet industry. These issues become more and more of a concern with each merger, horizontal and vertical, in the global telecommunications industry. However, in September 1998, the United States Internet Service Providers' Consortium (ISP/C) told the United States Federal Communications Commission (FCC) that its members were satisfied peering could be dealt with as a private matter among ISPs large and small, and required no regulatory intervention at that time:

ISPs have traditionally viewed peering as a necessary function of network operations, not as a profit-making activity. As the Internet evolves, however, peering agreements and relationships are changing. Public exchanges, which allow many ISPs to share connections and peer with each other, have become overloaded. Private exchanges and peering points are becoming increasingly common. The fundamental aspect of each peering agreement, however, remains the contract between the ISPs to share data. ISP/C sees no reason for the FCC to intervene in these private business relationships for now.86

Recent research in the areas of settlements and peering suggests that the early model of "bill and keep," flat-rate pricing, and undifferentiated "best effort" service is unlikely to survive the Internet's commercialization. Sanford Marble of Bell Communications Research (Bellcore) (now Telcordia) has noted that the implementation of QoS protocols will require Internet-wide coordination on a much larger and more formal scale than has ever been attempted. Since different networks will become increasingly dependent on each other to provide end-to-end service, in the absence of equitable financial settlements, conflicts of interest among competing network providers could significantly harm interconnectivity.87

Rob Frieden has suggested that as the Internet is pushed ever closer to replacing the existing telecommunications paradigm, these conflicts may raise many of the

86 See Comments of the Internet Service Providers' Consortium concerning the deployment of advanced telecommunications capability, submitted to U.S. Federal Communications Commission Common Carrier Bureau Docket No. 98-146 (Section 706 Notice of Inquiry proceeding) (14 September 1998).

87 S. Marble, "The Impacts of Settlement Issues on Business Evolution in the Internet" (26th Telecommunications Policy Research Conference, Washington, D.C., 5 October 1998), <http://www.si.umich.edu/~prie/tprc/abstracts98/marble.PDF>.

same public policy issues which have been the subject of traditional telecommunications regulation in the United States for many years, particularly if Internet access is included in universal service goals.88 Frieden notes that:

In advance of legislative and regulatory responses to the Internet's maturation, ISPs already have revised their interconnection and settlement agreements to reflect a hierarchical infrastructure more akin to the telecommunications industrial structure than a flat and democratic "network of networks." Many ISPs now offer the functional equivalent of telecommunications services and they have implemented a financial settlement system that accounts for the use of each other's facilities for "transiting" traffic.89

The implications of hierarchical peering arrangements and settlement fees will be considered in greater detail in Section VI.C.4 below, and will depend, of course, on the way the commercial Internet industry develops in the future.

The bodies and individuals discussed above (and a few even more obscure ones) have served as the Internet's primary governance structures to date. These coordinating bodies and arrangements, ranging from government contractors, to professional organizations, to private agreements, to interconnected networks, are the forces which created the Internet as we know it today. They comprise a unique compound of governance which is certainly not government, but neither is it self-governance. The Internet is a fundamentally cooperative enterprise, as this excerpt from a recent informational RFC titled "Answers to Commonly Asked 'New Internet User' Questions" recounts:

88 R. Frieden, "Without Public Peer: The Potential Regulatory and Universal Service Consequences of Internet Balkanization" (1998) 3 Virginia Journal of Law & Technology 8.

89 Supra note 88 at para. 68.

Who Runs the Internet? No one. The Internet is a cooperative effort among Internet Service Providers (ISPs), software companies, volunteer organizations, and a few facilities that tie the whole thing together.90

As governance should be conceived of as the way a community manages relationships among its members, the way that this "cooperative effort" has been managed constitutes the governance of the Internet. No one entity dominates it, although several can be identified which contribute more than others. Rather than being ungoverned, the many constituent parts of the Internet's infrastructure are all governed in different ways by different combinations of influences.

On one level, the Internet "happens" merely because the administrators of unrelated networks of all sizes happen to configure their computers the same way, use the same software, and refer to the same lists of network elements. However, as the people, institutions, rules, and principles which we have examined demonstrate, there is much more to the story than this seeming serendipity. A very high level of coordination, plus a certain degree of central planning, is necessary to make the Internet work. Coordination, to be sure, often takes the form of completely independent activity by autonomous network operators, but as the value of being connected to the Internet has grown, the importance of following the dominant patterns of activity has grown commensurately. Focusing on the independent actions of independent actors gives the impression that the Internet only exists because those independent actors decide to configure their networks a certain way. However, a more complete picture is gained when one appreciates the significance of the common resources, policies, and assumptions which the operators of most of the Internet's networks share. The way that these resources are managed, the policies which network operators implement in a coordinated fashion, and the assumptions which those network operators share all contribute to the governance of the Internet's infrastructure.

The technical decisions which defined the Internet, such as those to expand the IP number space, to overlay a domain name system, to move from manual updating of routing tables to an automatic system, to award a monopoly over gTLD name registration to

90 RFC 2664, R. Plzak, A. Wells & E. Krol, "FYI on Questions and Answers: Answers to Commonly Asked 'New Internet User' Questions" (August 1999).

a private company, to keep Web standards non-proprietary, and to exchange traffic in a usage-insensitive manner, among many others, can now be appreciated as policy decisions, not merely technical decisions. In some cases, the decision-makers may not have considered them policy decisions at the time, while in others the decision was explicitly one of policy, one of the most significant, for example, being the imposition of TCP/IP as the governing protocol of the ARPANET in 1983. These decisions were made over a period of more than thirty years in the context of a very different network environment to that of today. While the early days of the Internet were dominated by academic and public institutions, its current era is almost entirely dominated by commercial entities.

The legacies of the twin goals of universal interoperability and universal interconnection have been underestimated as factors shaping the characteristics of today's Internet. While they were the overriding imperatives behind the internetworking experiments of the ARPANET and NSFNET eras, they continue to guide the Internet's development into the commercial era. Not only do we use the same name and number spaces as the early networks, but in many cases the same institutions and, importantly, the same people still oversee many aspects of the network's technical infrastructure. The remarkable continuity of individual personalities and roles in the technical community demonstrates that while the Internet has moved into a new era, its technical infrastructure has not changed to nearly the same degree. This community of individuals considers certain values so fundamental that they have been elevated by some authors to the level of culture. In their 1996 chapter, "The Self-Governing Internet," MIT researchers Sharon Eisner Gillett and Mitch Kapor (the latter being a founder of Lotus Development Corporation and co-founder of the Electronic Frontier Foundation) had this to say about interoperability:

Where technology ends, cultural values begin as coordination mechanisms. Two deeply held cultural values make the collective Internet work. The first is that interoperability is sacrosanct - it is the defining characteristic of the Internet. The second is that to achieve interoperability, protocol implementations must be conservative in what they send and liberal in what they receive.91
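The second value Gillett and Kapor cite is the "robustness principle" stated in the host requirements RFC (RFC 1122): be liberal in what you accept, and conservative in what you send. A minimal sketch of what this means in practice follows; the header format here is invented for illustration, not taken from any particular protocol.

```python
# A toy illustration of the robustness principle. The "name: value"
# header syntax is a hypothetical example format.

def parse_header(line: str) -> tuple:
    """Liberal receiver: tolerate odd capitalization and stray
    whitespace from a sloppy peer, and normalize on the way in."""
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

def emit_header(name: str, value: str) -> str:
    """Conservative sender: always emit a single canonical form,
    so every receiver, however strict, can understand it."""
    return f"{name.strip().lower()}: {value.strip()}\r\n"

# A peer's sloppy output is still understood...
print(parse_header("  Content-TYPE :  text/html "))
# ...while our own output is strictly canonical.
print(emit_header("Content-TYPE", "text/html"))
```

The asymmetry is the point: if every implementation is strict about its own output but forgiving of others', independently written implementations interoperate even when each deviates slightly from the specification, which is how thousands of unrelated networks manage to form one Internet.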

91 S.E. Gillett & M. Kapor, "The Self-Governing Internet: Coordination By Design," in B. Kahin & J.H. Keller, eds., Coordinating the Internet (Cambridge, MA: MIT Press, 1997) 3 at 16.

This community also, at a personal level, has traditionally displayed a predominantly anti-commercial attitude. David Clark tells us that among the early IETF community, desire for personal gain could get a leader expelled. While Jon Postel may have represented a more extreme view, he was virulently anti-commercial and was concerned only with the best interests of the network itself. Beyond the explicit restriction against commercial traffic on the NSFNET under its Acceptable Use Policy (AUP),92 the IANA and IETF both tried to keep commercial concerns out of their activities. This has created an interesting paradox in the IETF, because its leaders are now almost all employees of major players in the Internet industry. David Clark explains:

...there are guidelines, there are documents that are written, that are guidelines for these meetings, and they say, "You leave your corporate allegiance at the door and always wear your badge, so somebody can come up to you and look at it and say, 'Oh.'" I mean, we don't worry about McDonald's, we worry about [Cisco], which is a corporate player that dominates this entire - and in fact, I would point out that the person who runs the [IETF], the chairman of the [IETF], is an employee of [Cisco], and he was elected by the group. They weren't scared. ...he is held - you know, like Caesar's wife - he is held under such scrutiny that he has to be above reproach, and he is. Nobody's ever complained that [Cisco] used their market share to twist that meeting.93

The broad-based trust enjoyed by Fred Baker, a Cisco employee who serves as Chair of the IETF, like that of Brian Carpenter of IBM, Chair of the IAB, and the late Jon Postel of IANA, might be said to be the "glue" that holds these governance bodies together. Personal relationships remain very strong among the leaders of the technical community, while the reputations of those leaders lead the thousands of independent network operators who have never met them (nor each other) to trust their judgment and implement their recommendations.

It is unclear whether ICANN will ever succeed in gaining the trust of the Internet technical community, without the strong personal character of Jon Postel to fall back on (Postel likely would have headed ICANN's technical crew had he not died before his

92 See note 206 infra.

93 Zittrain/Clark Transcript, supra note 37.

plan for "new-IANA" came to fruition). This community is still small enough that it can rely on the personal reputations of certain individuals to back up their claims to authority. David Clark further describes Postel:

So Jon is one of the few people with stature, because he's been around so long and his credentials are so impeccable. He's never made a judgment that looked as if it smacked of self-interest or idiocy. And so, yeah, the issue of literally replacing him is a fascinating one.94

The number of individuals with this kind of moral authority will, it can be expected, continue to decline as the commercialization of the Internet continues. What will replace this authority? Whose vision will be substituted for that of the pioneers? Without widely-trusted leaders, can the technical community organize itself to further the interests of the network? The balance of this thesis is concerned with the way the Internet's governance structures will need to evolve in response to the evolution of the Internet, which will now be driven by fundamentally different imperatives from those which guided its creation and early evolution.

The highly-respected authors of the collaborative "Brief History of the Internet" identify the forces at play in the commercialization of the Internet in their closing paragraph:

The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders - stakeholders now with an economic as well as an intellectual investment in the network. [...] If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.95 (emphasis added)

The choice of the word "collectively" is telling because it demonstrates the attitude of the early stakeholders towards their network: it was a collective effort, for collective benefit.

94 Ibid.

95 Leiner et al., supra note 69.

The most pressing question for the future of the Internet itself is, indeed, "how the process of change and evolution itself will be managed."

Almost all of the "e-businesses," "e-products," and "e-services" whose advertisements fill the pages of newspapers, and now even television and radio, refer to "The Internet." There is a prevailing sense that "The Internet" is something which exists, which can be used to provide and procure services, and which can even form the basis for entire businesses and communities. Businesses (in any field) are urged to "get on the Web" or fall behind. IBM claims to be able to turn almost any business into an "e-business."96 New banks have been founded with no branches, only Web sites.97 Sun Microsystems asks, "What can we dot com for you today?"98 The Internet is consistently presented as a stable, exciting, and new environment, ready to be built on.

These marketing schemes and slogans all assume that there is and will continue to be an Internet. Like much of the legal writing about the Internet, these messages all assume the Internet's underlying infrastructure. They refer to activities on the Internet, not to the Internet itself. As we have seen, the technical and physical infrastructures of the Internet are anything but stable or immutable. In fact, they are extremely malleable and vulnerable to changes in the informal, decentralized power structure which sustains them. Now, for many commentators, this is a "feature, not a bug," to put it in Web-speak. This flexibility and lack of central control is for many a defining feature of the Internet, to be celebrated, not decried. However, we should not conclude from the apparent lack of comprehensive centralized control that all that is left is the private activities of those who connect to the Internet. We need to recognize the Internet as a global, public network, before

96 See <http://www.ibm.com/e-business/whatis.phtml>.

97 See, for example, Citizens Bank of Canada, <http://www.citizensbank.ca/> and WingspanBank.com. Both are in fact divisions of traditional financial institutions.

98 Sun Microsystems ran a marketing campaign for electronic commerce services in 1998 employing the slogan: "We're the dot in dot com: What can we dot com for you today?" See <http://video.sun.com/vod/dotcom>.

we can consider what network policy principles should inform its governance. To that end, we must parse the very value-laden phrase "global public network."

While the Internet is often portrayed as a global phenomenon, the reality is much less global. "Cyberspace" is inherently global, to the extent that it is nowhere in particular, but the Internet's reach is practically limited by the reach of the telecommunications networks which can carry it. This means that while the Internet is theoretically accessible from anywhere in the world, the quality of access to it varies widely, with most use being concentrated in those countries with the most advanced telecommunications networks. Further, the language of most of the Internet's content and technical infrastructure is English, while the language of most of the world's population is not. Bella Mody notes that more than 97 percent of all Internet host computers are concentrated in the 29 Organization for Economic Cooperation and Development (OECD) member nations, which together contain less than one-quarter of the world's 5.6 billion people.99 At the same time, the three-quarters of the world's population who live in 129 developing countries contain less than 3 percent of all Internet host computers.100

When we refer to the Internet as a global network, this distinction between the global nature of "cyberspace" and the privileged nature of high-quality telecommunications service must be borne in mind. There is a great deal of research being done on access to the Internet around the world, and on the importance of accommodating, in its governance, the interests of the large proportion of the world's population which has never used it.101

99 B. Mody, "The Internet in the Other Three-Quarters of the World," in Institute for Information Studies, The Promise of Global Networks: Annual Review of the Institute for Information Studies (Queenstown, MD: The Aspen Institute, 1999) 69.

100 Ibid.

101 See, for example, B. Petrazzini & M. Kibati, "The Internet in Developing Countries" (1999) Vol. 42, No. 6 Communications of the ACM 31.

While the United States continues to dominate the Internet in almost every way (for instance, nearly two-thirds of host computers are located within the United States), this dominance continues to decrease as more and more networks and users from around the world join in. It is important to note that much of the popular and academic writing about the Internet takes the Silicon Valley area of California as the standard for Internet accessibility, while many rural areas of the United States enjoy nowhere near that level of service options, let alone most other regions of the world. While issues of physical access to the Internet as a social good are beyond the scope of this thesis, it is important to keep in mind that the goal of making the Internet a truly global communications platform requires that people of the entire world have meaningful access to it. This thesis refers to the Internet as the "global public network" out of a belief that the Internet can, indeed, serve as a truly global network, but with the recognition that much more earth-bound realities of economic disparity and infrastructure development presently act as barriers to that goal.

102 See L. Landweber & Internet Society, "International Connectivity Map" (Version 16) (15 June 1997), <ftp://ftp.cs.wisc.edu/connectivity_table/version_16.bmp>.

103 See Director's Message, 44th Meeting of the Internet Engineering Task Force, 15-19 March 1999, Minneapolis, Minnesota, <http://www.ietf.org/proceedings/99mar/401.html>.

104 See Internet Society, Babel Project, <http://www.isoc.org:8080/index.en.html>. A major aim of this project is to develop ways to address the technical infrastructure in characters other than the American Standard Code for Information Interchange (ASCII) characters used by the English-language computing community.

Perhaps the next most controversial question is whether the Internet is a network, that is, whether it is something which can be identified and labeled as a unified, distinct phenomenon. At one level the Internet is just computers, wires, and software: independent resources which happen to interoperate, not at the direction or by the design of anyone in particular. It is quite true that, when one focuses on the computers and people at the "edges" of the Internet, and even the wires in the "middle," there does not appear to be anything that one can identify as being "The Internet." There are no central offices which switch Internet traffic like voice traffic. There is nowhere one can go and see "The Internet." One could go to ISI at the University of Southern California or NSI's facilities in Herndon, Virginia, and look at servers and data files which are the highest levels of the Internet identifier and routing systems, but they would not look much like the operations centre of a global computer network.

There is, indeed, no single central point to the Internet. No one person or entity can be said to control it. As Brian Carpenter cleverly noted in RFC 1958, "nobody can turn it off."105 The Internet truly is, on one level, only an abstraction, a gestalt, which can only be perceived by stepping back, as it were, and looking at the whole. This whole has been referred to as a "meta-network" and as a "network of networks." The key word, of course, is network. A simple definition of a network is provided in a set of basic networking training materials produced by NSI:

A network is a group of two or more computers, connected together through a physical infrastructure, that are able to communicate and exchange information because they agree to use software that observes the same set of rules, or protocol.106
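The NSI definition turns on agreement about rules rather than on any particular wiring. As a minimal sketch of that idea (hypothetical code, not drawn from the thesis or from NSI's materials), the two Python endpoints below interoperate over a local socket only because both observe the same trivial line-based protocol: one UTF-8 line in, its upper-cased echo back.

```python
import socket
import threading

# The shared "protocol": a UTF-8 line ending in "\n"; the peer
# replies with the same text upper-cased, also newline-terminated.
def serve(listener):
    conn, _ = listener.accept()
    with conn:
        line = conn.makefile().readline().strip()
        conn.sendall((line.upper() + "\n").encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))          # bind to any free local port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello\n")
reply = client.makefile().readline().strip()
print(reply)  # HELLO
client.close()
```

Change either side's framing rule (say, the line terminator) and the exchange fails: it is the shared protocol, not the shared hardware, that constitutes the network in NSI's sense.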

The Internet is certainly a group of two or more computers. In fact, as of July 1999, there were estimated to be over 56 million host computers connected to the Internet (which does

105 RFC 1958, supra note 82 at 4.

106 Network Solutions, Inc., "15 Minute Series: What is a Network?" (1996), available on the Calgary Public Library Web site at <http://public-library.calgary.ab.ca/train/overview/network.htm>.

not include personal computers which access the Internet through host computers).107 In a way those computers are connected together through a physical infrastructure, and in a way they are not. The global "telecommunications system" (which itself is something of an abstraction), on which the Internet "rides," links those computers together, but only at the physical level. Since the Internet exists at the protocol level (and up), it is considered distinct from the physical transport layer below it. Common adherence to protocols and network administrative practices is what links computers together over the Internet, more than a physical relationship. Computers in a corporate or institutional LAN share a physical infrastructure, while the "meta-network" (which is often conceived of as being "above" them all) shares only a technical infrastructure.

Network operators may choose to employ protocols and lists of resources other than those used by most other Internet participants if they perceive value in doing so, not because they have to, nor because any set of protocols and resources is more "official" than any other. This voluntary nature of Internet participation is often held up as evidence that no one controls the Internet, certainly not the way a telecommunications network is controlled. Yet for all the unofficial status of the protocols and resources which make the Internet work, they have become powerful de facto standards,108 the price of deviation from which is effectively exile from the Internet - not a particularly appealing proposition, personally or commercially.

An alternative view is presented by Anthony Rutkowski, an Internet consultant who has held various senior positions in the public and private sectors and has written widely on Internet legal and business issues. In a June 1999 presentation to the United States Federal Communications Bar Association in Washington, D.C., Rutkowski declared emphatically that the Internet is not a network, but is rather "a means of discovering and sharing networked-based [sic] resources."109 Aside from the somewhat contradictory use

107 Internet Software Consortium, "Internet Domain Survey, July 1999," <http://www.isc.org/dsview.cgi?domainsurvey>. Comparable figures from previous years give an idea of the rapid growth of the Internet: July 1999: 56,218,000; July 1998: 36,739,000; July 1997: 26,053,000; July 1996: 16,729,000 (on an adjusted host count basis).

108 See, for instance, the BIND and Realtime Blackhole List (RBL) products distributed (for free) by the ISC, at notes 19 and 144 respectively, infra.

109 A.M. Rutkowski, "Internet Law and Policy: Current state of the art" (Federal Communications Bar Association, Washington, D.C., 9 June 1999) (hereinafter "Current State of the Art").

of the word "network" in this definition, Rutkowski's statement does pose an interesting challenge. Other commentators have sought to make the same point by saying that "there's no 'there' there." The challenge, of course, is that if the Internet does not exist, or at least is not a network, how could it need network policy?

To say that the Internet is a means of discovering and sharing network-based resources is like saying that airplanes are a means of discovering and sharing airports. While certainly true, it ignores much more that is going on, much more that is essential for airplanes to get to their destinations safely and efficiently. This is not to say that the Internet is like an aviation network nor that it should be governed like the airline industry. Rather, it is to emphasize that focusing on the tangible elements of networks at their "edges" obscures the important coordinating institutions and rules "in between" them, which make them work together efficiently. In an earlier article, Rutkowski offered this definition of a network:

A network is an interoperating array of information objects whose prime function is to allow the sharing of information and information processes among multiple objects.110

This more balanced definition highlights the multifaceted nature of a network, as "an interoperating array of information objects." He further explained this simple definition by suggesting the diversity in possible types of information objects:

An information object is simply a discrete, definable information function that can be used or acted upon. Basic service elements can be regarded as information objects. A computer file can be an information object. So if you create a network, you are simply establishing a known structured relationship among information objects - an architecture - through which the objects can interoperate.111

Rutkowski offered central office telephone switches, anonymous File Transfer Protocol (FTP) servers, and information files as further examples of information objects. Based on these very useful definitions of network and information object, the Internet appears to be very much a network. It most certainly encompasses many different kinds of interoperating

110 A.M. Rutkowski, "A Taxonomy of Networks: Is It Public or Not?", in Noam & NíShúilleabháin, supra note 74: 1 (hereinafter "Taxonomy") at 4.

information objects, whose prime function is to allow the sharing of information and information processes among multiple objects. Far from not being a network, the Internet might be the ultimate network, and this is certainly the way it is characterized by most other observers. Rutkowski himself has referred to "computer networks like the Internet" on other occasions.112

The various aspects of the Internet's technical infrastructure which we have examined are all different kinds of information objects which contribute to the sharing of information and information processes by means of the Internet. The DNS, routing tables, and TCP/IP protocol suite are all such information objects. As are the computers on which they run, and the routers which employ them. Along with Yahoo!'s database, everyone's e-mail boxes, and innumerable other information objects of all kinds, these unrelated elements together form a network. That network is referred to by most as "The Internet."

Rutkowski's 1999 assertion that the Internet is not a network, but rather a means of discovering network-based resources, runs counter to his 1996 definition of a network. His more recent statement focuses on the information objects available on the Internet, but obscures the information objects which reliably and universally facilitate their discovery. The Internet's identifier and routing systems, uniform standards and protocols, and peering and interconnection arrangements described above are examples of such resources. These infrastructure-related information objects facilitate access to the content-related information objects with which Internet users are more familiar. Though they each have different roles, they are all information objects which together comprise a network. An important difference, though, is that while there are many Web sites and search engines and the like on the Internet, there is only one technical infrastructure. The elements described above are shared by all who wish to participate in the Internet. The technical infrastructure is therefore perhaps the most important set of information objects on or in the Internet, because it facilitates all the others. The next question, then, is whether the Internet can be described as a "public" network.

112 A.M. Rutkowski, "Factors Shaping Internet Self-Governance" in Kahin & Keller, supra note 91: 92 (hereinafter "Factors") at 99.

IS IT PUBLIC?

Many of the characterizations of the Internet which suggest that it is not a network also suggest that it is not public. The vast majority of computers which hold the various information objects on (e.g., Web sites) and even in the Internet (e.g., routers) are certainly privately-owned. The high-capacity lines which corporate networks lease from telephone companies are not for the use of the general public; they are for the use of the company paying for them. Anthony Rutkowski also noted in his June 1999 presentation that the network-based resources related to the Internet are "overwhelmingly privately owned, operated, and controlled." He continued: "Internet winners are those 'portals' attracting the most customers."113 For Rutkowski, these facts make the Internet itself a private matter, not a public network, even though portals are a matter of content, not infrastructure or medium. Again, Rutkowski's own earlier writing is both useful and seemingly contradictory.

1. Public and Private Networks

In his 1996 chapter, "A Taxonomy of Networks: Is It Public or Not?", Rutkowski posits that the modern networking world is so new and complex ("virtual reality riding on top of a web of glass") that old ways of classifying networks, particularly as being public or private, are no longer useful. He offers his definitions of network and information objects to help analyze the different elements of these complex, interconnected networks. To structure such an analysis, he provides a set of five properties relevant to determining whether a given network is a public network.114 These indicia are described as follows, in part (while private "can simply be regarded as whatever is left, i.e., non-public"):

Who provides it? In other words, who makes the information available? If it is a public body that makes it available, or a non-public body operating under an obligation established by a public body, then the object can be said to be at least partially public. Under old legal regimes this property was very important.

Who can access it? In other words, who can effect communication with the object. If this can be done anonymously, i.e., by anyone, then the object can be said to be at least partially public.

113 Rutkowski, "Current State of the Art," supra note 109.

114 Rutkowski, "Taxonomy," supra note 110 at 5-6.

Who owns it? In other words, who has title. This can involve ownership of real physical property, or of intellectual property. If a public body owns the object, or if it is in the public domain, the object is at least partially public.

Who controls it? This is one step beyond access. It involves giving the object instruction if it is involved in an information process; or moving, or altering it if it is pure information. Once the object is accessed, what can be done with it?

Who pays for it? Information objects and their array in networks have associated economic costs. If those costs are borne by or otherwise underwritten by public bodies, the object may be described at least partially as public.

Rutkowski then suggests a two-step analytical process for determining whether information and communication networks are "public":

First the network architecture must be examined and be parsed into an array of information objects. Each one of those objects must then be examined in light of five properties: who provides it, who can access it, who owns it, who controls it, and who pays for it. On the basis of the combined aggregate of all the results, it is possible to say that the object has a certain "public index figure."115

While this analytical framework is useful (and will be used to analyze three different kinds of networks below), an important caveat should be made. By focusing on individual information objects, one risks "missing the forest for the trees." Rutkowski suggests that it is essentially impossible to characterize most modern networks as public or private in any kind of consistent or definitive way. This is quite true, but one should not lose sight of the network in the attempt to characterize each of its constituent elements. The phrase "interoperating array of information objects" implies the existence of a means of interoperating - a common technical infrastructure, an "architecture for internetworking." Thus, in looking at particular networks we will pay special attention to the technical infrastructures which facilitate the interoperation of the various information objects arrayed on them. We may find that the technical infrastructure objects, as opposed to content or transaction objects, are considered differently, and consequently managed differently.
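Rutkowski offers no formula for combining the five answers into a "public index figure," so any computation is conjecture; the sketch below (hypothetical Python, with invented 0-to-1 scores) merely mechanizes the two steps he describes: parse the architecture into information objects, then score each object on the five properties and aggregate.

```python
from dataclasses import dataclass

# Rutkowski's five "public" properties, each scored here on a
# hypothetical scale from 0.0 (wholly private) to 1.0 (wholly public).
PROPERTIES = ("provides", "access", "owns", "controls", "pays")

@dataclass
class InformationObject:
    name: str
    scores: dict  # property name -> 0.0..1.0 (illustrative judgments only)

def public_index(obj: InformationObject) -> float:
    """Average the five property scores into a 0-100 public index figure."""
    return 100 * sum(obj.scores[p] for p in PROPERTIES) / len(PROPERTIES)

def network_index(objects: list) -> float:
    """Aggregate the per-object figures across the parsed architecture."""
    return sum(public_index(o) for o in objects) / len(objects)

# Step 1: parse the network into information objects
# (a toy, SWIFT-like private network with invented scores).
network = [
    InformationObject("messaging system",
                      dict(provides=0.0, access=0.0, owns=0.0, controls=0.0, pays=0.1)),
    InformationObject("leased data lines",
                      dict(provides=0.2, access=0.0, owns=0.0, controls=0.2, pays=0.0)),
]

# Step 2: score each object and combine into a network-wide figure.
print(round(network_index(network), 1))  # → 5.0
```

The averaging rule and the scores are placeholders; Rutkowski's only published calibration is the single remark that a central office telephone switch "might rate a 70," so the aggregation scheme is an assumption of this sketch, not his.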

We will now move to an examination of the network architectures of three networks: SWIFT, the PSTN, and the Internet. We will parse each network into at least some of its many constituent information objects, and ask Rutkowski's five questions of those objects, with an eye to characterizing the networks as public or private.

2. The Society for Worldwide Interbank Financial Telecommunication

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is a bank-owned cooperative that supplies secure payments messaging services to financial institutions around the world.116 These excerpts from SWIFT's Web site provide further description:

S.W.I.F.T. supplies secure messaging, interface software and 24-hour global support to about 6,500 financial institutions in 184 countries. In 1998, S.W.I.F.T.'s global network carried over 900 million messages. The average daily value of payments messages on the S.W.I.F.T. network is estimated to be above USD 2 trillion. S.W.I.F.T. helps its customers reduce costs, improve automation and manage risk. Today, in addition to its 2,848 member banks live on the network, S.W.I.F.T. users include sub-members and participants such as brokers, investment managers, securities deposit and clearing organizations, and stock exchanges.117

(a) Who provides it?

As a cooperative, SWIFT is a service which the world's major financial institutions jointly provide for their collective benefit. Aside from the membership of those of its members which are state banks (such as the United States Federal Reserve Bank and the Bank of Canada), SWIFT is not a public body and is not required by any public body to provide any particular service to any particular user. The most significant information object in the SWIFT network is the coded messaging system which transmits information about large-value funds transfers among members. There are, of course, other information objects involved. Telecommunications services are likely provided in the form of leased data lines.

116 See <http://www.swift.com/>.

"' . The information in this paragraph was obtained from "S. W.I.F.T. at a glance."

(b) Who can access it?

Only those financial institutions granted membership in SWIFT may access the information objects identified above. Only members can effect communications with these objects, and very strong security measures are employed to ensure this restricted access. Transmissions are encrypted. By its very nature, SWIFT is anything but accessible by the public.

(c) Who owns it?

The SWIFT messaging system, and all of its associated technical infrastructure and intellectual property, is owned by SWIFT, which is owned and controlled by its member banks (note: not including its broader class of member financial institutions). The telecommunications service elements of the network are, of course, provided on networks owned by many different service providers (and many other service providers may be involved in any given international message). The end-user terminal elements are likely owned by the respective members.118 None of the core information objects comprising SWIFT are government-owned, although they may be subject to the oversight of national laws with respect to privacy.119

(d) Who controls it?

SWIFT is controlled by its member banks through a board of 25 directors. The Board Audit Committee and independent internal and external auditors ensure oversight and governance of security controls. Individual users are permitted to send messages by means of it, but are not likely permitted to alter its architecture or functions, which are most likely within the control of SWIFT itself. The users can address it, then, but not modify it. The

118 Unless the terminals are proprietary to SWIFT and therefore can only be 'rented' by the membership. Like many of the details of the SWIFT network, this information is simply not publicly available.

119 The "SWIFT at a glance" page says: "Central banks and other regdatory authorities oversee our ability to provide a service that meets the highest standards of reliability and security," supra note 117. other, non-core, information objects described above are under the control of their respective owners, who allow them to be used as part of SWIFT, but, of course, likely retain control over their modification.

(e) Who pays for it?

SWIFT is paid for by its members. Many of those members are likely state central banks, but they are members qua central banks, not governments. SWIFT has not likely received any direct government funding. Individual members are likely responsible for arranging their own telecommunications access services, and buying and maintaining their own user terminals. Likewise, they pay for the data processing required on their ends of the clearing, netting and settlement transactions which the network facilitates. These computer systems might be considered further information objects on the SWIFT network.

Since almost no aspect of the SWIFT network can be described as "public" using Rutkowski's public indicia, whatever particular public index figure should be attached to it would probably be extremely low.

3. The Public Switched Telephone Network

When thinking about the Public Switched Telephone Network (PSTN), it is important to distinguish between the "old world" and the "new world" of telecommunications in Canada and the United States. Before the introduction of competition in local and long distance telephony, and particularly before monopoly telephone companies were required to allow the attachment of "foreign" (i.e., non-telephone company) equipment and the interconnection of other networks to their own, PSTNs were monolithic, unified enterprises. The monopoly phone companies owned all of the physical elements of their networks, including the phone in users' homes, and controlled all of their technical elements, right down to the symbols found on the dials of those phones. If not for public regulation, an analysis of the information objects comprising the PSTN of the pre-competition era would likely conclude that they were predominantly private.120 Pre-divestiture AT&T in the United States and Bell Canada are examples.

The PSTN is, of course, subject to detailed public regulation in Canada and the United States. While the character of that regulation has recently changed dramatically, from a focus on the fairness of retail rates in the monopoly era to the fairness of wholesale rates and interconnection terms in the competitive era, extensive regulation still governs the telecommunications industry in both countries. The current goal of telecommunications regulation is the creation and support of competitive markets for telecommunications services of all kinds, at every level. This sea change in regulatory approach is informed by the belief that competition, not regulation, is the best way to serve the public interest in the provision of telecommunications services.

The fundamental challenge for regulators arising from this change in approach has been to preserve the positive features of the monopoly era in the transition to the competitive era. One of the most difficult matters has been to find ways to preserve the functionality and seamless operation of the telephone system during the unprecedented move from a unified network under the control of one entity, to a diverse network operated by (theoretically, at least) many different entities. This process has required the creation of new technical interfaces and corporate structures to provide for the sharing of many technical and physical elements of the PSTN. The PSTN has thus changed from one unified network into a network of interconnected networks, which together provide the same end-to-end services provided by the former monopoly carriers, but in competition with each other. While many of the public goals of telecommunications regulation remain, the entities actually providing the services have changed dramatically. It is this model of the new, competitive PSTN on which analysis must be based.

There are, of course, many different information objects involved in the PSTN, and this brief analysis cannot cover them all. Different objects will therefore be used

120 Except where the monopoly telephone company in question was state-owned, such as the telephone utilities of the Canadian prairie provinces, or a cooperative, like the many rural telephone cooperatives which flourished throughout Canada and the United States in the early and mid-20th century.

to illustrate different points. Several come to mind. For instance, consider the different information functions performed by telephones, voicemail boxes, telephone books, central office switches, and CCS7, the common channel signaling system.

Telephone sets are no longer provided exclusively by the telephone company. They can be bought at department stores, and now computers can function as telephones. The information function which the telephone serves is the conversion of voice conversations into electrical impulses (which are usually further converted into ones and zeros by digital switches), and vice versa. Since telephones can take many forms and are obtainable in many ways, the information function which they serve cannot be categorically described as public or private. However, the service - the impulses which telephones work with - is provided by telecommunications carriers. In Canada and the United States, telecommunications carriers are predominantly privately-owned but operate under obligations established by public bodies. The information function of voice signals would therefore likely be classified as being at least partially public on Rutkowski's indicia.

Similar considerations would apply to voicemail boxes, telephone books, and central office switches. While the particular regulatory requirements applicable to each are different, they are all provided by bodies operating under obligations established by public bodies. An interesting example, though, is provided by CCS7. CCS7 is a suite of software protocols used by local exchange carriers to set up, control, and end telephone calls. It can do much more, too, like facilitating access to voicemail services and other value-added services. Most importantly, CCS7 allows networks within larger networks to communicate on the same footing, to signal each other and offer each other services in a consistent manner. This information object, or "discrete, definable information function," is not provided by anybody in particular, but rather is a particular set of protocols adhered to in common. Other protocols could theoretically be used by individual networks, but those networks would not be able to participate in the larger network to the same degree as networks adhering to the standard protocols.

While CCS7 is not provided by any particular body (although in practice the computers of the incumbent local exchange carrier most likely "drive" the system used by all local exchange carriers in a particular area), it certainly operates under obligations established by public bodies. While internal technical matters like signaling and call processing did not interest regulators in the monopoly era (partially because these methods of manipulating calls did not exist then), they are absolutely essential to the viability of competitive local exchange markets. Competitors must be able to integrate their services with those of the incumbents, and it all has to appear seamless to callers. In Canada, for instance, the Canadian Radio-television and Telecommunications Commission (CRTC) mandated the interconnection of CCS7 signaling networks among local exchange carriers as part of the transition to competition in local service.121 Returning to the basic question, "who provides it?", then, it would appear that no one in particular provides the CCS7 information object; rather, it is a common practice among local exchange carriers, a practice imposed by a public body. CCS7 is part of the larger shared technical infrastructure of the local telecommunications environment, and can also be said to be at least partially public. We will return to CCS7 and the telephone system's technical infrastructure below.

In a sense, anyone with a telephone can effect communication with other telephones and the various information objects associated with it, such as directory assistance service or tele-conference services. The directory assistance databases of a telephone company cannot be directly accessed by the user, but can be accessed by other local exchange carriers, because the databases of all local exchange carriers constitute another publicly-mandated shared element of technical infrastructure. Even the statement that anyone with a phone can obtain directory assistance information from an operator (who often operates a computer which does the actual talking) assumes that one has access to both a telephone and dialtone.

A telephone company's customer database is another relevant information object, but is certainly not accessible by the public. Only those personnel and computers within the phone company can effect communication with that object, although there are other databases of customer information which local exchange carriers share to facilitate the ordering and billing of services. Access to these databases is broader than within the

121 Local Competition, Telecom Decision CRTC 97-8, May 1, 1997, at para. 35, <http://www.crtc.gc.ca/eng/telecom/decision/1997/d978.txt> (hereinafter "Local Competition Decision").

companies themselves, but would probably still not be considered available to the public. The core information object, though - basic voice telephony service - is an information function which has been and remains to this day heavily regulated with the goal of making it accessible to the public. While it would likely be considered public even without these obligations, because it is offered to the public, access-oriented regulation has given basic telephone service an even more public character. These reasons were likely some of the most significant in Rutkowski's decision to assign central office telephone switching objects a high public index figure in his analysis of the public telephone network.122

(c) Who owns it?

The various physical elements of telephone networks are generally owned by private carriers and their customers (with respect to customer premises equipment (CPE)). Incumbent carriers have unsuccessfully argued that the burdening of their networks with mandated rights of access for third parties constitutes an expropriation of their assets.123 While these rights are not in the nature of property rights per se (in the sense of restrictive covenants or licences), they are significant restrictions on the extent to which network owners can exercise their rights of ownership. Aside from this unique characteristic, the physical infrastructure of the PSTN can largely be characterized as privately-owned.

With respect to the technical infrastructure of the PSTN, matters are not quite as clear. The question of who owns the contents of telephone books has been a lively issue in copyright law in Canada and the United States for several years, with the final word (in Canada, at least) being that the listings themselves are in the public domain, but their arrangement can be protected by copyright.124 Canadian carriers apparently do not own the phone numbers which they assign to their customers, because the CRTC has required them to

122 In "Taxonomy," supra note 110 at 6, Rutkowski writes: "...on a scale of one to one hundred, a central office telephone switching object might rate a 70." Unfortunately this is the only figure he offers in the chapter, which does not provide a scheme for assigning particular public index figures.

123 See, for example, Bell Canada v. Unitel Communications Inc. (1992), 99 Dominion Law Reports (4th) 533 (Fed. C.A.).

124 Tele-Direct (Publications) Inc. v. American Business Information Inc. (1996), 74 Canadian Patent Reports (3d) 72, aff'd (1997), 76 Canadian Patent Reports (3d) 296 (Fed. C.A.), leave to appeal refused (21 May 1998) (Doc. 26403) (S.C.C.).

allow their customers to "take their numbers with them" when they switch local service providers (known as Local Number Portability (LNP)).125

While some elements of the local network might be considered partially public, or in the public domain, the majority of those elements (and the information objects which they represent) are privately-owned.

(d) Who controls it?

The concept of control is very important in telephone networks. The circuit-switched model of telephone networks requires that the physical connection between two callers be managed, from dialing, through answering, to terminating. While the high level of control which is built into telephone networks allows them to provide very high-quality service, it also rigidly defines those networks' capabilities. The CCS7 system defines certain functions which networks employing it can offer. While these are tremendous advances on early mechanically-controlled networks, they still limit network functionality in a way which TCP/IP networks do not, because TCP/IP networks defer control issues to higher-level services or applications.126 A telephone network is very much controlled by the telephone company (the contrast, of course, being that there is no "Internet company" - at least not today).

We can think about control of many other aspects of the PSTN, as well. Users can control the core information object in the sense that they can directly dial phone calls, or access voice messages from anywhere in the world, but they cannot alter the facilities which make the service work, or change the technical aspects of their service. Their access privileges are the lowest. For instance, by dialing a certain number the user manipulates his or her local switch, but that is all they can do with it. They cannot change the tables in it, nor

125 Implementation of Regulatory Framework - Development of Carrier Interfaces and Other Procedures, Telecom Public Notice CRTC 96-28, August 1, 1996, <http://www.crtc.gc.ca/eng/telecom/notice/1996/p9628-0.txt>.

126 This is the crux of the disagreement between the "Bellhead," or telephone company, and "Nethead," or Internet user, views of how networks should work. This debate is well presented in T.M. Denton Consultants, "Netheads versus Bellheads: Research into Emerging Policy Issues in the Development and Deployment of Internet Protocols - Final Report" (Report prepared for Industry Canada) (1999).

connect new loops. Only the publisher of the telephone book can control what goes in it, while any caller can control what message goes into another user's voicemail box.

A particularly relevant information object in the PSTN context is the number space: the pool of combinations of Arabic numerals which comprises the total number of different telephone numbers available in any particular zone on the network. Not only are telephone numbers not considered to be owned by either customers or the carriers which assign them, they are further considered a shared resource which must be managed in the interests of all users and carriers. Canada's incumbent and prospective local carriers have created the Canadian Numbering Administration Consortium (CNAC)127 to handle the management and allocation of phone numbers in Canada, a task which was traditionally just another internal technical matter for the monopoly phone companies. Since all carriers need numbers to assign to customers, and uncoordinated use of numbers would likely be wasteful of what is theoretically a scarce resource, the industry and the CRTC ensured that these responsibilities were in the hands of a neutral entity to which all carriers contribute.
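The finite scale of this shared resource can be illustrated with a rough back-of-the-envelope calculation. The sketch below assumes the North American Numbering Plan format (NXX-NXX-XXXX, where N is any digit from 2 to 9 and X is any digit), and it ignores reserved and special-purpose codes, so the figures are illustrative upper bounds rather than actual NANP capacity.

```python
# Rough size of the North American telephone number space (illustrative only;
# reserved and special-purpose codes are ignored, so these are upper bounds).
N = 8   # choices for an "N" digit: 2 through 9
X = 10  # choices for an "X" digit: 0 through 9

area_codes = N * X * X                       # NXX format -> 800 possible codes
numbers_per_area_code = (N * X * X) * 10**4  # NXX-XXXX -> 8,000,000 lines each
total_pool = area_codes * numbers_per_area_code
```

Even this generous upper bound is finite, which is why uncoordinated assignment by competing carriers would waste the pool and why a neutral administrator is needed.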

The CNAC, then, could be said to be in control of the number space, and even though its members are private carriers, the numbers themselves are viewed as public, in the sense that they are shared. This conforms to Rutkowski's comment that "[g]enerally, if the control of an information object is anonymously equal, it can be regarded as public."128 The considerations relevant to the local telephone number space can be expected to be quite similar to those related to the Internet name and number spaces, about which more below.

(e) Who pays for it?

Finally, the economic costs associated with the various "information objects and their array in networks" are borne by many different entities in the PSTN. However, no elements of the PSTN are directly paid for by public bodies (except, of course, where the telephone company itself is a public body). One might argue that the public monopoly which the incumbent telephone companies enjoyed until recently allowed them to build out their networks using public (i.e. ratepayer) funds, but now that all aspects of the industry are

127 .

128 Rutkowski, "Taxonomy," supra note 110 at 6.

competitive, subsidies must be explicit. Carriers compensate each other for their use of one another's facilities, and carriers serving high-cost areas have access to central funds which subsidize such service out of revenues from more profitable areas. While the former monopoly model may give rise to an impression that the public "paid for" the PSTN, in the modern environment that conception is inapplicable because there are no more "public monopolies."

(f) Public Index Figure

The PSTN can be said to have a high public index figure because several of the indicia discussed above tend towards the public, instead of the private, end of the scale, for various reasons. Some objects are public because public regulation says that they are, while others are public because they are shared by all network participants. Many objects are also private, particularly since the transition from monopoly to competition in telecommunications. It is reasonable to suggest, however, that on balance, the PSTN is more a public network than a private one.

Much more than the above indicia, what makes the PSTN a public network in the minds of most are its common carrier status and state-imposed obligations to serve the public. Eli Noam explains:

In the traditional network environment, the granting of access and non-discriminatory content-neutrality is required of the general "public" networks by law, common carriage regulation, and even common law. [...] One of the central observations of the "law and economics" school of thought has been the fundamental economic efficiency of the common law. The implication is that common carriage, as the product of common law judges later codified by statutes, was an economically efficient institution. Among its purposes were reduction of market power; protection of an essential service; protection of free flow in good[s] and information; promotion of basic infrastructure; reduction in transaction cost; and limited liability.129

129 E.M. Noam, "Beyond Liberalization: From the Network of Networks to the System of Systems," in Noam & NiShuilleabhain, supra note 74: 423 at 428.

Common carriers have special obligations and benefits which other firms do not. For

instance, under Canada's Telecommunications Act,130 Canadian carriers131 may not discriminate among customers,132 may not control the content of messages they carry,133 and can be ordered to provide service where they might otherwise not wish to.134 They also benefit from limited liability towards their customers. The CRTC-approved Terms of Service (TOS) of Bell Canada are a typical example:

Except with regard to physical injuries, death or damage to customer premises or other property occasioned by its negligence, Bell Canada's liability for negligence, including negligence with regard to intercept, reference of call service and emergency service from coin telephones, and also for breach of contract where the breach results from the negligence of Bell Canada, is limited to the greater of $20 and three times the amounts refunded or cancelled in accordance with Articles 13.1 [Directory Errors and Omissions] and 15.1 [Service Problems], as applicable.135

Communications regulators have tried to maintain the delicate balance of common carrier obligations and privileges during the transition away from the most beneficial of those privileges, monopoly. One of the most difficult tasks has been maintaining the obligations of incumbent carriers to serve uneconomic customers, while giving their competitors sufficient incentives to build out the networks which will ultimately compete with the incumbent networks in as many areas as possible. The maintenance of these obligations (and universal

130 S.C. 1993, c. 38. An office consolidation of the Act is available on the CRTC Web site at .

131 A "Canadian carrier" is "a telecommunications common carrier that is subject to the legislative authority of Parliament," while a "telecommunications common carrier" is "a person who owns or operates a transmission facility used by that person or another person to provide telecommunications services to the public for compensation." Ibid., s. 2(1).

132 Ibid., s. 27(2): "No Canadian carrier shall, in relation to the provision of a telecommunications service or the charging of a rate for it, unjustly discriminate or give an undue or unreasonable preference toward any person, including itself, or subject any person to an undue or unreasonable disadvantage."

133 Ibid., s. 36: "Except where the Commission approves otherwise, a Canadian carrier shall not control the content or influence the meaning or purpose of telecommunications carried by it for the public."

134 Ibid., s. 24: "The offering and provision of any telecommunications service by a Canadian carrier are subject to any conditions imposed by the Commission or included in a tariff approved by the Commission."

135 Bell Canada Terms of Service For Regulated Services (Effective 25 September 1986).

service schemes more generally) is another important element of what has been called the "new" competitive public telephone network.

Beyond the vague calculus of his public indicia, Rutkowski rightly recognizes the importance of accessibility of telecommunications services. He concludes his discussion of the taxonomy of public and private networks by noting that "there may be individual information objects that represent such an important public asset, that they should be protected with a high 'public index factor'."136 Canadians and Americans have historically viewed telecommunications as essential to all facets of human activity: social, economic, political, and personal. For this reason, telecommunications has always been (and continues to be) treated as an essential public infrastructure. This does not mean that it must be state-owned, nor even regulated like other public infrastructures. Nor does it mean that every information object comprising it need be public (indeed, such a situation seems incompatible with a competitive environment), only that certain elements of the network should be designated as public, in the public interest. Rutkowski's insightful conclusion implies that those information objects which are so essential to the provision of public telecommunications services that they should be "protected with a high public index factor" should be subject to a system of governance which can give effect to the broader public interest in their operation. This is certainly a value which underlies the regulation of communications in Canada and the United States, and, as suggested in Section VII below, should not be discarded when we come to address the governance of the Internet. We will now apply Rutkowski's public network indicia to the Internet itself.

4. The Internet

Unlike SWIFT and one's local telephone service, it is impossible to define exactly who provides the Internet. Each individual's access can come from one or more of a variety of sources: a cable company, a dial-up ISP, or an office network, as examples. In the telephone context, while only one company typically provides initial access and then many other companies are involved in completing an international call, there are rigid legal

136 Rutkowski, "Taxonomy," supra note 110 at 6.

arrangements (as matters of both private and public law, domestically and internationally) which bind the companies together. The Internet, by contrast, has no such arrangements, as we have seen in discussions on peering and settlements. Packets are sent out onto "the network" (whatever that is) and other networks voluntarily bump them along. End-to-end connectivity is "provided" by the voluntary actions of many networks in between the ones contracted with by sender and receiver. However, we can be more specific about discrete information objects.

The popular Web index Yahoo!137 is provided, of course, by Yahoo! Inc., a publicly-traded American corporation. Yahoo! was one of the first and perhaps the most successful of the early Web indexes, but now has competition from other indexes as well as open-text search engines, like AltaVista.138 These free resources are among the most valuable information objects on the Web for end-users. Many services like them compete, and serve in niche markets. They are available for use by the public (that is, anyone with Internet access), but they cannot be modified by the public. They are not provided by public bodies, nor do they operate under any obligations imposed by public bodies. Yahoo!, AltaVista and other services like them can therefore be categorized as private information objects.

A different kind of information object is the name server which each ISP operates. This function, which is necessary to match up Internet address requests with desired locations (whether for e-mail or Web traffic), is provided by one's ISP, but is generally done using the Berkeley Internet Name Domain (BIND), public domain software available for free from the ISC. Almost all network operators use BIND as the reference implementation of the DNS, and as such, it has come to define the DNS. The ISC describes itself as follows:

Our goal is to produce high-quality reference implementations that meet production standards. Reference implementations of Internet standards often have the weight of defacto [sic] standards and our goal is to ensure that those reference implementations are properly supported and made freely available to the Internet community. [...] Our code helps keep key protocols running by insuring

137 <http://www.yahoo.com>. Also see Yahoo! Canada, .

138 . Similarly, see AltaVista Canada, .

interoperability, compliance with important aspects of the standards, and by providing an easy "plug-and-play" solution for vendors to issue products that are compatible with the rest of the internet.139

The sponsorship of this non-profit corporation, which has such tremendous influence over how the Internet operates, is another example of the unique cooperative spirit which supports many aspects of the Internet's technical infrastructure:

ISC efforts continue to be supported by the donations of generous sponsors, and other parties who believe that freely available implementations of key protocols are necessary to keep the Internet running. The ISC administers these grants of money and equipment to qualified software developers who then create and/or maintain freely available software used on most of the Internet.140

After protocols are collaboratively developed in the IETF, they are often supported by voluntary entities and non-profit corporations like ISC.

The DNS might be labeled a public information object for many reasons. Not only do unrelated networks provide address resolution and transiting of packets "for free," but the software which facilitates these services is in the public domain, being developed, maintained, and distributed for free by a non-profit corporation which explicitly sees itself as offering a public service to the Internet community. Except for the fact that IANA was not incorporated, this description also fits Jon Postel's IANA. Postel explicitly viewed his work as a public service to the Internet community. He considered TLD registries to be "trustees for the delegated domain," with "a duty to serve the community." In RFC 1591 he wrote: "[c]oncerns about 'rights' and 'ownership' of domains are inappropriate. It is appropriate to be concerned about 'responsibilities' and 'service' to the community."141
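The name-to-address matching that BIND performs can be made concrete by sketching the wire format it speaks. The snippet below hand-assembles a minimal DNS query for an A record following the message layout of RFC 1035; the hostname and query ID are arbitrary illustrations, and a real resolver such as BIND layers caching, retries, and response parsing on top of this simple request.

```python
import struct

def build_dns_query(hostname: str, query_id: int = 0x1234) -> bytes:
    """Assemble a minimal RFC 1035 DNS query for an A (address) record."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, other counts zero.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each dot-separated label is length-prefixed,
    # and the whole name is terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("www.yahoo.com")
```

Sending these bytes over UDP port 53 to a name server would yield a response containing the matching IP address; the point here is only that the "matching" the thesis describes is a well-defined, mechanical protocol exchange.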

It is notable that both IANA and ISC are essentially institutional names for highly-respected and trusted individuals: Jon Postel in the case of IANA, and Paul Vixie in the case of ISC. While each agency has other staff who actually do most of the work, the moral authority which each of Postel and Vixie have wielded in the Internet community for

139 Internet Software Consortium, "About the ISC," .

140 Ibid.

141 RFC 1591, supra note 17.

many years should not be underestimated. We might speculate as to the extent to which Vixie's goal, that of ensuring that reference implementations of Internet standards are properly supported and made freely available to the Internet community, could survive without him. As NSI has proven, control over de facto standards in technical infrastructure can be very lucrative. BIND is a de facto standard, used by all of the largest commercial networks, and holds an effective monopoly. The important difference between NSI and ISC, of course, is that ISC is non-profit, providing another reason why the DNS might be described as a public information object based on the question: "who provides it?"

Finally, we might ask who provides the IP number space on which the DNS and the rest of the Internet relies. We know that, unlike domain names, IP addresses must be globally unique across the Internet. While two instances of the same domain name might cause confusion among users, two of the same IP numbers would be intolerable for the computers which route traffic over the Internet. IP numbers have been managed relatively uncontroversially by regional number registries for many years. Numbers were previously allocated to networks in the Americas, Sub-Saharan Africa and the Caribbean by NSI, but in 1998 this task was transferred to the American Registry for Internet Numbers (ARIN).142 ARIN's articles of incorporation describe two of its purposes as "to manage and help conserve scarce Internet protocol resources, and to educate Internet protocol users on how to efficiently utilize these scarce resources as a service to the entire Internet community."143 One wonders whether the relatively minimal commercial value of IP addresses versus domain names had anything to do with NSI's eagerness to offload responsibility for their assignment. Similarly, the presence of the "public resource" language in ARIN's articles suggests that their drafting was influenced more by members of the old-line Internet community than the fanatically free-market-oriented domain name business community.
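The registry function ARIN performs can be sketched in miniature: a pool of address space is carved into non-overlapping blocks, each delegated to exactly one network, which is precisely what keeps IP addresses globally unique. The pool and block sizes below are arbitrary illustrations using private 10.0.0.0/8 space, not actual ARIN allocations.

```python
import ipaddress

# Hypothetical registry pool (private address space, for illustration only).
registry_pool = ipaddress.ip_network("10.0.0.0/8")

# Carve the pool into /16 blocks and hand them out in order, so that
# no two recipient networks can ever hold overlapping address space.
allocations = list(registry_pool.subnets(new_prefix=16))
network_a, network_b = allocations[0], allocations[1]
```

Because every allocated block is disjoint from every other, any address identifies exactly one network worldwide; the scarcity the text describes is simply the finite number of such blocks in the pool.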

(b) Who accesses it?

Returning to the Internet in general, it appears that anyone with a modem or other form of physical access to the network can access the Internet itself. There are no

143 Article 7(4), Articles of Amendment of the Articles of Incorporation of The American Registry for Internet Numbers, Ltd.,

Perhaps the most instructive information object with regards to accessibility is the InterNIC, or Internet Network Information Center.145 InterNIC is essentially the database of gTLDs registered with NSI. Almost all Internet addresses which "matter" on the commercial Internet are gTLDs registered with NSI. The InterNIC is the automated interface between that database and the Internet community. Individuals and Web service providers use the InterNIC to apply for and manage their domain names and those of their customers. Registration data, such as IP addresses, contact personnel, and registration status are available on existing domains and are easily searchable using the WHOIS database. The InterNIC database can be updated by domain name registrants as information, such as underlying IP addresses and plain old street address, changes. The most important function of InterNIC, of course, is determining whether a desired SLD is taken or not, and by whom.
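The WHOIS searches described above use a deliberately simple protocol (RFC 954): open a TCP connection to port 43 of the registry's server, send a single query line terminated by a carriage return and line feed, and read the reply until the connection closes. The sketch below is a generic illustration of that exchange, not NSI's actual interface; the server name is an assumption for demonstration purposes.

```python
import socket

def format_whois_request(domain: str) -> bytes:
    # A WHOIS request is one query line terminated by CRLF (RFC 954).
    return domain.encode("ascii") + b"\r\n"

def whois_query(domain: str, server: str = "whois.internic.net",
                timeout: float = 10.0) -> str:
    """Query a WHOIS server: connect to TCP port 43, send the domain,
    and read the free-text reply until the server closes the connection."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(format_whois_request(domain))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("latin-1")
```

The openness of this interface, requiring no authentication at all, is part of what made the community regard the underlying registration data as a public resource.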

The InterNIC began as one of three projects directed by the NSF. NSI was selected to provide name registration services and its responsibilities were set out in the NSF-

144 Local rules, of course, at the receiver's end may prevent the transmission of the message. For instance, a network operator may determine that e-mail coming from a particular second-level domain is frequently 'spam' or junk e-mail, and set its mail server to automatically delete all mail from an address within that SLD. Many network operators use another Paul Vixie product, the Real-time Blackhole List (RBL), to automate this process. The RBL is frequently updated with the names of domains found to have been the source of spam, and messages heading for networks employing RBL are automatically 'filtered' and deleted. This product raises significant freedom of expression concerns which are beyond the scope of this study. See the Mail Abuse Prevention System (MAPS) RBL home page at <http://maps.vix.com/rbl/> and J. Clausing, "Crusader Thwarts Invaders of the E-Mailbox" New York Times CyberTimes (14 December 1998), .

145 Formerly at , now at <http://www.networksolutions.com/>.

NSI Cooperative Agreement.146 Symbolic of the transition from the non-commercial to the commercial eras, NSI changed the mode of access to the InterNIC on March 19, 1999, directing requests to an NSI marketing page instead of the usual InterNIC interfaces, many of which had been automatically consulted by other computers on the network for several

years.147 The information was still available, but the user had to hunt for it on NSI's heavily-branded site, instead of the comparatively generic InterNIC site. These passages from a news report demonstrate the degree to which the Internet community considered the InterNIC to be a public resource:

Many players bombarded ICANN and the U.S. Department of Commerce with questions about what was going on. If NSI, without prior notice to the U.S. government, could boldly claim as company property what many have come to view as a public resource, what would prevent the company from taking other actions to thwart competition? "It was a shot across the bow," says Rich Forman, founder of Register.com, which has become one of the largest registrars of cyberspace addresses and plans to become an ICANN 'test' registrar. "The InterNIC and the 'whois' database were almost like the U.S. Postal Service. It was quasi-public and had a lot of trust built up in it. It was a public entity that people had trust in, and now they've turned it into a private vehicle."148

At the core of the issue is NSI's assertion that it owns the contents of the key database:

"The registry information is our proprietary information," says NSI spokesman Chris Clough. "We've been providing it free to the community, but under the contract, all of the intellectual property gathered through the InterNIC process is our proprietary information."149

146 Supra note 24.

147 E. Wassermann, "Just Whose InterNIC Is It, Anyway?" The Industry Standard (26 March 1999), ; C. Oakes, "Companies Decry NetSol Policy" Wired News (18 February 1999), ; and M.J. Martinez, "Network Solutions Registers Dissent: What's Up With Domain Name Database?" ABCNews.com (27 March 1999), <http://abcnews.go.com/sections/tech/dailynews/netsol990326.html>.

148 Wassermann, supra note 147.

149 Ibid.

NSI has begun to monetize that information by offering a yellow pages-style directory of .com addresses.150 Given the fact that most commercial Web sites in the United States and Canada, at least, are in the .com, .net or .org TLDs, such a resource would, in fact, serve as a very effective yellow pages-type directory for Canadian and American businesses. A listing in the directory comes free with every name registered with NSI, but costs USD$119 when registering through one of NSI's competitors.151

Combined with NSI's refusal to recognize ICANN's authority over its business and its assertion of proprietary rights in the WHOIS database,152 the bundling of dotcomdirectory listings with name registrations suggests that NSI does not view Internet addresses or those portions of the associated architecture within its control as public information objects. The United States Department of Commerce has expressed its disagreement, on behalf of the Internet community and ICANN. A letter was sent to NSI on July 23, 1999 asserting, in part, that:

Nothing in the Cooperative Agreement nor in existing law gives NSI the right to restrict access to this information, which NSI obtained in the course of providing registry services - on an exclusive basis - under the authority of the United States.153

This issue is on-going and is intimately tied to the success or failure of ICANN, which needs NSI to sign its standard Registrar Agreement to complete its introduction of the first phase of competition in domain name registration. At stake, of course, is access to key elements of the Internet's technical infrastructure.

The very high level of access which has heretofore defined these information objects and many others like them suggests their classification as public. Keeping them that

151 J. Clausing, "A Planned Internet Yellow Pages Draws Federal Scrutiny" New York Times CyberTimes (26 July 1999), <http://www.nytimes.com/library/tech/99/07/cyber/articles/26ican.html>.

152 A claim which NSI restated before the U.S. House Commerce Committee hearings into ICANN on July 22, 1999. See J. Clausing, "Internet Address Company Grilled in Congress" New York Times CyberTimes (23 July 1999), .

153 Letter from Andrew J. Pincus, General Counsel, U.S. Department of Commerce, to Network Solutions, Inc., dated July 23, 1999, quoted in Clausing, supra note 151.

way may require a legal determination of their public or proprietary status. The Cooperative Agreement appears to be silent on the issue of database title on termination, so evidence would have to be led as to its public nature in the Internet community, to respond to NSI's ownership claims. Such a proceeding, whether it be before a United States House committee or a federal court, might yield findings of fact on many of the vague concepts which pervade this debate, such as "Internet community," "root zone," and the authority of the Internet's various informal governance bodies.

(c) Who owns it?

It is often said that no one owns the Internet, and at a general level, this is quite true. The Internet is not listed on stock exchanges, although hype about it has seemingly carried them in 1998 and 1999. The ARPANET and NSFNET might be said to have been "owned" by the United States, but NSFNET was "privatized" in 1995. Anthony Rutkowski expressed the view in 1997 that the United States Department of Defense continues to "own" the IANA root.154 Individual network operators, of course, own the hardware associated with their networks. Most of the Internet's key protocols, and even the software implementations of them, though, are in the public domain. Yahoo!'s database, on the other hand, is protected by copyright in favour of Yahoo! The MP3 digital music files which are becoming increasingly popular on the Internet are subject to myriad property interests, with the user only receiving a limited licence to play them. The fibre optic and other cables over which Internet packets travel are, again, owned by many different entities, some public and some private.

Questions relating to ownership of the elements of the Internet's technical infrastructure can be expected to continue to arise for many years yet. The domain name controversy, and in particular the question of whether to add new TLDs. has brought this issue to the fore, whereas in the Internet's early years it does not appear to have been considered at all. As long as numbers and names were plentiful and non-conflicting, there

154 See "Competing Models of Internet DNS Service Governance" (20 September 1997) posted on the home page of the World Internetworking Alliance (a DNS advocacy group coordinated by Rutkowski),

There is clearly a conflict between the notion that the Internet's coordinating functions and databases are public services, and attempts to commercialize more and more aspects of it. Perhaps no one could have predicted the future value of domain names in 1993, when the task of registering them appeared to be just as mundane as allocating IP addresses. Perhaps if many more TLDs were available and their registration had always been shared, there would not be nearly as much focus on the question of whether domain names can be owned or not. Ideally there would be enough for everybody. While IP addresses are also theoretically scarce, the Internet community seems comfortable with their being managed by ARIN "as a service to the entire Internet community," and not subject to market forces. Yet the situation with domain names is inexplicably different. The question of who "owns" the legacy root, the WHOIS database, and domain names themselves is only beginning to be addressed by the Internet community and ICANN, and can be expected to take some time to be resolved. However, the fundamental conflict between identifiers as public resources and as private property will continue to animate the debate.

155 This issue is perhaps the most written-about aspect of the Internet's infrastructure. Some of the more insightful domain name/trade-mark articles are: D.L. Burk, "A First Look at the Emerging Law of Cybermarks" (1995) 1 Richmond Journal of Law & Technology 1; R. Shaw, "Internet Domain Names: Whose Domain is This?," in Kahin & Keller, supra note 91: 107; G.P. Albert, Jr., "Right on the Mark: Defining the Nexus Between Trademarks and Internet Domain Names" (1997) 15 John Marshall Journal of Computer & Information Law 277; G. Weiswasser, "Domain Names, The Internet, and Trademarks: Infringement in Cyberspace" (1997) 13 Santa Clara Computer & High Technology Law Journal 137; and D.W. Maher, "Trademark Law on the Internet - Will it Scale? The Challenge to Develop International Trademark Law" (1997) 16 John Marshall Journal of Computer & Information Law 3. For a contrary view, see M. Mueller, "Trademarks and Domain Names: Property Rights and Institutional Evolution in Cyberspace" (Telecommunications Policy Research Conference, Arlington, Virginia, October 3, 1998) [unpublished].

(d) Who controls it?

With respect to the fourth of Rutkowski's public indicia, a similar pattern emerges. No one appears to control the Internet overall, while many different parties control certain aspects of it. Individual users control their own e-mail boxes by being able to filter out messages from certain people and compose, reply to, forward, and delete messages. Their network administrators, though, have a higher level of control and can change the overall architecture of the mailbox and control the network's mail server. What is most striking, though, is the way the Internet routing system seems to work as a consequence of the voluntary actions of thousands of independent networks all over the world. While ISC writes the software which most people use, it does not control routing. Similarly, while Cisco manufactures the majority of routers in the network, it does not control routing either.
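How the announcements of independent networks compose into a working routing system, with no single party in control, can be suggested with a toy example. Each network announces the address prefixes it can reach, and a router simply prefers the most specific (longest) matching prefix. The prefixes below are reserved documentation addresses and the AS numbers are from the private-use range; nothing here reflects real announcements.

```python
import ipaddress

# Toy routing table: prefixes "announced" by independent networks.
# Documentation prefixes and private-use AS numbers, for illustration only.
routes = {
    ipaddress.ip_network("192.0.2.0/24"): "AS64500",
    ipaddress.ip_network("192.0.2.128/25"): "AS64501",  # more specific announcement
    ipaddress.ip_network("0.0.0.0/0"): "AS64502",       # default route
}

def next_hop(dest: str) -> str:
    """Choose the most specific (longest-prefix) match, as routers do."""
    addr = ipaddress.ip_address(dest)
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]
```

Each entry in the table comes from a different, uncoordinated network, yet the longest-prefix rule lets any router combine them into consistent forwarding decisions, which is the sense in which control is distributed rather than centralized.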

Rutkowski specifically provides for the categorization of information objects like the DNS and routing systems, which are contributed to by many different people, yet beyond the complete control of anybody in particular. His statement that "[g]enerally, if the control of an information object is anonymously equal, it can be regarded as public"156 is an apt characterization of many elements of the Internet's technical infrastructure, which have been considered to be "public services" and "public resources" by the Internet community for many years. In fact, it has only been competing demands for SLD registrations, primarily in the .com domain, which have driven recent claims that some aspects of that infrastructure are private, not public.

--
156 Rutkowski, "Taxonomy," supra note 110 at 6.

This final question is the subject of the emerging field of "Internet economics."157 Because the commercial Internet has expanded so dramatically on the basis of the simple "all you can eat," "one size fits all" model of flat-rate access, the actual costs of the various elements of the Internet are hidden from the user. These costs are currently being sorted out by the industry. QoS innovations will permit a much more precise apportionment of costs, at least for premium services, and usage-based billing may be the way of the future. On the other hand, that statement has been made several times before and still has not come true. At any rate, the incompatibility of flat-rate pricing with different classes of service suggests that these changes will come eventually. Whether that trend will extend to explicit charges for routing and transiting services (which are already implicitly charged for through the peering and interconnection in-kind exchange) remains to be seen. Indeed, as business models at the content level evolve, charges for content also become more likely.
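The contrast between flat-rate and usage-based pricing can be made concrete with a toy calculation. All figures here are invented for illustration only: if the same total revenue were apportioned by traffic instead of split evenly, light users would pay far less and heavy users far more.

```python
# Toy comparison of flat-rate and usage-based pricing.
# All figures are invented for illustration only.

monthly_traffic_gb = {"light": 1, "average": 10, "heavy": 100}
flat_rate = 20.0                                   # each user pays $20 regardless of usage

total_revenue = flat_rate * len(monthly_traffic_gb)
per_gb = total_revenue / sum(monthly_traffic_gb.values())  # same revenue, spread by traffic

for user, gb in monthly_traffic_gb.items():
    print(f"{user:>7}: flat ${flat_rate:6.2f}   usage-based ${per_gb * gb:6.2f}")
```

Under these invented figures the heavy user's bill rises from $20 to roughly $54 while the light user's falls below $1, which is why flat-rate pricing is said to hide the actual costs of heavy use.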

One of the most striking features of the early Internet was that everything looked free (especially if one gained access through a corporate or institutional network). What is more accurate is that someone else paid for it. Universities, supported in both Canada and the United States by substantial government subsidies, paid for a good deal of the connectivity to the early Internet. The number of Internet-related companies begun by students should cause no surprise: they had Internet access of a quality not available to the general public. In this sense the early Internet, and certainly the ARPANET and NSFNET, could be classified as public on the basis that they were largely paid for by public bodies.

To the extent that end-to-end connectivity today is "paid for" by a number of different parties, Rutkowski's statement to the effect that anonymously controlled information objects can be regarded as public can be modified to fit this issue as well. Since comprehensive connectivity is "paid for" more or less anonymously by the millions of unrelated individuals and networks which together make the Internet work, the end-to-end connectivity which defines the Internet should be considered in some senses as a public phenomenon.

--
157 A good introduction to the field is McKnight & Bailey, supra note 22.

[Figure: Public Index]

On balance, most of the Internet's technical infrastructure and certain aspects of its physical infrastructure can be characterized as "public." Rutkowski himself appears to support this view, elsewhere describing the modern Internet as a "large-scale global public infrastructure and marketplace."158 Even Milton Mueller, who suggests that a system of private property would bring order to the technical infrastructure, has acknowledged the public nature of those elements of the network which are commonly shared. In a 1996 chapter describing the remarkable shift towards private control of corporate communications networks, he concludes:

We need a new metaphor or model for this new relationship between public and private networks. Many speak in grand terms of network "confederations" and "electronic highways." I propose a humbler but more accurate analogy. The appropriate model is not highways but plumbing. In plumbing systems there is a large-scale public infrastructure for general distribution, but the domain of these public utilities basically stops at the building premises. Once inside the building, the owner controls the equipment choices and configuration.159 (emphasis in original)

This characterization of public and private telecommunications networks is a very good way of describing the Internet. Local networks are very much private, but what happens between them is fundamentally public. We have examined several elements of the "large-scale public infrastructure" which provides "general distribution" of Internet traffic. The large-scale public infrastructure to which Mueller refers has also been labeled a "public metanetwork" by Brian Kahin.160

Mueller recognizes the enduring importance of public telecommunications networks in an age of private networking:

Public networks retain powerful - indeed, insurmountable - advantages outside [users'] premises due to the high transaction costs

158 Rutkowski, "Factors," supra note 112 at 103.

159 M. Mueller, "The User-Driven Network: The Present Extent of Private Networking in the United States," in Noam & NiShuilleabhain, supra note 74: 65 (hereinafter "The User-Driven Network") at 81-82.

160 See Kahin, supra note 74.

and other entry barriers associated with the use of public rights of way, and because of the significant economies of scale in shared transmission.161

Indeed, public networks will remain essential to public telecommunications, and far from withering with liberalization, the discipline of telecommunications law will be occupied with the challenges of the new public network for many years to come. New access issues relating to buildings and rights-of-way, and relationships between competing carriers who must cooperatively carry out the work of the formerly unified monopoly operators, will continue to arise. Far from becoming obsolete, questions of how public networks are governed will become more relevant in environments featuring many interconnecting public networks, where previously there was only one. The Internet should be understood as being part of this environment and this debate. Whether this new metanetwork or network of networks will be considered a public network or merely an abstraction from a number of private networks is a central question for the evolution of telecommunications law. This thesis rests on the conviction that the Internet is indeed a public network - an unprecedented global public network.

The Internet is an open, accessible, global, and fundamentally public network. Focusing on the private actions of individual networks and operators certainly gives the impression that the Internet is only about private activities. The example of two individuals having an e-mail conversation best captures the shortcomings in the "private" view of the Internet. It assumes that only a sender and receiver are required for an e-mail conversation. By focusing on the ends, we lose sight of what is in the middle: the infrastructure which makes the conversation possible. Neither sender nor receiver knows the people whose networks bump their conversation's packets along the network, nor do they compensate them. The parties in the middle send those packets along because they expect that other networks will do them the same favour. The high value placed on universal interconnection has thus far led network operators to terminate and carry other networks' packets because it

--
161 Mueller, "The User-Driven Network," supra note 159 at 82.

is in the interests of all networks that all packets be delivered and be deliverable to as many destinations as possible.
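The hop-by-hop delivery just described can be sketched as follows. The network names and table entries are invented for illustration: each network consults only its own next-hop table, so the packet is carried by intermediaries that neither the sender's nor the receiver's network controls or compensates.

```python
# Sketch of hop-by-hop forwarding across independent networks.
# The topology and names are invented for illustration.

next_hop = {                      # each network's own forwarding decision
    "net-A": {"net-D": "net-B"},  # A knows only that D-bound traffic goes to B
    "net-B": {"net-D": "net-C"},
    "net-C": {"net-D": "net-D"},
}

def forward(packet_dest, start):
    """Follow next-hop tables until the destination network is reached."""
    path, here = [start], start
    while here != packet_dest:
        here = next_hop[here][packet_dest]
        path.append(here)
    return path

# Sender on net-A, receiver on net-D: nets B and C voluntarily carry
# the packet, although neither endpoint compensates them.
print(forward("net-D", "net-A"))   # → ['net-A', 'net-B', 'net-C', 'net-D']
```

No network holds the whole route; comprehensive connectivity emerges from each network's local, voluntary decision to pass traffic on.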

The Internet is a remarkably versatile platform, which anybody can experiment on and even change, at different levels. These ideas touch on the fundamental openness of the Internet. The fact that nobody in particular is "in charge," combined with the many elements of its technical infrastructure which are shared in the interests of universal interconnection and interoperability, make the Internet a fundamentally public space. While

the Internet's predecessor networks might have been described as "public" because they were provided and controlled by government agencies, the modern Internet is public because it is a massive, globally-shared common communications environment.

Despite the appearance that the Internet has retained this open, accessible, and public character because no one has the power to change it, we should not lose sight of the people, institutions, rules, and principles which keep it the way it is. Commitment to the fundamental values of the Internet is what keeps the ISC distributing BIND for free, what leads independent network operators to transit other networks' packets for "free," and what makes the IETF work in an extremely competitive industry. There is a tangible commitment to maintaining the Internet as a unified infrastructure available to all. It is a mistake to characterize the network which these values have shaped as something of a state of nature, where interoperability and interconnection simply arose organically. Rather, these technical characteristics are the product of explicit decisions made by people with a particular vision of what the Internet is all about. This vision was born in a different era of the Internet, and as we shall see, is under attack in many ways as the commercialization of the Internet continues.

The challenge is to maintain the open character of the Internet while accommodating the investment and modifications which will be required to improve its usefulness and reliability. In order to ensure that the Internet's infrastructure develops in a way which maintains its open character, we will need effective public governance, on a global level, based on principled public network policy which can be applied to the innumerable incremental decisions which will be made in the development of the Internet in coming years.

THE EVOLUTION OF THE INTERNET'S GOVERNANCE STRUCTURES

The informal and overwhelmingly personal governance structure of the Internet's non-commercial era is not suited to its commercial future. While many calls for reform of the DNS came from individuals who were frustrated with Postel's refusal to add new TLDs to the IANA root, it must be recalled that it was Postel himself who began the reform process which continues to this day. He did this first by suggesting that up to 150 new iTLDs be added,162 and then, when he was not able to obtain the level of consensus which he preferred, he called on ISOC to begin the IAHC process. Postel knew that the question of adding new TLDs was one that affected the entire Internet community and therefore required a much broader consensus to implement. It appears that when the IAHC's supporters, and the new informal bodies which the gTLD-MoU spawned, took steps towards implementing the gTLD-MoU, its opponents successfully lobbied the United States government to step in and take over the process. The release of the Green Paper, as the NTIA's January 30, 1998 draft proposal is known,163 was a pivotal point in the reform process. Its next incarnation, the White Paper, describes the circumstances preceding the United States government's release of the Green Paper:

Although the IAHC proposal gained support in many quarters of the Internet community, the IAHC process was criticized for its aggressive technology development and implementation schedule, for being dominated by the Internet engineering community, and for lacking participation by and input from business interests and others in the Internet community. Others criticized the plan for failing to solve the competitive problems that were such a source of dissatisfaction among Internet users and for imposing unnecessary burdens on trademark holders. Although the POC164 responded by revising the original plan, demonstrating a commendable degree of flexibility, the proposal was not able to overcome initial criticism of both the plan and the process by which the plan was developed.

--
162 Supra note 30.

163 Supra note 44.

164 The POC (Policy Oversight Committee) was a top-level representative council called for in the gTLD-MoU.

Important segments of the Internet community remained outside the IAHC process, criticizing it as insufficiently representative.165

Almost exactly the same observations could now be made with respect to the ICANN process, which began in October 1998, and is ostensibly the embodiment of the plan called for in the White Paper.

It is remarkable to note the extent to which issues of process have hobbled the Internet community's own "private" reform attempts, and now ICANN. Entrepreneurs accuse large corporate trade-mark holders of trying to influence the process to their advantage, at the expense of "the little guy." The old guard engineers do not like the new influence of commercial interests, which in turn decry their low level of input compared to that of governments. The vaunted consensus-based decision-making processes which were thought to define the Internet have simply failed to produce meaningful progress on even the first major network policy issue which the commercial-era community has faced. This is partly because several of these matters, particularly management of the root zone, had never been governed by this idealized method. Rather, Jon Postel made decisions after (usually, but not always) consulting with a small group of peers. Implementations of the DNS may be debated in the IETF, but by a relatively small number of people (that is, by those who designed it or know precisely how it works - a very small group). The influence of Jon Postel and Paul Vixie, though, has been far more powerful, in that they have been the ultimate decision-makers with respect to this element of shared infrastructure.

Around the time of the Green Paper, Postel put together a "transition advisory group" of six highly influential members of the Internet engineering community. They were: Brian Carpenter, Program Director, Internet standards and technology, IBM, and Chair, IAB; the Director of network engineering at Verio, Board of Trustees member, ARIN, and Chair of the IETF's DNS working group; David Farber, Professor, computer and information science and electrical engineering, University of Pennsylvania, Board of Trustees member, ISOC; Geoff Huston, Technical Manager for Telstra (formerly Telecom Australia) Internet, and President, Internet Society of Australia; John Klensin, Senior Data Architect, MCI Internet Engineering Organization, and Member, IAB; and Steve Wolff, Executive Director, Advanced Internet Initiatives, Cisco Systems, and former Director, NSFNET.

165 White Paper, supra note 45 at 7.

These are the type of people whom Jon Postel consulted about changes in the Internet's architecture, not the Internet community at large. It is interesting to note that despite the employment of several of these men at major Internet companies like IBM, Verio, MCI, Telstra, and Cisco, they almost all went there from posts with institutional networks of the non-commercial era. While one might see the names of these companies in the resumes of the leaders of the Internet technical community and come away with the impression that these individuals are representatives of something of an industry group, we must recall David Clark's words about IETF members, for instance, "leaving their corporate allegiances at the door." These individuals' service to agencies like the IAB, IETF, and Postel's transition advisory group, is in their personal capacity.

Organizations like the IETF are industry bodies only to the extent that their members work for companies in the industry. Beyond that, the best interests of the Internet itself are paramount. This is particularly the case among the most well-known leaders. This appears to be something of a "code" among these individuals, alternately referred to as "Internet pioneers," "Internet old-timers," or the Internet's "old guard." Is it arrogance or a genuine belief that he and his fellow transition group members "know what's best" for the Internet which leads John Klensin of MCI to say:

One of the diseases of the Internet community is that there are a large number of individuals who say, "Well, everything would be all right if I were in charge." One of the things the community does not need is any more half-baked proposals, which result in endless, pointless flaming about the details and who was consulted and who wasn't.166

The implication is clearly that Internet-community-wide decision-making is inefficient and often destructive. This begs the crucial question: what happens when the close-knit cadre of

"old timers," steeped in the traditions of institutional networking, retires or becomes too frustrated to continue volunteering? It should not be surprising that none of the attempts at "consensus-based" Internet-community-wide reform have accomplished very much in the way of change so far. The loss of Postel is increasingly lamented because it just seemed so

166 Quoted in T. Spangler, "Net's Old Guard Shaping New DNS," Internet World (23 February 1998), <http://www.iw.com/print/1998/02/23/news/19980223-guard.html>. "Flaming" refers to heated public e-mail exchanges which often degenerate into puerile name-calling.

much easier when there was one person who could make decisions and command the support of almost everyone.

The ineffectiveness of Internet-community-based reform efforts suggests that popular conceptions of what that community is are not congruent with the reality of the commercial Internet. The stabilizing effect of the personal leadership of Postel and others created the illusion that management of the Internet's technical infrastructure is inherently uncontroversial. Indeed, much of the mythology of the Internet assumes that no one entity can exert control over any aspect of the network anyway. NSI's stranglehold over .com and recent moves to capitalize on this power indicate otherwise. If NSI is willing to openly defy the United States Commerce Department, it would not likely be interested in what any private governance body might have to say. Indeed, NSI's senior vice president for Internet relations has given a glimpse of NSI's likely response to any policy decision which might adversely affect it in any way. Speaking about NSI's negotiations with the NTIA over the profit which it should be guaranteed on competitors' registrations of .com, .net, and .org names through the SRS, Don Telage said:

We've invested a lot of time, sweat, blood and equity into the registry. We have a fiduciary responsibility to our shareholders in keeping that and to make a reasonable profit on it.167

In addition to his misstatement that the officers of NSI or NSI itself owe fiduciary duties to NSI's shareholders (any duties owed, of course, are to NSI only), Telage demonstrates that NSI is now most definitely more interested in its own earnings than in providing a public service to the Internet community.

Beyond NSI, there are many other very powerful interests with significant influence over the modern Internet: private interests which can be expected to act so as to increase shareholder value, not altruistically according to vaguely-defined and romantic ideals of what internetworking should be about.

167 Quoted in T. Spangler, "The Trouble with ICANN," Interactive Week (19 July 1999).

The Internet's governance structures must mature to keep pace with the network's role as an essential communications infrastructure for the entire world. If the Internet is in the nature of a public infrastructure, what is the significance of this characteristic for the way in which it is governed? The balance of this section will argue that the Internet's infrastructure should be subject to public oversight, in the interests of all participants. However, the words "public," "oversight" and "governance" need not necessarily imply "government." The Internet community needs to develop a mature conception of itself and of the network if it intends to be able to collectively manage the underlying infrastructure in the manner in which it has thus far been governed.

This will not be easy. At present, the Internet community, driven both by commercial interests and a dominant culture of American libertarianism, is violently anti-government and revels in its perception of the Internet as the ultimate individualist environment. Given the fundamental importance of the underlying infrastructure, however, and the strong possibility that the Internet will change in the future, the Internet community needs to be able to conceive of itself as having a collective interest in how that infrastructure is governed, and not simply assume that the dominant values of the past will continue to rule.

Perhaps the greatest single barrier to developing a mature conception of the need for public governance of the Internet's infrastructure is a prevailing orthodoxy that government, in any shape or form, is bad. While this may simply be an expression of the neo-liberal beliefs which currently dominate political discourse in the United States, this orthodoxy is particularly strong in the computer and Internet industries, whose recent financial performance seems to validate it. The "dead hand of government"168 is consistently presented as the enemy of the Internet, and the fate of the Communications Decency Act (most of which was declared unconstitutional by the United States Supreme Court in 1997) held up as proof.169

168 A phrase used by Milton Mueller in the public discussion period of the ICANN Governmental Advisory Committee (GAC) open meeting in Berlin, Germany (25 May 1999). Archived materials relating to this meeting are available at: <http://cyber.harvard.edu/icann/berlin/archive>.

169 Reno v. American Civil Liberties Union, 117 S. Ct. 2329 (1997).

More moderate expressions of this anti-government sentiment assert that the nature of the Internet is such that it is simply impossible for government to control. Referring to the "new networked economy," IBM President Lou Gerstner says:

The existing models of regulation, government-mandated control, and even the idea of effective national policy, will not apply. The policies and the issues we're grappling with aren't new. But the Net adds significant new dimensions. Because it is moving much faster than any bureaucracy, committee, or legislative process ever could, it is a mistake to think that the Internet will develop under the kind of regulation we were able to apply to, say, the phone system or even broadcast media at a point in time when most nations were on a gold standard and many markets were underdeveloped.170

Whatever else might be said about this statement, it obscures the fact that the Internet is the product of certain values and human actions by casting it as modern, technology-driven, and immune to any expression of public values or public will. Yet fundamentally public values are what explain the Internet's very existence.

Many like to think of the Internet as the ultimate tool to circumvent government, and that once government is gotten rid of completely, a state of paradise will prevail. The short-sightedness of this view is well captured by Lawrence Lessig:

When we don't have government running things; when we unite behind this mantra of anti-statism; when we erupt with this scream of what we don't want - do we know what we will have in exchange[?] When we don't have government, what will we have? For here's the obvious point: When government steps aside, it is not as if nothing takes its place. When government disappears, it is not as if paradise prevails. It's not as if private interests have no interests; as if private interests don't have ends that they will then pursue. To push the anti-government button is not to teleport us to Eden. When the interests of government are gone, other interests take their place. Do we know what those interests are? And are we so certain they are anything better?171

--
170 L. Gerstner, "A Policy of Restraint," <http://www.ibm.com/thinkmag/lou/restraint/text.html>.

171 L. Lessig, "Governance" (Draft 3.01) (Keynote, Computer Professionals for Social Responsibility (CPSR) Conference on Internet Governance, Massachusetts Institute of Technology, Cambridge, Massachusetts (10 October 1998)).

Certain management functions require coordination. In these cases, responsible, private-sector action is preferable to government control. A private coordinating process is likely to be more flexible than government and to move rapidly enough to meet the changing needs of the Internet and of Internet users. The private process should, as far as possible, reflect the bottom-up governance that has characterized development of the Internet to date.172

What has followed the creation of ICANN, of course, has not resembled this vaunted bottom-up governance at all. Perhaps this is because that "bottom-up governance" has actually meant decision-making by a very small number of like-minded American computer scientists, morally bound by a very community-oriented ethic of internetworking. The NTIA embraced the Internet community's anti-government beliefs by committing itself to a private-sector coordinating process. Yet the way that process has played out has led many observers, even libertarians, to question whether a private process is the right one.

A useful illustration of this phenomenon is provided in a short commentary by Solveig Singleton of the Cato Institute, an American libertarian organization. She writes, in part:

While we sleep, the Internet Corporation for Assigned Names and Numbers, ICANN, is creating a mechanism to subdue the Internet. The U.S. government created ICANN to administer a few technical rules. But ICANN seems poised to make itself an international government for the Internet, not a technical-standards body. ICANN's regime is neither democratic nor constitutional.173

172 White Paper, supra note 45 at 22.

173 S. Singleton, "The Internet Needs an Independence Day" (6 July 1999).

[t]he equipment that makes up the Internet is scattered all over the world. This decentralized arrangement makes the Net hard to govern. Some technical tasks are controlled by just a handful of people. For years, the Net was run by just one man, Jon Postel. Before his death last year, he worked quietly out of a California think tank, assigning the numbers computers need to find other computers.174

Somehow, mere "technical coordination" becomes "control," the "hard to govern" Net is "run by just one man," and, oddly, ISI becomes a "think tank." Further on, technical coordination is cast as even more powerful:

Whoever controls the domain-name system controls the Internet. Countries, companies, and individuals could vanish from the Internet overnight at the whim of the domain-name administrators. And as control passes from NSI to ICANN, the domain-name system's independence from the government is under threat.175

For a network which is supposed to be uncontrollable, control of the domain name system sounds surprisingly like centralized control. The Cato Institute was apparently uncritical of the autocratic power of Postel, since he, too, had the power to make countries, companies, and individuals vanish at his whim. Rather, NSI is said to have held that power, and as a for-profit government monopoly is apparently preferable to a non-profit government contractor or the non-profit ICANN. The point that whoever controls the DNS controls the Internet begs the question of what principles should inform the exercise of that formidable power.

The stated purpose of the US government's privatization of the DNS was, of course, to increase the Internet's independence from government. To the extent that it is still not independent, the problem more likely lies in the limitations of private coordinating

--
174 Ibid.

175 Ibid.

processes than in any plot on ICANN's part to "subdue the Internet." Singleton acknowledges that its faults lie with issues of process, not policy:

ICANN is now the government of the Internet. With its elite meetings and expensive retreats, it is not a democratic government. Nor is it a constitutional government. Where does ICANN's authority come from? How can abuse be prevented?176

Singleton's critique of ICANN cuts the legs out from under the ideal of self-regulation. Private bodies are to be preferred only as long as they behave like governments! No mention is made of the fact that one of the forces which prevented Postel from abusing his power was the oversight of the United States government agencies which contracted with ISI to perform the IANA function. The whole point of the White Paper-directed "NewCo" was that it was not to be government, that it need not have a constitution like a government, that it be "bottom-up." ICANN's critics now attack it for not observing due process and for not being subject to oversight by elected bodies. ICANN is stuck between the near-fanatical insistence among the Internet community (or its most vocal members) that it be private, that it not be government, and the requirement that it act publicly, like a government.

These conflicts demonstrate the potential weakness of self-governance mechanisms with respect to essential infrastructure. Self-governance is most popularly championed in the Internet content field, particularly with respect to privacy practices of Web site operators and the security of messages. The Clinton White House's dominant mantra that "the private sector should lead"177 was warmly received by business, as this excerpt from the Global Internet Project's "Policy Architecture for the Internet" demonstrates:

As the Internet becomes the primary channel for commerce worldwide, determining how the Internet is to be managed is a vital issue. Our highway needs rules of the road ... and a way to enforce them firmly but fairly. The Global Internet Project believes that the marketplace, not government, is best able to promote open access, provide a level playing field, stimulate innovation, offer security, protect privacy,

176 Ibid.

177 See A Framework for Global Electronic Commerce, supra note 34.

and respect intellectual property with the fewest and least intrusive oversight mechanisms. Accordingly, the industry is identifying the mechanisms which demonstrate its ability to self-govern in critical areas like privacy and content selection. In some cases international non-profit bodies are evolving to address issues such as domain names, and we may see other issues addressed in a similar manner in the future.178

Not only have the industry's attempts to demonstrate its ability to self-govern with respect to user privacy met with the consistent disapproval of the United States Federal Trade Commission (FTC),179 but the international non-profit body created to address domain names is broke and under siege from all sides, including the United States House Commerce Committee. The Global Internet Project (GIP), one of many trade associations dedicated to keeping government out of e-commerce, at least appreciates what is at stake:

The rapid adoption of the Internet has led us to treat it like a natural resource, as fundamental to the Information Age as clean air and fresh water. The more we use it, the more we take it for granted. We just expect the Internet to be there, just as we expect the lights to go on when we flip a switch or the dial tone to greet us when we pick up the telephone. But we can't take the Internet for granted any more than other resources. As Internet usage begins to reach critical mass, the industry must take steps to ensure that the Internet is 'always on' with open access and participation to all.180

These statements clearly demonstrate a desire that the Internet be governed in accordance with what we consider to be public values. They echo Solveig Singleton's demands that ICANN act like a government. Overall, there is a strong desire that public values prevail with respect to the Internet's infrastructure, but a distaste for public institutions.

178 Global Internet Project, "The Opportunity and the Challenge to Sustain Rapid Internet Growth: A Policy Architecture for the Internet" (Version 1.0).

179 See United States Federal Trade Commission, "Privacy Online: A Report to Congress" (June 1998), <http://www.ftc.gov/reports/privacy/toc.htm>, and "Self-Regulation and Privacy Online: A Report to Congress" (July 1999).

What is needed is a vision of Internet self-governance as a public matter of concern to the entire world, not just to the vaguely-defined "Internet community." This vision can be expressed in the form of a compound of what we traditionally think of as public and private governance. Dogmatic assertions that the Internet must be self-regulated ignore the significant public interests which the Internet will increasingly engage. The extent of the current stated commitment to the Internet as the primary global communications environment (or "global information infrastructure") must be fully recognized. As many recent works on the impact of the Internet on traditional communications regulation demonstrate, the Internet frustrates many aspects of existing national telecommunications and broadcasting regulation, and significantly reduces the ability of individual nations to impose national policy on communications originating and terminating within their borders.181 This rhetoric can go too far, of course, and as Eli Noam reminds us, the Internet is not truly beyond the domestic control of states at all:

At this point, people will usually assert that even if you wanted to do something about this, you simply cannot regulate the Internet and transactions over the Internet, so it is hopeless... In a way, that is not true. It is difficult to regulate the electronic transactions themselves, but communications is not just about bit streams and transactions. They also involve physical entities, people, institutions with domiciles and assets. Therefore, if you cannot catch the mobile parts in the system, you can go after the immobile parts, such as underlying transmission networks, physical delivery, packages, people, transmission facilities, assets, advertisers or whatever. [...] My conclusion is: if you want to regulate the Internet, you can!182 (emphasis added)

These comments relate mainly to content on the Internet, but to the extent that infrastructure is even more territorially-based, they also apply to technical and particularly physical

181 See E.M. Noam & A.J. Wolfson, eds., Globalism and Localism in Telecommunications (Amsterdam: Elsevier Science, 1997), particularly their "Introduction: The End of Territoriality in Communications" at xvii, and the "Policy Issues for the New Global Communications Environment" section at 297ff. Also see B. Kahin & C. Nesson, eds., Borders in Cyberspace: Information Policy and the Global Information Infrastructure (Cambridge, MA: MIT Press, 1997).

182 Quoted in Canada Senate, Subcommittee on Communications of the Standing Senate Committee on Transport and Communications, Final Report, Wired to Win! Canada's Positioning Within the World's Technological Revolution (May 1999), <http://www.parl.gc.ca/36/1/parlbus/commbus/senate/com-e/comm-e/rep-e/finalrepmay99-e.htm>, at Section II, "The Internet as a New Paradigm." infrastructure elements. Self-governance should be recognized as an alternative to state regulation, not the only option. What is needed is a more accommodating conception of how private and public governance forces can continue to work together to support the Internet's infrastructure, as they always have.

The Internet still has something of a sense of innocence about it. It feels fundamentally liberating, in that it makes international communications easier, more accessible, and more expressive than ever before, and all apparently without the need to pay the phone company. This, of course, is a misconception, but the impression of many people is that the Internet is about getting around entrenched interests, be they telephone companies, publishers, or governments. However, the Internet is fast becoming mainstream, as the trends discussed in Section VI demonstrate. Efforts to transfer voice telephone calls from circuit-switched networks to the Internet suggest that at some point, all telephone calls will be Internet transmissions. The same goal is held out for television. The issues at play are those of convergence, which have been mooted almost since digitization began to revolutionize communications of all kinds. While this thesis is limited to the narrower issues of governance of the Internet's infrastructure, the general issues at stake are the same - how should communications systems be governed in the public interest?

As the commercial Internet continues to move away from its quaint "gee-whiz" adolescence and replace more and more existing communications infrastructure, these broader questions of communications policy will increasingly be posed of the Internet. The idea that this global public communications infrastructure should be privately governed according to private interests (or even the collective interests of those private entities which can most impose their influence on the Internet) will become harder and harder to defend. As Eli Noam put it, "self-regulation has this marvelous ring to it, but there is just too much on the table now."183 The goal should be to find a workable model of Internet governance, not obsess about whether "government" should be involved or not. Lessig has described the stumbling-block of anti-government sentiment this way (speaking about content control, not strictly about infrastructure):

183 Quoted in W.J. Drake, rapporteur, Toward Sustainable Competition in Global Telecommunications: From Principle to Practice (A Report of the Third Annual Aspen Institute Roundtable on International Telecommunications) (Washington, D.C.: The Aspen Institute, 1999) at 85. The primary good here is a set of values, not absence of governmental interference independent of those values. And quite often - more than the Libertarians seem keen to admit - those values are only protected by a government acting - acting against tyrannies imposed by individuals, and by groups!184

We will return to these public values in Section VII below, when we ask what principles should inform the governance of the Internet's infrastructure. First, we must confront more of the prevailing orthodoxy about the Internet which denies its need for public governance.

Many writers have suggested that the Internet needs only "coordination," not governance, and that governance imports concepts of power structures and politics which are not appropriate in the Internet context. Milton Mueller explains:

I don't know who came up with the term "Internet governance." But it seems to have become the label of choice for the evolution of what is more properly called the legal and institutional framework of computer networking. It was an unfortunate choice of a term. "Governance" means control, the "act, process, or power of governing." National governments and corporate boards engage in "governance." Corporate governance refers to how a specific company elects its board of directors and adopts policies. But the Internet is not a political community. Nor is it a corporation or any kind of a bounded organization. It is a method by which millions of autonomous corporations, organizations, governments, and individuals interconnect and communicate. "The Internet" is a myth; there is no "The" there. There is only a process of internetworking. This process does not need "governance." It needs only coordination.185 (emphasis in original)

To the contrary, the evolution of the legal and institutional framework of computer networking is precisely what Internet governance is about. Mueller's interpretation of governance seems to focus on unified, autocratic control, not the notion of guiding, or influencing, as discussed in Section II. The Internet has not been autocratically controlled by any single entity since the ARPANET and NSFNET eras, but it has been guided and

184 Lessig, "Governance," supra note 171 at 12.

185 M. Mueller, "The 'Governance' Debacle: How the Ideal of Internetworking Got Buried by Politics" (Internet Society INET'98 Conference, Geneva, Switzerland, 21-24 July 1998) (hereinafter, "The 'Governance' Debacle"). influenced in significant ways by many people, institutions, rules, and principles, as this thesis demonstrates.

In another somewhat idiosyncratic description of the Internet, Anthony Rutkowski identifies what he considers to be the "essential properties of the Internet":

It is an open global self-organizing agglomeration of networked computer-based resources under the collective control of the millions of organizations and individuals who make those resources available. The medium routes around control. The users are subject to law, but that is different from control of the Internet. As long as those essential properties remain, there isn't a lot to worry about - except the embrace of intergovernmental organizations!186

The reference to "collective control" contradicts the claim that "the medium routes around control."187 In any event, Rutkowski at least hints at the controlling effects of the actions of autonomous network operators, in contrast to Mueller's implicit assertion that internetworking just "happens," that it produces a process, instead of a process producing it. Rutkowski's conception focuses on the positive contributions which the "millions of organizations and individuals who make those resources available" make to the "collective control" of the Internet, while Mueller's conception is more negative. His "millions of autonomous corporations, organizations, governments, and individuals" merely take part in a process of internetworking. They just do what everybody else is doing. They do not control anything beyond their local networks, and to the extent that they do anything in common, the effect is insignificant.

Given that the views of Mueller and Rutkowski are otherwise usually quite similar, it is interesting that they can describe the same phenomenon, using almost the same language, in ways which completely differ based on the profile given to the cooperative

186 A.M. Rutkowski, "Comment" (1999).

187 Rutkowski is presumably alluding to a remark attributed to John Gilmore of the Electronic Frontier Foundation to the effect that: "The Net interprets censorship as damage and routes around it" (quoted in H. Rheingold, The Virtual Community: Homesteading on the Electronic Frontier (Reading, MA: Addison-Wesley, 1993)). This epithet has since taken on a life of its own and variations on it appear in many works of Internet-related literature. Aside from being an inaccurate anthropomorphism, Gilmore's phrase appears to have been extended by Rutkowski for the point that the Internet not only routes around censorship, but around any kind of control at all. nature of Internet communication. Rutkowski's characterization supports the recognition of that cooperative activity as a form of public governance of the Internet, while Mueller's denies the existence of governance altogether. A further reading of this claim identifies the same assumption which underlies the claims of many other writers: the Internet's infrastructure is just there, and needs no positive collective action to support it.

For Mueller, internetworking is a purely private, organic matter:

Let's begin by understanding and applying the distinction between an "institutional framework" and "governance." The difference is subtle but important. The former supplies impersonal rules within which individuals and organizations can transact independently and autonomously. The latter tells individuals and organizations how to act; it regulates their behavior. The Internet needs a stable, defined institutional framework. It does not need governance.188

The technical infrastructure is benign, even unimportant. It may safely be taken for granted. "Institutional frameworks" are neutral and value-less. However, the view that the Internet does not need governance, but rather only an institutional framework, renders invisible the process by which that institutional framework itself is created and maintained. Mueller suggests that all that is needed is a system of property rights. Others might recommend a system of individual rights, while still others might counter that "concerns about 'rights' and 'ownership' are inappropriate," and that "it is appropriate to be concerned about 'responsibilities' and 'service' to the community."

The choice of institutional framework is never neutral. Mueller describes an institutional framework as "impersonal rules within which individuals and organizations can transact independently and autonomously." The questions of who makes and enforces the rules are invisible. The existing structures of Internet governance do not "tell individuals and organizations how to act" nor "regulate their behavior." But they are extremely influential and, because of the scale which the Internet has attained, they are continually accreting more and more influence. The search for a stable, defined institutional framework for the Internet's infrastructure has been underway for several years now, and is, in fact, about Internet governance, not just sterile "coordination." To suggest the overlay of a system of

188 Mueller, "The 'Governance' Debacle," supra note 185. property rights is, paradoxically, to choose an approach which contradicts the non-commercial and non-proprietary approach which has guided the Internet to date.

Other writers take a broader view of coordinating functions and are more comfortable using the word governance. David Post, best-known for his sanguine view of self-organizing legal mechanisms in cyberspace, recently characterized ICANN's responsibilities thus:

It is all well and good to say that this new institution will not be engaged in Internet governance - but words will not make it so. Any entity exercising control over the DNS will be subject to immense pressure to do more than mere 'technical management' because, bizarre as it may seem at first glance, the root server, and the various domain servers to which it points, constitute the very heart of the Internet, the Archimedean point on which this vast global network balances.189

Mueller might say that this is precisely what we do not need, as opposed to denying that it constitutes governance. However, this view demonstrates the other end of the spectrum in terms of the perceived significance of "coordination."

Post's point about pressure on ICANN to make decisions which go beyond mere technical coordination has already been borne out. ICANN reportedly rejected an arrangement proposed by NSI in March 1999 whereby NSI's control over the .com TLD would have been strengthened, in exchange for a cash payment, essentially a pay-off to keep ICANN's nose out of NSI's management of .com. A July 23, 1999 Newsbytes story explains:

The cash-strapped Internet Corporation for Assigned Names and Numbers (ICANN) several months ago rejected a lucrative offer from Network Solutions Inc. that would have bolstered and legitimized the Internet registrar's control over the global .com domain name registry, Newsbytes has learned. "They (Network Solutions) wanted to sign a contract with the Department of Commerce that would have given them more control

189 D. Post, "Governing Cyberspace: 'Where is James Madison when you need him?'" (posted 6 June 1999), <http://www.icannwatch.org/archives/essays/930604982.shtml>. ICANNWatch is a watchdog-type site edited by Professors Michael Froomkin of the University of Miami (Florida) School of Law, David Farber of the University of Pennsylvania Departments of Computer Science and Electrical Engineering, and David Post of Temple University Law School. Its main page is at <http://www.icannwatch.org>. and financial interest in .com than we thought was appropriate," ICANN interim Chair Esther Dyson today told Newsbytes. Network Solutions would not confirm the offer to Newsbytes, but spokesperson Brian O'Shaughnessy said, "Throughout the course of these negotiations, we put several proposals on the table. ICANN has no right to reject or comment on any of those private negotiations." Although Dyson would not disclose the sum that she claimed Network Solutions proposed to pay ICANN, she confirms the figure was a substantial one. ICANN and the Commerce Department rejected the offer last March, Dyson said.190

ICANN's interim executive clearly decided that .com should be subject to the same rules as all other TLDs, such that domain policy would be consistent across registries and registrars. A self-governance approach might have suggested that NSI should have been given .com to coordinate. If the matter were merely about "technical coordination," no one would be better equipped to coordinate .com than NSI. There is clearly much more at stake. Control over the .com registry is control over the most important zone of the commercial Internet's architecture. It is something akin to control over the entire VHF television broadcasting band. There is always UHF, but does it matter?

During the currency of the Cooperative Agreement, the United States Department of Commerce at least theoretically has the power to restrain NSI's control over .com, but after its expiry on September 30, 2000, only ICANN will (in theory) be able to tell NSI how it can operate .com. NSI has refused to recognize ICANN and is forging ahead with ever more brash leveraging of the lucrative .com franchise and other resources the Internet community has always considered public. To suggest that the .com domain has never been governed, but rather only technically coordinated, ignores the significance of the Cooperative Agreement and NSI's relationship with the NSF and NTIA. ICANN has already learned that without the NTIA it is virtually powerless, and even with its help, may not be able to tame NSI. A top-level domain controlled by a private entity with no public restraints at all and insulated from meaningful competition may have dramatically different characteristics from what users expect from the Internet. NSI's March 1999 changes to the InterNIC could pale by comparison.

190 D. McGuire, "ICANN Nixed Deal To Bolster NSI Control Of Registry" Newsbytes (23 July 1999). What is at stake is far more than technical coordination; it is the power to make network policy, to make the rules which define the network's architecture and capabilities. Forceful expression of this idea can be found in the work of Lawrence Lessig. In a talk given at the 1998 annual conference of Computer Professionals for Social Responsibility (CPSR), Lessig said:

The net is governed already. It is governed in places by people - by people who set the protocols of the space, people who enforce rules on the space; and it is governed everywhere by code - by the software and hardware that sets the architecture of the place, and sets the terms on which access to the space is granted. These governors - these rulers both human and code - impose values on the space. Their actions reflect the values of the space. Their rules are expressed primarily through code, but their rules are expressed also as rules. They give the space the character it has.191

Whether we call it coordination or governance, we are talking about the forces and institutions which influence the way the Internet works. This is not about control the way we usually think about control, but rather a compound of many different kinds of influence. It is a goal of this thesis to understand more about the principles which we have come to associate with the Internet so that we may be ready to apply them, if necessary, to preserve and extend the Internet's revolutionary characteristics, in the face of the commercialization of a previously non-commercial environment.

The vast majority of scholarly writing about the Internet relates to content or transactions on it, not the Internet itself. Much of this kind of writing is imbued with exuberant declarations that the Internet's nature is such that it does not need "traditional"

191 Supra note 171. regulation because it and its communities have their own ways of governing themselves.192 Another large body of writing relates to sovereignty on the Internet with respect to civil jurisdiction.193 Still other writing relates to objectionable content and gaming as matters requiring Internet governance. Clearly there are a number of interpretations of what "Internet governance" means, but here it is used to describe the people, institutions, rules, and principles which guide the Internet's infrastructure. Thus far, very little writing has addressed the infrastructure on its own terms.

Many authors seem to fall under the spell of the unprecedented diversity of Internet content and neglect to consider what facilitates it, and who makes the rules regarding that underlying infrastructure. An example is the tendency of some writers to describe the Internet as "chaotic." In his early 1995 article, "Controlling the Uncontrollable: Regulating the Internet," Dov Wisebrod wrote:

The very essence of the Internet is anarchy, a diametrical opposite of authority. To say that the two do not intermix well is to state the obvious, but what is perhaps not readily apparent is that the anarchy of the Internet is a powerful, cooperative, functional force that cannot be subjected to centralized control.194 (emphasis in original)

192 Examples include D.G. Post, "Anarchy, State, and the Internet: An Essay on Law-Making in Cyberspace" [1995] Journal of Online Law 3; H.H. Perritt, Jr., "Cyberspace Self-Government: Town Hall Democracy or Rediscovered Royalism?" (1997) 12 Berkeley Technology Law Journal 2; L.J. Gibbons, "No Regulation, Government Regulation, or Self-Regulation: Social Enforcement or Social Contracting for Governance in Cyberspace" (1997) 6 Cornell Journal of Law & Public Policy 475; S.S. Salbu, "Who Should Govern the Internet: Monitoring and Supporting a New Frontier" (1998) 11 Harvard Journal of Law & Technology 429; D.G. Post, "The Unsettled Paradox: The Internet, the State, and the Consent of the Governed" (1998) 5 Indiana Journal of Global Legal Studies 521; D.R. Johnson & D.G. Post, "The New Civic Virtue of the Net: A Complex Systems Model for the Governance of Cyberspace".

193 See D.R. Johnson & D.G. Post, "Law and Borders" (1996) 48 Stanford Law Review 1367.

194 D. Wisebrod, "Controlling the Uncontrollable: Regulating the Internet" (1995) 4 Media & Communications Law Review 331 at 332-333. Not all writers are comfortable with the word "chaotic" or such deterministic imagery. Mindful of the unique authority of bodies like the IETF and individuals like Jon Postel, Johnson and Post prefer a more moderate form of this view:

The decentralized, emergent form of collective action involves voluntary acceptance of standards (or, as the Internet Engineering Task Force motto would have it: 'rough consensus and working code.') Despite the fears of those who cannot conceive of order arising from anything other than top-down, hierarchical control, this is not a process that necessarily leads to chaos and anarchy. To the contrary, the technical protocols of the Internet have in effect created a complex adaptive system that produces a type of order that does not rely on lawyers, court decisions, statutes, or votes.195 (emphasis in original)

Corroboration of this characterization is provided in the words of Paul Vixie of the Internet Software Consortium, in this excerpt from a 1996 posting to the "newdom" discussion list:

Chaos is inimical to freedom. The order you see me fighting for is the status quo, because I believe that the true fight for freedom lies in content rather than naming. Without coherent names we won't be able to locate the content I'm so worried about. Coherency is not free and it's never an accident.196 (emphasis added)

Vixie succinctly states the argument for a unified name and number space, and at the same time demonstrates the folly of taking infrastructure for granted. The Internet's infrastructure, as we have seen in this thesis, is far from chaotic or anarchical.

Many authors assume that all Internet users will always be able to access all content currently available on the Internet, and further, that it will only get easier and easier for individuals to put content on the Web. A number of the trends discussed in Section VI suggest that such optimism may be misplaced. If these aspects of the Internet's open character should be diminished, then the force of those arguments would also be diminished. This is what Paul Vixie means when he says, "I believe the true fight for freedom lies in

195 D.R. Johnson & D.G. Post, "And How Shall the Net Be Governed?: A Meditation on the Relative Virtues of Decentralized, Emergent Law," in Kahin & Keller, supra note 91, 62 at 68.

196 P.A. Vixie, "Re: bogosity," message posted to newdom (New Domains) discussion list ([email protected]) (28 October 1996). content rather than naming." For present purposes, however, it is sufficient to be aware of the need to distinguish between content-oriented and infrastructure-oriented arguments.

E. IMPLICIT AND EXPLICIT NORMS

Closely related to the tendency to focus on content issues and ignore infrastructure is the technological determinism which runs through much of the scholarly and popular writing about the Internet. It is crucial that we distinguish between the technical characteristics which make the Internet what it is, and the governance forces which keep it that way. Many authors describe the Internet as wide-open and frontier-like, and declare that it cannot be controlled. Wisebrod again provides an example:

The Internet is truly unique, and if its special characteristics are not fully understood by government, attempts to regulate it will fail. It must be remembered, as Ithiel de Sola Pool reminded us, that 'freedom is also a policy.' Just as there is a choice to be made in regulating the telecommunications industry between policies of competition and regulation, there exists a choice with respect to the Internet between policies of freedom and regulation. A policy of freedom will allow the Internet to evolve naturally and beneficially, becoming the paradoxical sum of its users' involvement: a self-controlling, yet uncontrollable, functional anarchy.197

Wisebrod characterizes regulation as the antithesis of the freedom of the Internet. He acknowledges that "freedom is also a policy," but only as a matter of governmental policy. The implication is that since freedom rules on the Internet it is not and cannot be influenced by any particular policy of its own. A second passage reinforces this view:

Thus, while the existence of a normative basis for regulating the Internet would be an interesting subject for debate, the exercise has little practical application. Due to the nature of the Internet, including its history, culture, amorphousness, and universality, it is quite impossible to effectively regulate. As author Bruce Sterling put it: "The Internet is a rare example of a true, modern, functional, anarchy."198

197 Wisebrod, supra note 194 at 363.

198 Ibid. at 332-333. The nature of the Internet, its alleged anarchy, forecloses the possibility of regulation, or presumably, any kind of governance on any normative basis. This thesis has already argued that the Internet has been very much "governed" by identifiable people, institutions, rules, and principles, and that this governance has been informed by particular implicit norms. The more interesting point, though, is that Wisebrod uses the word "universality" to describe the Internet's nature. By conflating a policy principle like "universality" with technical characteristics, he implies that since the Internet cannot be controlled, it cannot be changed, and therefore its universality cannot and will not change.

The risk presented by such technologically determinist thinking, which pervades much of the popular and even academic writing on the subject, is that if the Internet did ever change, valued features like openness and accessibility could be lost and there would be no normative basis for asserting that they should be retained. If the technology changes, for instance, if proprietary protocols replace open protocols, or if the Internet market changes, such as it might if more economically rational peering and settlement arrangements lead to aggregation of ISPs, then openness and universality could be vulnerable to very real threats. Perhaps technological determinists would shrug and say "that's the way it goes," but it is this study's contention that there are certain characteristics which we treasure about the Internet, implicitly or explicitly, and which we think define the Internet. We do not think of these characteristics as policies because they have just always been there; they are implicit norms which underlie both users' and network operators' ideas of what the Internet is. However, we need to identify those characteristics and recognize that they are the products of the Internet's unique governance, not immutable features of its technology.

To a great extent, the ongoing process of evolving the legal and institutional framework of the Internet can be viewed as a process of translating implicit norms into explicit norms. Internet norm theorists acknowledge the governance forces which have made the Internet what it is today, but are still often led by technological determinism to conclude that no other forms of governance could ever effectively guide the Internet. The following passage demonstrates the optimistic view of the continuing suitability of norms as the primary embodiment of Internet governance:

... Internet governance will be most successful when it encompasses the unique norms and customs that have evolved within the Internet, and expresses itself in familiar forms that are practical, understandable, and predictable for individual and commercial entities. New management regimes that embody well-developed Internet customs - such as open participation, consensus-building, and grassroots organization - will foster a common code of behavior for an increasingly diverse Internet, provide individuals with a stronger voice in a rapidly commercializing Internet, and create an environment that experienced Internet actors, who play key roles in the formation of new governance structures, know and trust.199

The desire to hold on to the norms of the non-commercial era despite the very different environment of the commercial era is clear. There is almost universal agreement on the point that the norms of the past are good. The disagreement lies in how to carry them into the future and, indeed, whether any positive action is necessary to do so. The faith in norms was expressed first by Johnson & Post:

We will argue that the same decentralized decision-making process that created the Internet at a technical level may be able to create a workable, and, indeed, empowering and just form of order even at the highest level of the protocol stack - the realm of rules applicable to the collective social evaluation and governance of human behavior.200

By fusing the Internet's technical characteristics with the norms which have guided its development, an image is created of a governance system with the same virtues as the technology - decentralized, open, even democratic, as many have optimistically claimed.201 Refusing to think about the Internet's technical features and the forces underlying them separately gives the impression that only the stylized traditional forms of Internet governance can ever rule. By focusing on the past, and the virtuous technical characteristics and governance patterns of that era, we risk being taken off guard by the Internet's future because

of an inability to conceive of Internet policy on its own terms.

What is happening in the reform of the domain name system is very much a process of translating implicit norms into policy. The Internet technical community has

199 "The Domain Name System: A Case Study of the Significance of Norms to Internet Governance," in "Developments in the Law - The Law of Cyberspace" (1999) 112 Harvard Law Review 1574 at 1658.

200 Johnson & Post, supra note 195 at 68.

201 For a contrary view, see R.W. McChesney, "The Internet and U.S. Communication Policy-Making in Historical and Critical Perspective" (1996) 46 Journal of Communication 98. proven to be very uncomfortable with this idea, as demonstrated by its hostility (or at least

that of its most vocal elements) towards ICANN.202 To them, 'new-IANA' or ICANN lacks the reassuring and neutral presence of Jon Postel. Postel represented the implicit norms of the Internet's technical infrastructure in a very concrete way. With him as IANA, there was no need to codify these norms. Over his 28-year career in internetworking, Postel gained an enormous level of trust and respect among the Internet technical community. Had he not passed away in October 1998, the course of Internet governance would likely be very different today. Without his presence, though, his corporate successors have had to try to earn the trust of this peculiar community. While a new-IANA with Postel involved (not even necessarily in charge) could probably have continued to operate in a relatively informal way, ICANN has been forced to codify its powers and reveal publicly how it makes decisions (of even the most mundane sort). Its critics have demanded of ICANN a level of transparency and accountability which was never required of Postel's IANA. This process of writing down Postel's power, of codifying the implicit norms which guided his actions, has varied from difficult to outright vicious.

Nobody knows more about this process as a matter of law than Joe Sims, corporate lawyer to Jon Postel and now ICANN. Sims' job was to translate some of the Internet's implicit norms into law, in the form of ICANN's articles of incorporation and by-laws. In a fascinating article in an American financial services industry periodical called E-Money, Sims offers his characterization of this difficult process:

[ICANN] sees its job as maintaining the operational stability of the Internet - in short, of continuing to do as good a job as Postel did, but in a new environment with continuing public scrutiny and more formal policies and procedures. If it meets that goal, ICANN will truly have something to be proud of, and coincidentally, will ensure the continued existence of the platform necessary for the growth of e-commerce and society that most now assume will occur. ICANN is in some ways an experiment to see if the private sector can provide the kind of infrastructure management in cyberspace that has always been the province of governments in the physical world.

202 For example, influential networking industry analyst Gordon Cook signed all of his e-mail messages in summer, 1999 with a signature block containing the statement: "The Only Good ICANN is a Dead ICANN." See The Cook Report, .

It will be interesting to see if it works. For e-commerce to prosper, we should all hope that it does.203

Sims' interest "to see if it works" betrays a lawyer's suspicion that an infrastructure as important as the Internet, with such diverse stakeholders, and so integral to "e-commerce and society" all over the world, needs to be governed in a more public manner. The implication is that the Internet's existing governance structures have been managing this infrastructure in the public interest so far, and that ICANN will attempt to keep doing so. That, of course, will be very difficult, and it will be made no easier by the fact that ICANN will encounter strong resistance from the Internet community every time it attempts to translate the Internet's implicit norms into explicit network policy. A principled basis for the formation of network policy, however, whether it be implemented by ICANN, its replacement, or any other body which contributes to the governance of the global public network, is therefore necessary. These principles are best demonstrated by the extent to which the Internet's technical characteristics could be compromised by patterns of possible change in the Internet's technical and physical infrastructures.

VI.

A.

That the Internet will continue to evolve is uncontroversial. Beyond that certainty, it is notoriously difficult to predict the development of new technologies. There are complex and interrelated social, economic, and political forces involved, not to mention the mercurial tastes of consumers, which constantly confound marketing gurus. The evolution of the Internet market, speaking broadly, is particularly difficult to predict because it is so new, and seems to contradict traditional norms of retail economics. Internet businesses of all kinds are still struggling to identify the business models which "work." In the summer of 1999, the newest tacks were giving away free computers with pre-paid Internet access, and giving away free access, subject to an unavoidable barrage of

203 J. Sims, "Privatizing the Domain Name System: The Formation of the Internet Corporation" E-Money, Vol. 1, No. 9 (January 1999) 3 at 8. advertisements.204 The dominant Web browser suites are also given away free, contradicting the high prices for other software of similar complexity (even from the same vendors). Even the great Amazon.com, usually held up as one of the most successful Internet businesses, was said in spring, 1999, to still be losing money with every book it shipped.205

This section will suggest three general directions in which the Internet might evolve. They are not mutually exclusive and, in fact, what is more likely is that elements of each will be observed with respect to different aspects of the Internet as it develops. We will first identify the general forces of change at play, before expanding on their possible expression in changes to the three general areas of Internet infrastructure already identified.

What a contrast the present-day Internet, with its millions of corporate e-commerce sites, cottage-industry business sites, and advertising-supported content services, presents to the NSFNET AUP of only seven years ago, excerpted below:

GENERAL PRINCIPLE: 1) NSFNET Backbone services are provided to support open research and education in and among U.S. research and instructional institutions, plus research arms of for-profit firms when engaged in open scholarly communications and research. Use for other purposes is not acceptable.

SPECIFICALLY ACCEPTABLE USES: 7) Announcements of new products or services for use in research or instruction, but not advertising of any kind.

UNACCEPTABLE USES:

10) Use for for-profit activities (consulting for pay, sales or administration of campus stores, sales of tickets to sports events, and so on) or use by for-profit institutions unless covered by the General Principle or as a specifically acceptable use.

204 See S. Miles, "Microworkz shake-up underscores free PC troubles" CNET News.com (26 August 1999), <http://news.cnet.com/news/0-1006-200-346506.html> and A. Patrizio, "AltaVista Joins Free ISP Brigade" Wired News (12 August 1999), <http://www.wired.com/news/news/business/story/1251.html>.

205 P. de Jonge, "Riding the Wild, Perilous Waters of Amazon.com" The New York Times Magazine (14 March 1999) 36. (11) Extensive use for private or personal business. This statement applies to use of the NSFNET Backbone only. NSF expects that connecting networks will formulate their own use policies. The NSF Division of Networking and Communications Research and Infrastructure will resolve any questions about this Policy or its interpretation.206

Connecting networks did indeed establish their own policies, but the blanket restriction on commercial traffic over the NSFNET backbone effectively limited the scope of commercial activities to those networks themselves - not a particularly valuable proposition. However, the invention of the Mosaic graphical Web browser in 1993, and the desire of American businesses of all kinds to enjoy the on-line sales numbers which Internet equipment vendors like Cisco were already enjoying, rendered the NSFNET AUP effectively meaningless. By the time the NSFNET was privatized in 1995, many private networks, led by UUNet Technologies, Inc. and Performance Systems International (PSINet),207 had already interconnected by other means, imposing no traffic restrictions on their customers. From that point on, the Internet's physical infrastructure has been an almost entirely commercial proposition, although the contributions of university networks, still some of the largest single networks in the Internet, should not be underestimated.

With each cohort of university graduates in Canada and the United States in the mid-1990s came a new pool of customers for ISPs, as young people yearned for the e-mail accounts and Web access they enjoyed as students. In what has aptly been described as a "digital tornado,"208 the availability of consumer access drove the quantity of services on the Web, which drove demand for cheaper computers, faster modems, and faster Net access, which has further driven both the quantity and quality of Web sites up and the prices of

206 NSFNET Backbone Service Acceptable Use Policy (1992 version), excerpted in Randall, supra note 3 at 248-249.

207 It is interesting to note that UUNet Technologies (now part of MCI WorldCom), Performance Systems International (PSI), and BBN Internetworking (the 'BBN' being Cambridge, Massachusetts consulting engineering firm Bolt Beranek & Newman, which designed the hardware for the original ARPANET nodes) (now GTE Internetworking) were all originally U.S. government contractors or spin-offs from U.S. government-funded programs. They are now among the dominant backbone providers in the U.S. See Kahin & McConnell, supra note 74 at note 37. associated hardware and software down. The explosion of the consumer Internet market has led to a frenzy of investment in e-companies and i-companies, from which profits, or even revenues, do not appear to be required by investors.

The stratospheric valuations of initial public offerings and aftermarket prices for "Internet company" shares have only been exceeded by the value of mergers and all-stock acquisitions among the behemoths of the sector. Notable examples are AOL's acquisition of

Netscape for USD $4.2 billion in stock209 in February, 1999 and Yahoo!'s January, 1999 acquisition of personal Web page host GeoCities, for USD $3.56 billion in stock.210 Massive investments are being made in anything and everything vaguely related to the Internet, in hopes of finding the brands that will dominate the multi-trillion-dollar e-commerce market which many predict.211 The race is on to get a slice of the Internet pie, and hopefully be among the last still standing when the others, which never actually turned a profit, fail. There is a genuine frenzy surrounding the Internet. What a contrast to the slow, awkward, text-only network of only ten years ago, which was primarily the province of university researchers and computer scientists! Forecasters already speak of the coming saturation of the American market and how the next growth opportunities will be in Latin America and Asia.

How might the continued commercialization of the Internet play out? What follow are three possible scenarios. They are important to this study because while the Internet's infrastructure previously determined the nature of the services available on it, we are in the midst of a shift whereby the services and markets which the Internet enables will likely begin to drive the development of its infrastructure.

208 K. Werbach, "Digital Tornado: The Internet and Telecommunications Policy" U.S. Federal Communications Commission Office of Plans and Policy Working Paper Series No. 29 (March 1997) at 4, <http://www.fcc.gov/Bureaus/OPP/working_papers/oppwp29.pdf>. Werbach's excellent paper is the seminal piece on the relationship between the Internet and the public switched telephone network (PSTN) in the United States. Werbach is currently Managing Editor of Release 1.0, a software industry newsletter published by ICANN interim Chairman, Esther Dyson.

209 "AOL Names Andreessen CTO" Wired News (18 February 1999).

210 C. Bicknell, "Yahoo Gobbles Up GeoCities" Wired News (28 January 1999).

211 See, for example, H. Scoffield, "E-commerce expected to explode, OECD says" The Globe and Mail (29 September 1998) B6.

1. Market Evolution

The first scenario can be thought of as a continuation of the Internet substantially as we know it today. That is, the physical and technical infrastructures which currently define the Internet would continue to be embraced as the basis of the network, worldwide. The unified global public Internet of today, although comprised of diverse physical elements, would be built upon and expanded. This would require a commitment by equipment vendors to offer products which continually enhance the performance of the public Internet. This, in turn, would require collaborative research and development on improving the end-to-end performance of the Internet itself, as opposed to focusing on products which make use of proprietary CPE or high-capacity networks not part of the public network.

These issues will, of course, be driven by consumer demand, and at present that demand appears to be for integrated "IP solutions" which can replace existing stand-alone voice (telephony of all ranges) and data services (fax, internal e-mail and file sharing, Internet e-mail, Internet access, private network connectivity). For the public Internet to be used to provide industrial-grade services such as these to business customers, it would have to be upgraded dramatically. Specifically, the current single class of "best effort" delivery will have to be replaced by multiple service levels suitable to services which are more or less latency-sensitive (negatively affected by an inconsistent connection). What is more likely, in the short term, at least, is that equipment and services employing IP will replace the existing hardware, but these new integrated systems will connect to the existing telephone network, not the public Internet, through gateways.
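The difference between a single "best effort" class and multiple service levels can be sketched in a few lines of Python. This is purely illustrative: the class names and priority values below are invented for the example and do not correspond to any actual QoS standard.

```python
import heapq

# Illustrative service classes: lower number = higher priority.
# Under a single "best effort" class, every packet would share one queue
# and be served strictly in arrival order, regardless of latency needs.
PRIORITY = {"voice": 0, "video": 1, "best-effort": 2}

def drain(queue):
    """Pop queued packets in priority order, returning the classes served."""
    served = []
    while queue:
        _, seq, service_class = heapq.heappop(queue)
        served.append(service_class)
    return served

# Packets arrive interleaved; seq preserves arrival order within a class.
arrivals = ["best-effort", "voice", "best-effort", "video", "voice"]
queue = []
for seq, service_class in enumerate(arrivals):
    heapq.heappush(queue, (PRIORITY[service_class], seq, service_class))

# Latency-sensitive traffic (voice, then video) is served before bulk data.
print(drain(queue))  # ['voice', 'voice', 'video', 'best-effort', 'best-effort']
```

Under the single-class model, the output would simply reproduce the arrival order; the priority field is what a multi-level service model adds.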

The open scenario thus assumes that either: (a) premium business voice and data communications services will continue to be provided over existing telecommunications networks (that is, not employing the Internet's specific technical infrastructure), while employing advanced IP technology at the local level, or (b) such services will be provided by means of both the Internet's technical and physical infrastructures, that is, "over the Internet." At the consumer level, the open scenario assumes continued strong competition in the ISP market, both for standard analog dial-up and high-speed Digital Subscriber Line (DSL) or cable services. Here there may be a significant difference in the shape of the future ISP industry in Canada and the United States.

In Canada both telephone companies and cable companies must provide access to their networks (and customers) to competing ISPs on terms no less favorable than those on which they provide such access to themselves.212 The Canadian cable television and Internet access industries have been jointly testing means of interconnecting competing ISPs with cable company networks.213 American cable companies, on the other hand, are not required to give such third party access, although this situation could change. The open scenario thus also assumes that both the standard and high-speed segments of the Internet access market will be as strongly contested as the standard access market is today. While we will return to this issue, we will now sketch the open scenario's expression in the three areas of Internet infrastructure with which we are already familiar.

2. Identifiers and Routing

The most important issue in the identifier and routing area is the integrity of the name and number spaces. The Internet, like its predecessor networks, has benefited from having one unified identifier system, which supports one unified routing structure. Alternative root systems do exist, which incorporate the legacy root (also referred to as the IANA root) domains plus new, non-IANA-recognized domains, the most well-known of which being .web.214 However, they remain obscure and are not supported by the vast majority of commercial and institutional networks. The open scenario assumes that the name and number spaces will remain unified, with the same degree of interoperability between zones as at present. More importantly, this means that software and hardware developers

212 See Regulation Under The Telecommunications Act Of Certain Telecommunications Services Offered By "Broadcast Carriers", Telecom Decision CRTC 98-9, July 9, 1998, <http://www.crtc.gc.ca/eng/telecom/decision/1998/d989-0.txt> and Regulation Under The Telecommunications Act Of Cable Carriers' Access Services, Telecom Decision CRTC 99-8, July 6, 1999, <http://www.crtc.gc.ca/eng/telecom/decision/1999/d998-1.txt>.

213 See Canadian Cable Television Association (CCTA), "Submission to CRTC In Response to Telecom Decision 98-9, Technical Report on the Status of Implementation of Access for Internet Service Providers" (8 February 1999).

214 Well-known being a relative term in this context, given that an infinitesimally small percentage of Internet users is probably aware that there is such a thing as a .web domain. will view the current name and number spaces as essential to the Internet, and not commit to proprietary alternatives which might offer greater functionality or security at the expense of diminished routability of packets.

The open scenario also assumes a smooth implementation of IPv6, the new version of the Internet Protocol.215 While the transition theoretically began in summer, 1999, it is estimated that it will take from five to twenty-five years for IPv6 to completely replace the existing IPv4 addresses throughout the public Internet. IPv6 offers a vastly larger number of potential addresses and supports the transmission of much more detailed identification information between hosts, which permits more reliable authentication and addressability of and between networks. While the first blocks of IPv6 numbers have been assigned to the RIRs,216 implementations of the new protocol are still found mainly in specialized research networks, particularly in Japan.217
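The scale of the expansion from IPv4 to IPv6 can be checked with Python's standard ipaddress module. This is only a sketch of the address-space arithmetic; the specific addresses used are reserved documentation examples, not real hosts.

```python
import ipaddress

# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_space = 2 ** 32   # about 4.3 billion addresses
ipv6_space = 2 ** 128  # about 3.4 x 10^38 addresses

# The ratio itself is astronomically large: 2**96 IPv6 addresses
# for every single IPv4 address.
print(ipv6_space // ipv4_space == 2 ** 96)  # True

# Both versions are handled by the same standard-library API.
v4 = ipaddress.ip_address("192.0.2.1")    # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # documentation-range IPv6 address
print(v4.version, v6.version)  # 4 6

# A registry "block" is just a prefix: a single /32 IPv6 allocation
# alone contains 2**96 addresses.
alloc = ipaddress.ip_network("2001:db8::/32")
print(alloc.num_addresses == 2 ** 96)  # True
```

The point of the arithmetic is that even very large allocations to the regional registries consume a negligible fraction of the IPv6 space, which is why exhaustion pressure disappears under the new protocol.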

3. Protocols and Standards

The continued openness of the Internet is dependent to a great degree on the commitment of the Internet industry to open, non-proprietary protocols and standards. The non-proprietary TCP/IP standard is becoming the common protocol for the next generation of telecommunications hardware at all levels. TCP/IP and its associated protocols have been collaboratively evolved in the IETF environment since the Internet's non-commercial era, right up to the present. In order for these standards to remain not only dominant but also "open", the IETF will have to continue to flourish. As well, the IETF will have to be able to maintain its moral authority over Internet standards in the face of proprietary challenges from vendors who seek to short-circuit the standards process either to gain market advantage or simply to get a product to market sooner. At the commercial level, the possibility also exists for competing QoS implementations, an outcome which the IETF appears to be successfully keeping at bay at present. The IETF's moral authority is dependent on its credibility among the Internet community, a good portion of which, it must be remembered, is now made up of

215 See .

216 Supra note 15.

217 See, for instance, the KAME Project at . Internet hardware and software vendors, not curious individuals. Those vendors must remain convinced that open, collaborative standards are in their collective and individual interests for the IETF to maintain its premier position in the Internet standards world.

4. Peering and Interconnection

Closely related to the assumption that the retail and commercial Internet access markets would remain highly competitive under the open scenario is the requirement that ISPs of all sizes continue to be able to arrange for the transiting of their traffic "upstream" and amongst each other on reasonable terms. As suggested above, independent ISPs in Canada and the United States currently appear to be comfortable with the current backbone market and the terms on which they are offered transiting services, and that state of affairs is assumed to continue in the open scenario.

1. Market Evolution

There are many dimensions on which the Internet of today is vulnerable to fragmentation. While the open scenario depends upon continued commitment to open and non-proprietary elements, the fragmented Internet scenario is premised on erosion of that commitment. This would likely be caused by the introduction of new services using technical and physical elements other than those which define the present Internet. While QoS would be implemented in the open scenario in such a way as to preserve universal interoperability and interconnectivity, under a fragmented scenario these two principles might be compromised to some degree in the name of better quality end-to-end service. QoS, and the Resource Reservation Protocol (RSVP) in particular, are intended to create a "virtual circuit" which transmits special grades of traffic throughout the network from sender to receiver(s). In the present Internet, traffic is forced to fit into "one size fits all" transmission patterns which offer "best effort" service only. The result is a relatively unreliable, unpredictable service (compared to existing telecommunications services) which relies on the facilities of many unrelated, unknown, and not directly compensated networks to route and transit packets to their final destination. Under the fragmented scenario, due to the introduction of a substantial degree of end-to-end QoS over the public Internet, and the establishment of some form of financial settlements system among ISPs, customers would begin to be charged for specific types and classes of service provided over the public Internet. As a general matter, a fragmented Internet would display two or more distinct bands of service quality, which might be roughly classified as consumer and commercial. This would be a significant change from the standard, flat-rated dial-up access model which currently dominates the consumer Internet market in Canada and the United States.
The simplicity of TCP/IP is one of the enduring achievements of the Internet, in both the non-commercial and commercial eras, yet it is now, ironically, thought of as holding the network back.

2. Identifiers and Routing

An outcome which has been mooted in the Internet community since the beginning of the DNS reform process is "fragmentation of the root." This refers to the existence of root zone structures (or "trees") other than the IANA or legacy root. This could happen as a consequence of at least two courses of events. First, dissatisfaction with the IANA root (meaning dissatisfaction with its operation by ICANN or its replacement) could lead those who wish to see new TLDs added to the root zone to attempt to popularize alternative root zones, and the alternative name servers which would be required for networks to resolve such names. Second, the introduction of a new class or classes of premium Internet services could necessitate new TLDs or new types of domains within established domains (e.g. .com). These new services could simply be premium transmission services, or some combination of premium transmission services (e.g. QoS) and premium content or transactional services, such as are currently available on private Electronic Data Interchange (EDI) networks.218 In any event, both would be provided over the public Internet, but proprietary or alternative identifiers would be used, requiring either integration of those new identifiers into the existing routing system, or the introduction of a new routing system to serve the new zones. This model might be described as multiple "inner-nets" within the public Internet. The introduction of new identifier systems need not necessarily mean a loss of interconnectivity across the Internet, if the new identifiers are comprehensively integrated into the legacy system. This would effectively preserve the unified name and number spaces of today, and maintain the integrity of the routing protocols which keep traffic moving to its proper destination. If they are not, the potential for problems is obvious. If two different root zones contain the same TLD names but they map to two different network locations, confusion would reign.
The situation would be similar to two unrelated individuals on opposite sides of the world having the exact same long-form telephone number. Much like the same 7-digit phone number may be assigned to different users in different area codes, law.utoronto.ca and law.utoronto.com can co-exist on the Internet. Without a unified top level, however, such a structure breaks down.

Another variant of this problem would be a situation in which separate zones (i.e. area codes or TLDs) contained names or numbers which were not reachable from other zones. There need not be a conflict, but rather, numbers within one zone might simply not know of the existence of numbers in the second zone. This undesirable situation is avoided in the telephone system, even between networks with different phone number lengths, by interfaces between the networks maintained by the interconnecting carriers. Similar interfaces could technically be inserted into the Internet, but the question is where? There are no central points where two co-existing name and number spaces could efficiently interface because of the distributed, hierarchical nature of the Internet's routing system. A router either knows where to get authoritative routing information or it doesn't. If a zone is not part of the unified zone used by the largest networks across the Internet, packets simply will not be able to be routed to that zone.
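Both failure modes described above can be sketched with a toy resolver. This is purely illustrative: real DNS resolution involves iterative queries to name servers, and every zone name and server name below is invented for the example.

```python
# Two hypothetical root zones. The legacy root knows nothing about ".web";
# two rival alternative roots each define ".web" but map it to DIFFERENT
# name servers, so the same name resolves differently depending on root.
LEGACY_ROOT = {"com": "a.gtld-servers.example", "ca": "ca-servers.example"}
ALT_ROOT = dict(LEGACY_ROOT, web="alt-web-servers.example")
SECOND_ALT_ROOT = dict(LEGACY_ROOT, web="rival-web-servers.example")

def resolve_tld(name, root):
    """Return the server responsible for a name's TLD, if this root knows it."""
    tld = name.rsplit(".", 1)[-1]
    # None models the unreachable-zone case: the root simply does not
    # know the zone exists, so packets cannot be routed toward it.
    return root.get(tld)

# A network using the legacy root cannot reach ".web" names at all:
print(resolve_tld("example.web", LEGACY_ROOT))      # None
# Two alternative roots send the same name to different servers:
print(resolve_tld("example.web", ALT_ROOT))         # alt-web-servers.example
print(resolve_tld("example.web", SECOND_ALT_ROOT))  # rival-web-servers.example
```

The first case is the unreachable-zone variant; the last two together are the conflicting-TLD variant, in which the answer a user receives depends entirely on which root their network happens to trust.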

A unified name and number space, and indeed technical architecture generally, is thus desirable in network environments, and any derogation from that unity could theoretically result in a decrease in efficient interconnectivity (yet paradoxically an increase in total connectivity due to the introduction of new names and addresses, even if they are only selectively reachable). In the fragmented scenario, the unified name and

218 An example of which is General Electric Information Services (GEIS), which provides secure inter-corporate transaction services by means of virtual private networks (VPNs). See . address space which the Internet currently employs (and, aside from scaling efforts over the years, has employed since the early non-commercial era), would co-exist with alternate zones within the name and number spaces which are not necessarily integrated into the unified identifier and routing systems.

3. Protocols and Standards

Fragmentation of the standards and protocols which are the "glue" of the public Internet would further derogate from the unified Internet of today. The Internet has largely avoided the vicious, drawn-out domestic and international standards battles which have mired the cellular telephone and DSL segments of the telecommunications equipment industry. While strong competition at the application, content, and transactions layers has led to competing proprietary software and enhancements, there has been a remarkable embrace of open standards at the transport and session layers219 among network administrators and equipment vendors. As suggested above, although QoS protocols are being collaboratively developed through the IETF process, the competitive pressure to be first to market may strain the IETF's hold on these standards. It can be expected, as well, that a number of associated protocols and extensions will be developed around the QoS protocols, and if special hardware is required to implement end-to-end QoS, some of those associated protocols may be protected as part of hardware design.220 While the "one size fits all" nature of the current, open model has encouraged common protocols, rivalry among

219 The OSI Reference Model is a commonly-used method of referring to the various elements of communications networks. Each layer specifies particular network functions. The following summary is from 3Com's Web site: Layer 7, the application layer, the highest layer of the model, defines the way applications interact with the network; Layer 6, the presentation layer, includes protocols that are part of the operating system, and defines how information is formatted for display or printing, how data is encrypted, and translation of other character sets; Layer 5, the session layer, coordinates communication between systems, maintaining sessions for as long as needed and performing security, logging, and administrative functions; Layer 4, the transport layer, controls the movement of data between systems, defines protocols for structuring messages, and supervises the validity of transmissions by performing error checking; Layer 3, the network layer, defines protocols for routing data by opening and maintaining a path on the network between systems to ensure that data arrives at the correct destination node; Layer 2, the data-link layer, defines the rules for sending and receiving information from one node to another between systems; Layer 1, the physical layer, governs hardware connections and byte-stream encoding for transmission. It is the only layer that involves a physical transfer of information between network nodes. See: <http://www.3com.com/nsdglossary/osireferencemodel.htm>.

220 Particularly since software which is an integral part of hardware is patentable subject-matter in the United States. equipment vendors and their respective need for differentiation in the extremely lucrative business market may lead to the introduction of proprietary protocols and extensions.

Examples of this potential trend can be found in the Web standards field. Microsoft was granted a patent by the United States Patent and Trademark Office in January, 1999 for a particular application of a Web page construction technology known as "stylesheets." Web developers protested that the patent would give Microsoft control over important Web design standards which were previously thought to be open, public-domain standards, collaboratively developed in the World Wide Web Consortium (W3C), the primary industry group developing Web features.221 Another open-standards group called the Web Standards Project has challenged the patent, but Microsoft insists that it only uses software patents "defensively" in this area "to protect against others who might wrest control over open standards using their own proprietary technologies."222 Microsoft is not known as an open-standards crusader, however, and it is somewhat disingenuous for it to claim that it only patented style sheets to keep the standard open, especially since it has rebuffed pleas to assign the patent to the W3C. Microsoft says that it licenses Web standards freely and without charge, but the fact of its supreme claim to the technology runs counter to the open traditions of Internet standards-making.

It is reasonable to speculate that as investment in Web companies increases, and with it pressure on those companies to earn real profits, the situation described above may arise more frequently. Free and open standards may facilitate interoperability, but it is difficult to make money without a differentiated product, and differentiation often depends on proprietary technology. As we shall see, this trend away from "openness" and towards "closed" models is also observable in the high-speed access and enhanced content markets. In a fragmented scenario, Internet standards bodies like the IETF and W3C would be weakened by assertions of intellectual property rights over standards and protocols, or even bypassed altogether by determined vendors.

It is important to remember that the IETF and W3C are voluntary bodies and have no legal or even contractual influence in their respective fields. They rely on the

221 C. Oakes, "MS Wins Patent for Web Standard" Wired News (4 February 1999).

4. Peering and Interconnection

Another of the important elements of a fragmented Internet infrastructure relates to peering and interconnection. Some of the descriptive elements of this scenario's peering and interconnection changes have already been referred to in Section III.C. A fragmented scenario would see an uneven introduction of financial settlements for traffic, mainly for premium services. The result would be (at minimum) a two-tiered market, with a relatively small number of service providers who have either implemented a common QoS architecture or compete on the basis of differentiated end-to-end premium services arranged through vertical alliances or the like. The lower tier would be made up of a somewhat larger number of standard access providers which cannot offer industrial-grade QoS.

A fragmented scenario also implies that interconnection among networks may suffer, whether due to technical incompatibilities, financial barriers, or marketing arrangements. Separate networks employing proprietary name and number spaces (or zones therein) or proprietary user interfaces may interconnect with standard ISPs via gateways and on commercial terms. However, standard ISPs' customers may eventually decide that the seamless connectivity available through the premium networks is worth higher Internet access fees. For those for whom it is not, there may be standard ISPs offering something less than complete connectivity across the Internet, at a lower rate.

Putting aside the possibility that new premium services employing new technical (and probably physical) infrastructure elements might be introduced, there is potential enough for dramatic change in the ISP-backbone relationship. Papers by Geoff Huston and Rob Frieden both suggest, in effect, that the "other shoe will soon drop," and that the flat-rate, no-settlement model of the current Internet will have to change. For Frieden, the exempt status of ISPs under United States telecommunications law will eventually come up against the fact that ISPs (particularly those which offer Internet telephony services), and the backbone providers with whom they interconnect, are essentially telecommunications service providers, and yet not subject to universal service obligations.223 Huston's concern is that the current uniform best-effort environment, which has no logical basis for inter-provider settlements, results in strong pressure for aggregation among providers.224 He speculates:

Without the adoption of a settlement regime that supports some form of cost distribution among Internet providers there are serious structural problems in supporting a diverse and well-populated provider industry sector. These problems are exacerbated by the additional observation that the Internet transmission and retail markets both admit significant economies of scale of operation. The combination of these two factors leads to the economic conclusion that the Internet market is not a sustainable open competitive market. Under such circumstances there is no natural market outcome other than aggregation of providers, leading to the establishment of monopoly positions in the Internet provider space.225 (emphasis added)

It seems that the standard (or non-high-speed) ISP may be vulnerable regardless of how QoS trends play out. Huston suggests that on the current model ISPs cannot differentiate themselves and therefore must compete in a commodity market given to economies of scale, which favours the largest providers. Alternatively, if QoS implementation requires significant capital investment to upgrade equipment, or access to a network of other providers working together on a particular implementation of QoS, the small or standard ISP may be unable to compete in the premium services segment, and be left to compete for an ever-dwindling pool of customers with low expectations.

Frieden quotes a Wall Street Journal op-ed piece by Hal Varian, perhaps the pre-eminent Internet economist, who further explains the issues which the present model presents:

[A]s the [Internet] industry matures, settlement-free interconnect does not necessarily provide appropriate incentives to the industry players [operating the large, high-bandwidth national backbone networks]. "Why should I help my competitors by giving them free access to my network?" say the [backbone managers.] ... "But the Internet won't work unless everything is connected to everything else," say the [Internet users and engineers.] ... Both are right. Interconnection is healthy for the industry as a whole, but the current business model for interconnect may easily generate incentives for individual carriers to [deny interconnection, or to] overcharge their competitors.226 (emphasis added)

223 Frieden, supra note 88.

224 G. Huston, "Interconnection, Peering, and Settlements" (Internet Society INET'99 Conference, San Jose, California, 23 June 1999).

A fragmented scenario would therefore feature isolated examples of this kind of activity, which could result in ISPs being forced to aggregate, or justify comparatively higher charges to their customers by adding value in other ways. An executive from an IP equipment vendor summed up the situation this way:

In the public [networking] sector, "ISPs are desperate to find a way to make more money," said Gordon Saussy, vice president of marketing for Torrent Networking Technologies, in Silver Spring, Md. "They need to find a way to sell first-class seats and get out of the business of selling flat-rate services. Fine-grained quality-of-service is their way to [do] that."227

The timing of the "rationalization" which may be approaching in the ISP industry will likely coincide with the introduction of different classes of service. The implementation of QoS, which is primarily a protocol issue, also has implications for the other major areas of the Internet's technical and physical infrastructure, including identifiers and routing, and peering and interconnection.

1. Market Evolution

In many ways, the closed scenario represents the extreme of many of the trends identified in the fragmented scenario. While under the fragmented scenario two or more "tiers" of Internet service would co-exist, a closed scenario essentially represents the replacement of the Internet with two or more competing closed, proprietary networks. The model can be most simply understood as a duopoly based on control of high-speed access lines, which at present would appear to be telephone company DSL over copper wire and cable company cable modem service. The likelihood of this outcome is moderated by the requirement, in Canada at least, that telephone companies and cable companies give ISPs access to their lines on non-discriminatory terms, as well as the trend towards competition in the provision of "last-mile" telephone access service. However, further elements of a closed scenario, such as the emergence of exclusive content offerings and restrictive traffic exchange policies, might counter that factor.

226 H.R. Varian, "How to Strengthen the Internet's Backbone" Wall Street Journal (8 June 1998) A22, quoted in Frieden, supra note 88 at para. 37. Square brackets are Frieden's. Professor Frieden will present a paper at the 27th Telecommunications Policy Research Conference (TPRC) on September 27, 1999, titled: "When Internet Peers Become Customers: The Consequences of Settlement-based Interconnection."

227 L. Wirbel, "Internet Protocol Gets Rules For Good Behavior" TechWeb News (11 May 1998), <http://www.techweb.com/news/story/TWB19980511S0015>.

At present, most Internet content is "free" to the user, and sites which have tried to change to a subscription model have found it very difficult to get users to pay for content. Most well-known Web sites, be they entertainment, news, sports, search engines or Web indexes (such as Yahoo!), are presently advertising-supported. At the same time, many of these sites are entering into marketing alliances with major ISPs to make those sites prominent in the ISP's "portal" or first screen. At the extreme end of the subscription model, of course, is America Online (AOL), which is not the Internet at all, but rather a closed, dial-up service. AOL is perhaps the most profitable Internet-industry company, a feat achieved by charging for content, not giving it away. AOL offers value-added services to its members which are either not available on the Internet, or not as conveniently. Its success (with 18 million subscribers as of August 1999),228 however, cannot be ignored. As Web site operators, or more particularly their investors, tire of losing money on Web ventures, they may be expected to look for ways to ensure a steady revenue stream. Exclusive arrangements and subscription fees are two options.

The Internet content industry has been searching for profitable business models since the beginning of the commercial era. Until very recently, content and access were quite separate in the Internet market as a whole (except on AOL, of course). The perception was that once one gained access from an ISP, the bounty of content on the Internet was free for the taking. However, one possible course of the Internet's evolution is that the needs of broadband network operators to recover their massive infrastructure investments, and of content providers to find a stable source of real revenue, might converge in closed (or semi-closed) networks. While it is technically possible for ISPs (especially cable companies) to selectively block out access to content which is not provided by their affiliates or alliance partners, the desire of consumers to have access to the entire Internet has made any such decision unpopular thus far.

228 America Online, Inc., News Release, "AOL Surpasses 18 Million Members" (17 August 1999).

If the structure of the industry altered as a whole, however, that could certainly change quickly. An early hint of the possibilities is presented by an incident involving an open cable advocacy group in California and AT&T's cable television subsidiary TCI. The Bay Area Open Access Coalition approached TCI to place a paid advertisement on TCI's system advocating non-discriminatory access to San Francisco's cable networks during a period when the city's authorities were deliberating on just such a proposal. TCI reportedly refused to run the advertisement, calling it "inappropriate," prompting the coalition to accuse TCI of censorship.229 Preferential treatment for content of all kinds is what cable television is all about, so it would not be unusual if cable companies which operate Internet access services were to employ this lucrative method with respect to Internet content.

The closed scenario might also therefore be called a cable model because of its similarity to the existing cable television model which predominates in Canada and the United States. Indeed, many have forecast that the Internet will completely replace the cable television industry, a possibility which has not escaped the attention of cable companies, which are in the process of "reinventing" themselves as "new media" companies. If small ISPs cannot gain access to premium content which is competitive with that offered by large access providers vertically integrated with content producers (e.g., television studios), or part of vertical marketing alliances, then the value of connectivity through them may be low enough to make it impossible for small ISPs to survive. An affiliation structure might perhaps emerge among ISPs at national or international levels, by means of which they could gain access to desirable content, and exchange e-mail.

229 D. McGuire, "Coalition Accuses AT&T/TCI of Censorship" Newsbytes (21 July 1999).

This latter possibility is perhaps the most striking: the possibility that the largest networks may not exchange traffic with any networks other than themselves (a situation made quite possible by the technical architecture of peering). If smaller networks are unable or unwilling to pay what Rob Frieden has called "one-way transfer payments"230 for upward packet transiting, those networks may simply "fall off" the commercial Internet. The incentive for consumers, and especially businesses, to have e-mail addresses which are as widely reachable as possible is obvious. Based on text-based e-mail alone, it could become very difficult for smaller networks to survive if they cannot provide comprehensive e-mail service. Add premium content like live sporting events and first-release Hollywood movies to this list of advantages of connecting through a large, high-speed provider and the prospects for the currently-vibrant independent ISP industry become even dimmer. A closed scenario assumes that these and similar issues are not resolved in favour of openness, but rather in favour of closed, proprietary models.

2. Identifiers and Routing

A shift towards closed networks implies a shift towards closed technical infrastructure: proprietary name and number spaces, and associated routing systems. An alternative is a new identifier and routing system shared by those private networks which can afford to make the investment required to offer premium services. A proprietary technical infrastructure would not facilitate access to the entire Internet, but rather only to those Web pages, e-mail addresses and resources which are contained within the respective private networks. As suggested above, it simply does not make sense for an Internet service today to provide connectivity to anything less than all of the Internet's Web pages, e-mail accounts and resources. However, there may come a point, which might coincide with the introduction of differentiated classes of services and therefore differentiated pricing, at which exclusive arrangements could be profitably made which derogate from the present model of universal interconnection. If high-speed network providers also controlled the majority of e-mail accounts, exclusive arrangements could easily be made to restrict interconnection with other networks, thus pushing more and more business and consumer users towards those high-speed premium networks.

230 Frieden, supra note 88 at para. 40.

A striking example of the value of controlling technical infrastructure is presented by NSI's claim to ownership of the database of gTLD names which it has built up by registering those names since 1993 under its official monopoly. Although this claim has never been legally tested, it likely is part of the reason why NSI has thus far refused to sign ICANN's model Registrar Agreement. If NSI successfully defends this position, it would give NSI a proprietary hold on the most valuable top-level domains: .com, .net and .org. While these names continue to be operational throughout the root server system, the potential for exploitation of the enormous goodwill associated with .com addresses in particular presents the possibility of anti-competitive behavior if the root zone and root servers ever came under the influence of NSI.

How powerful is .com? In 1999 Sun Microsystems added "we're the dot in dot com" to its trade-mark portfolio. This might seem odd, since Sun does not gain any particular advantage from the popularity of the .com domain, as NSI does ("the dot com people"). In a magazine advertisement picturing a surfer holding a cell phone with ".com" written next to it, Sun says:

By extending the Net to devices, we're enabling companies to reach people in new ways. You did want to be reached now, didn't you? Sorry to bother you. But the power of .com is about to shoot out beyond where the desktop is - to wherever the customer is. Thanks to some very cool technologies from Sun.

Sun uses .com as a euphemism for the Internet. Why not just use "Internet" instead of ".com"? Because .com is what the American business community considers the Internet to be. Certainly any business elsewhere in the world which hopes to sell goods in the United States had better have a .com address. The name philips.com is much more attractive to the American consumer than philips.nl. Control over the .com domain, which has been exercised relatively innocuously by NSI so far (likely because the Cooperative Agreement is still in effect), is an enormous degree of power over the commercial Internet.

To speculate a little further, the bundling of exclusive content, high-speed access, guaranteed quality of service, and .com addresses would make for an unbeatable combination as a commercial internetworking offering. The .com TLD is part of the unified address space right now, but under the control of NSI (or an acquirer, or a vertical merger partner), it could be removed and operated as a root zone unto itself, if enough networks around the world recognized it. Given the investment which has been put into all things .com, the value of the .com franchise seems ready-made for even further exploitation. This aspect of the closed scenario demonstrates again that while Internet content and physical infrastructure have already been through complete transformations from the non-commercial into the commercial era, the technical infrastructure has not, and commercialization can be expected to continue to pressure it in ways which cannot even be imagined yet. The technical infrastructure can be thought of as the undeveloped point of a triangle, and yet is perhaps its most important point, since it is shared by all Internet participants of whatever description, all over the world. The closed scenario suggests the breakdown of the unified identifier and routing system, in favour of multiple proprietary systems, each with more or less backing from the largest Internet industry players.
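The splintering contemplated here can be made concrete with a toy model of delegation-based name resolution. The zone data and addresses below are invented for illustration; the point is simply that a name such as philips.com exists only relative to the root a resolver starts from, so withdrawing the .com delegation into a privately operated root changes what "the Internet" resolves to.

```python
# Toy illustration (not drawn from the thesis): resolution of a name depends
# entirely on which root zone a resolver walks down from. All zone data and
# addresses are invented for the sketch.

# A "zone" maps a label either to a delegation (another zone) or an address.
UNIFIED_ROOT = {
    "com": {"philips": "192.0.2.10", "example": "192.0.2.20"},
    "nl":  {"philips": "192.0.2.30"},
}

# A hypothetical private root in which .com has been withdrawn and only
# names inside the operator's own network remain reachable.
PRIVATE_ROOT = {
    "com": {"example": "192.0.2.20"},   # philips.com no longer delegated
}

def resolve(name, root):
    """Walk the delegation tree from the given root, most-significant
    label first, as a real iterative resolver does."""
    zone = root
    for label in reversed(name.split(".")):
        if label not in zone:
            return None          # the name falls outside this root's universe
        zone = zone[label]
    return zone

print(resolve("philips.com", UNIFIED_ROOT))  # reachable in the unified space
print(resolve("philips.com", PRIVATE_ROOT))  # "falls off" under a private root
```

Whichever networks "recognize" a given root, in other words, define the entire universe of reachable names for their users.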

3. Protocols and Standards

The multiple proprietary network model described above implicitly suggests the dominance of proprietary protocols and standards. Much like the loss of unified identifier and routing systems, the loss of open standards and the widespread interoperability which they facilitate would similarly constitute the end of the Internet in its present form. While the goal of QoS development is for it to be able to work across the public Internet, the reality of its implementation may involve proprietary associated protocols, and/or inconsistency in which networks can afford to implement it, and therefore participate in the sharing of revenues which it will enable. Further, while high-grade reserved service and standard "best effort" service can theoretically co-exist on the public Internet, networks which cannot accommodate the higher grade of traffic may have trouble holding on to customers for long.

A closed scenario with respect to standards and protocols also implies either the demise or marginalization of the IETF. As has been noted, the IETF relies to a great extent on both voluntary participation and the trust of the Internet community in the bona fides of its leaders. If market pressures compelled vendors to short-circuit the IETF, or if the pool of leaders whose commercial neutrality can inspire widespread allegiance dries up, the effectiveness of the IETF itself could be significantly reduced. Collaborative standards-making activity might perhaps continue in other fora, but with a big difference. Anyone with interest and ability can currently join an IETF working group and suggest a standard, or at least considerations other than profit-maximization which should be reflected in a standard. Private standards-making consortia, on the other hand, tend not to be nearly as open. The vaunted "openness" of the Internet would suffer directly from the weakening of the IETF.

Perhaps the most striking example of the forces which may come to bear on the IETF comes from an internal Microsoft document produced in connection with the United States Department of Justice's anti-trust prosecution of Microsoft. In an e-mail which has come to be known as the "Halloween Document" (presumably due to its date), a Microsoft employee describes the "Open Source Software" movement, and its relevance to Microsoft's operating system business. This passage from Esther Dyson's Release 1.0 newsletter tells the story:

It's no accident that Microsoft's already infamous "Halloween Document" (an internal Microsoft memo leaked to Eric Raymond that analyzed responses to the open-source movement, referred to in the memo as OSS) identified the IETF as much as Linux as the force to be neutralized: "There is a large amount of IQ being expended in various IETF working groups which are quickly creating the architectural model for integration for these OSS projects...." "OSS projects have been able to gain a foothold in many server applications because of the wide utility of highly commoditized, simple protocols. By extending these protocols and developing new protocols, we can deny OSS projects entry into the market."231

This internal Microsoft document both accurately identifies the power of open protocols, and demonstrates their vulnerability. Elsewhere the author urges the company to "de-commoditize protocols and applications" by "extending these protocols and developing new protocols."232 If Microsoft, or any other Internet industry player or consortium, decided to "extend" the Internet's key protocols, its openness could be in serious jeopardy.

231 T. O'Reilly, "The Open Source Revolution" Release 1.0 (November 1998) at "The IETF as Open Source".

232 Quoted in Berkman Center for Internet and Society, "The Power of Openness: Why Citizens, Education, Government and Business Should Care About the Coming Revolution in Open Source Code Software" (1999).

A Microsoft venture in Hong Kong demonstrates the potential for Microsoft to "embrace and extend" the Internet.233 In March 1999, Microsoft and Hong Kong Telecom announced a joint venture to offer the "Zoom Network," a high-speed Internet-based service capable of delivering movies on demand, live videoconferencing, news clips, interactive gaming, and e-commerce transactions.234 These excerpts from a Reuters news story about Zoom demonstrate the value-added features which can be provided via a closed network:

[Microsoft Chairman Bill] Gates described the proposed service, to be called Zoom, as "the Internet in a more powerful form." He believes that "people will use this to do things more efficiently, to find creative gifts, to plan trips, to stay in touch with people." The high-speed network uses Hong Kong's 6.8 million residents as a test market for the new multimedia service. [...] Among the new applications to be implemented on the Zoom service are video, music, software on demand, news from MSNBC, and the MSN gaming zone. It will run on an integrated Microsoft platform, based on the company's Windows NT operating system and commercial Internet system.235

Proprietary networks are clearly being experimented with, and given the success of AOL's closed-network business model, this should not be a surprise. Why would NBC news have such a high profile on Zoom as opposed to CNN or any other source? Because Microsoft and NBC operate a joint venture called MSNBC, a hybrid television-Web news reporting "channel." Why would Zoom run on an integrated Microsoft platform based on proprietary Windows NT technology instead of an open platform based on W3C and IETF standards? Because it will give Microsoft much stronger control over the technical infrastructure of the network.

233 The phrase "embrace and extend" is a loaded one in the software industry because it is how Microsoft explained its incorporation into its dominant Windows product of a proprietary version of the open-platform Java programming language, whose function is to allow software developed on any platform to operate on any other platform.

234 "Microsoft Speeds Hong Kong Net" Wired News (9 March 1999).

Control allows the network operator to provide much more robust, reliable, and profitable services, instead of merely the lowest common denominator supported by external networks which it does not control. Secure e-commerce, Internet telephony, and virtual reality modeling are three areas of future service which are currently attracting a great deal of research and development attention. The IETF's slow-moving, collaborative efforts in these areas are based on the traditional internetworking principles of low-level simplicity, while allowing just enough control to support new features. Microsoft's efforts in the Zoom experiment are likely much less "open" and much more like the centrally-controlled, predominantly one-way (and profitable) cable networks of today.

The closed scenario, in summary, supposes the substantial replacement of open, IETF- and W3C-evolved protocols and standards with closed, proprietary protocols and standards, in the commercial segments of the Internet.

4. Peering and Interconnection

Finally, a closed model with respect to peering and interconnection would assume that the issues and trends identified in the fragmented model cannot be resolved in such a way as to preserve the viability of small ISPs, or any ISP at any level which is unwilling or unable to pay for upward transiting of its traffic. While the ISP/C commented in September 1998 that "ISPs have traditionally viewed peering as a necessary function of network operations, not as a profit-making activity,"236 a closed scenario assumes that those in the backbone industry with the power to leverage their control over packet transiting at the highest level, do so. Because the Internet depends to such a great degree on a mesh of many different sizes and types of interconnected networks, more selective interconnection would significantly reduce total interconnectivity, to the detriment of the public Internet. A situation in which a few very large networks only interconnect with each other, and only e-mail accounts and Web sites which are located within those large networks are reachable to most users, would essentially be the end of the public Internet as we know it today.

While we can hope that the closed outcome is only a "worst-case scenario," industry trends in that direction should not be ignored. We can already observe derogation from the values which have always underlain the Internet. We need, therefore, to develop a principled basis for those values which we think should ground Internet governance in the future. The touchstone of those values is "openness."

236 Supra note 86.

VII. WHAT PRINCIPLES SHOULD INFORM THE GOVERNANCE OF THE INTERNET'S INFRASTRUCTURE?

The word most commonly used to describe the Internet is "open." It is used in many different contexts, by diverse writers and speakers, to describe almost any aspect of the Internet. The Internet is frequently described as an "open network," based on "open protocols." FCC Chairman William Kennard uses the word often, for instance: "I think that when cable modem technology is deployed, the open culture of the Internet should deploy with it."237 Here openness is given a motive force of its own. Speaking to the United States National Cable Television Association in June 1999, Kennard said, again referring to the deployment of cable modem technology:

The ball is in your court. And if you act responsibly, consumers will get broadband and that broadband will follow the open tradition of the Internet. I was glad to hear yesterday that Michael Armstrong [Chairman of AT&T] said that he is committed to this open tradition, both with respect to conduit and content. We are not regulating. But we are watching.238 (emphasis added)

The head of America's federal communications regulator seems to like the open character of the Internet, but one wonders whether he knows precisely what "the open tradition of the Internet" means (or indeed if anyone does). One further suspects that his audience's idea of "open" was probably something else again.

237 Quoted in J. Healey, ed., "FCC: Keep marketplace competitive" San Jose Mercury News Silicon Valley.com (25 July 1999).

238 W. Kennard, Chairman, United States Federal Communications Commission, "The Road Not Taken: Building a Broadband Future for America" (National Cable Television Association, Chicago, Illinois, 15 June 1999).

Openness describes the Internet, but it is also an important principle on its own, as a matter of network policy. The celebrated openness of the Internet, and more specifically that of its infrastructure, has been preserved thanks to widespread commitment to certain implicit network policy principles, namely, that the Internet be: (1) universally interoperable; (2) universally interconnected; (3) non-proprietary (with respect to standards and protocols); (4) non-proprietary (with respect to networks generally); and (5) unified. We will briefly consider each principle in turn.

1. Universally Interoperable

The relative simplicity of TCP/IP and the decision to keep the core layers of the Internet as simple and "stateless"239 as possible supports its interoperability with an almost unlimited array of transmission media and terminal equipment. Similarly, the Web is designed to make pages look substantially the same, regardless of what type of computer, operating system, or browser is used. Interoperability is one of the Internet's most remarkable achievements. Andrew Shapiro has described interoperability as essential to the Internet's "networkness,"240 while Gillett and Kapor have cleverly described its fragility:

We have discussed how the Internet's technology affords its decentralized operation. But technology is only part of the picture. The other part is people. All the clever design in the world cannot create an interoperable system unless enough people hold interoperability as a shared goal. Interoperability is like Tinkerbell: it only works if everyone believes in it.241

The standardization of transport protocols, like TCP/IP, and of application protocols, like Hypertext Transfer Protocol (HTTP) and Simple Mail Transfer Protocol (SMTP), is essential to maintaining universal interoperability. The simplest example is provided by SMTP e-mail messages, which many different programs can decode and present in different ways, based on the same source information. There are currently several efforts underway to enhance the functionality of Internet e-mail, to the level which can be enjoyed within a closed network, such as a corporate or institutional LAN. These efforts all involve more or less deviation from the SMTP standard, and the degree to which they will interoperate with each other remains to be seen.

239 Stateless refers to a network which does not have different modes or states, such as "open" or "closed," like a telephone circuit.

240 A.L. Shapiro, The Control Revolution: How the Internet is putting individuals in charge and changing the world we know (New York: PublicAffairs, 1999) 16.

241 Gillett & Kapor, supra note 91 at 16.
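The interoperability of SMTP mail rests on the fact that a message is plain, standardized text which any conforming program can parse. The sketch below illustrates the point with an invented message; Python's standard email module stands in for the "many different programs" that can decode and present the same source information in their own ways.

```python
# Minimal sketch: one RFC 822-style message source, two different
# presentations of it. The message and addresses are invented.
from email import message_from_string

RAW = """\
From: alice@example.org
To: bob@example.net
Subject: Peering agreement
Content-Type: text/plain

Shall we exchange traffic settlement-free?
"""

msg = message_from_string(RAW)

# One "mail reader" might show a terse one-line summary...
summary = f"{msg['From']}: {msg['Subject']}"

# ...while another renders the headers and body in full.
full = "\n".join(f"{k}: {v}" for k, v in msg.items()) + "\n\n" + msg.get_payload()

print(summary)
print(full)
```

Proprietary "enhancements" that deviate from this common text format are precisely what would erode the guarantee that any client can render any message.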

Not all Internet software is universally interoperable. Competing Internet telephony software products often employ incompatible protocols, presumably to encourage sales of each product by virtue of the need of one's conversation partners to employ the same software. However, efforts are underway to work out common standards for interoperability among Internet telephony products, and a first-generation standard called "Interoperability Now" (INOW) has been developed by a consortium of manufacturers. While the burgeoning Internet telephony industry appears to recognize the benefits for all players if common standards for the various hardware and software elements involved can be worked out, there is also something of a battle underway between interoperability initiatives based on ITU and IETF standards, known as H.323 and SIP, respectively.242

While interoperability on the Internet as a general goal is not likely to be threatened, it is small incremental deviations from it which can have the effect of reducing interoperability in the aggregate. A useful example is presented by instant messaging services. Instant messaging is like real-time e-mail, except the parties converse line by line, instead of message by message. As a closed, proprietary network, AOL has supported instant messaging among its members for some time.243 The software notifies the user when certain pre-selected other users are also on-line, so that they may "chat," if desired. Yahoo! and Microsoft's Hotmail also offer instant messaging services, but until recently only within

242 P. Bernier, "Standards Adoption Key for IP Voice Service" Interactive Week (15 October 1997), <http://www.zdnet.com/zdnn/content/inwk/045/158899.html>. See also T.M. Denton Consultants, supra note 126 at 36-37. On the topic of Internet telephony generally, the reader is referred to the excellent Web site of the MIT Internet & Telecoms Convergence Consortium. Indicative of the trend towards converging the Internet with traditional telecommunications, this group was formerly named the MIT Internet Telephony Consortium, reflecting the earlier sense that Internet telephony was something different to regular telephony. The direction of both industries is now very much towards eliminating the difference, which will nonetheless likely remain with us at the retail level for several years to come.

243 A similar service, "ICQ," has also been available on the public Internet for several years using proprietary software. AOL acquired ICQ (and ICQ's 58 million users) in 1998 for USD$325 million: M. Wolk, "AOL Battles Rivals Over Instant Messaging" Excite News (25 July 1999), <http://news.excite.com/news/r/990725/00/tech-microsoft-aol>.

their own networks. In July 1999, both Yahoo! and Hotmail introduced new versions of their instant messaging software which extended the pool of users to include the AOL Instant Messenger (AIM) population, some 40 million.244 What happened next is a classic illustration of the forces for and against universal interoperability.

AOL alleged that Microsoft and Yahoo!'s actions constituted unauthorized access to AOL's infrastructure, and changed the AIM software to keep the unwanted visitors out. AOL argued that the expanded user base "opens it up to spam, to pornography, to all kinds of things that people don't want,"245 but clearly its primary motivation was to keep its customers within AOL's own "walls," that is, within the bounds of its own chat environment, to maintain the value of advertising spots on those pages. Microsoft took up the fight and argued that instant messaging will eventually be "open" and that industry standards should be negotiated. AOL responded that Microsoft had compromised the privacy of AIM's members, and illegally "reverse engineered" the AIM software. Columnist Dan Gillmor explains the irony of this tussle between proprietary network operator AOL and the "reigning monarch of proprietary systems and software," Microsoft:

AOL has a rational desire to hold onto its instant-messaging customers in a proprietary way. Rational - but AOL's comments and policies are the height of hypocrisy given its positions on other issues. [...] AOL is absolutely right on the cable open-access matter, and absolutely hypocritical to keep its instant-messaging customers walled off from the rest of the Internet. The need for standards in the instant-messaging arena could not be more obvious. Making it happen involves solving some technical problems as well as political ones, but so far AOL has shown no interest beyond vague statements that it would like to be more open. Let's hope AOL changes its tune - and practices - soon. AOL's only competition in the hypocrisy derby is its chief opponent, Microsoft. I didn't notice any particular openness in the Microsoft Messenger product, for one thing.246

244 S. Hansell, "Positions Harden in Instant-Message Fight" New York Times Cybertimes (28 July 1999).

245 AOL spokeswoman Ann Brackbill quoted in Wolk, supra note 243.

246 D. Gillmor, "Messaging flap makes Microsoft, AOL instant hypocrites" San Jose Mercury News SiliconValley.com (26 July 1999), <http://www.mercurycenter.com/svtech/columns/gillmor/docs/dg072799.htm>. Not to be outdone, AT&T general counsel Jim Cicconi is reported to have issued a statement labeling AOL's actions "hypocritical and antithetical to the very ethos of the Internet."247 Two things are certain. First, there are many versions of "openness." Second, there is not universal devotion to it. While "the very ethos of the Internet" was defined in the non-commercial era by public institutions and users, it will likely be redefined in the future by companies such as Microsoft, AT&T and AOL. AOL's stance on instant messaging, and indeed its entire business model as a closed proprietary network, present serious threats to the ideal of interoperability and therefore the Internet generally.

The threat is almost certainly not just from AOL. The Wall Street Journal has reported that:

Microsoft is also pondering plans to consolidate the fragmented Internet-service market and broaden its cut-rate offerings. According to people familiar with the situation, Microsoft has discussed purchasing or partnering with a number of large Internet-service providers, including EarthLink Network Inc. and MindSpring Enterprises Inc., and partnering with PC makers such as Compaq Computer Corp. and Dell Computer Corp., which have launched their own access services.248

Consolidation of the ISP industry by Microsoft, AOL, and AT&T,249 for instance, would dramatically change the Internet experience for users. Would these mega-networks be likely to provide access to each other's value-added services at the expense of their own? Would they be likely to invest in improving access to non-commercial materials beyond that necessary to reassure consumers that they have access to the entire Internet if they need it? Another example of the sacrifice of universal interoperability and interconnection is provided by the Blue Mountain Arts dispute.

247 Quoted in "AOL messaging policy might risk cable deals" CNET News.com (27 July 1999).

248 N. Wingfield & D. Bank, "Microsoft-AOL War Heats Up Over Net Access" Wall Street Journal (5 August 1999) B6.

249 At this point it is worth noting that Microsoft invested a reported USD$5 billion in AT&T in May, 1999, an arrangement which was intended to lead to Microsoft software being used in the digital set-top boxes which AT&T's TCI subsidiary will offer to its cable television subscribers. See S. Junnarkar & M.A. Farmer, "Microsoft, AT&T in $5 billion pact" CNET News.com (6 May 1999).

I do find it beyond coincidence, and I do find it difficult if not impossible to accept the assertion by Defendant [Microsoft] that those individuals developing the [spam] filter were unaware of the electronic greeting card aspect of Microsoft's business, and that the factors that were programmed into the logarithmic filter were developed without any input from, or any knowledge of it, or any influence by, the individuals developing the electronic greeting card business. [...] So, I do believe that there was some intent on Defendant's part in developing, or in using, the filter, not only to filter out spam generally but that there was some concern if not outright targeting of Blue Mountain or similar outfits that were in competition offering electronic greeting cards.251
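The kind of filter at issue can be pictured as a simple weighted scoring scheme. The sketch below is purely illustrative - the features, weights, and threshold are invented, not Microsoft's actual filter - but it shows the court's point: the choice of which "factors" are programmed in, and how heavily each is weighted, is a policy decision, not a neutral definition of spam.

```python
# Toy weighted mail filter (all features, weights, and the threshold are
# hypothetical). A feature like "electronic greeting card" can be weighted
# so that a competitor's legitimate traffic scores as if it were spam.

WEIGHTS = {
    "bulk mail headers": 2.0,
    "money-making scheme": 3.0,
    "electronic greeting card": 2.5,   # a competitor's legitimate traffic
}
THRESHOLD = 2.0

def classify(features):
    """Sum the weights of a message's features; filter if over threshold."""
    score = sum(WEIGHTS.get(f, 0.0) for f in features)
    return "filtered" if score >= THRESHOLD else "delivered"

print(classify(["money-making scheme", "bulk mail headers"]))  # filtered
print(classify(["electronic greeting card"]))                  # filtered too
print(classify(["personal note"]))                             # delivered
```

Nothing in the scoring machinery distinguishes anti-spam intent from anti-competitive intent; only the weight table does.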

A Microsoft representative stated during the instant messaging "war" that "[w]e intend to be aggressive with access," and: "AOL might think about it as a profit center. That's not how we think about it."252 If its actions with respect to Blue Mountain Arts are an example of Microsoft being "aggressive with access," the Internet would change dramatically if it succeeded in its reputed (and quite commercially sensible) plans to consolidate the access 250 The information in this paragraph is derived from a chronology of the dispute posted on Blue Mountain Arts' Web site. See .

251 Partial Transcript of Proceedings in the Superior Court for the State of California, Santa Clara County Judicial District, Before the Honourable Robert A. Baines, Judge, Blue Mountain Arts [Plaintiff] v. Microsoft and WebTV [Defendant], January 28, 1999. Available on Blue Mountain's Web site: . The most recent step in this litigation was a successful application by the San Jose Mercury News to have the court file made public. A trial appears to be forthcoming.

252 Supra note 248. market. Even if access is not explicitly viewed as a "profit centre," control over elements of the Internet's technical infrastructure allows for all manner of anti-competitive activity whose effects are not easily identifiable. Issues of interoperability spill over into issues of interconnection.

2. Universally Interconnected

Just as powerful a defining characteristic as universal interoperability, the complementary value of universal interconnection is fundamental to what we know as the Internet. Simply stated, this value is that everybody should be able to reach everybody else, regardless of what network one "lives on," and where on earth that network might "be." Interconnection is also closely related to the non-proprietary network value. It is quite true that in the Internet's earliest days (and the eras of its predecessor networks), no one entity was in a position to effectively control it, to impose restrictions on which networks connected with which. Once one had an IP address and a server "advertised" in the routing system, one was fully connected. Connectivity through a larger network, and through it to still larger networks, all of which interconnect at various levels with other networks, gives the impression that anyone can reach anyone. However, the fact of this remarkable degree of interconnectivity, like many other characteristics of the Internet, should not be taken as conclusive proof that restrictions on interconnectivity could never be imposed at any level. To the contrary, there is very real potential for such restrictions at both the consumer access and backbone levels.
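The claim that "anyone can reach anyone" - and its fragility - can be expressed as simple graph reachability. In the sketch below (the network names are invented), reachability is transitive across peering links, and withdrawing a single interconnection is enough to partition the "universal" network.

```python
# Sketch: interconnection as graph reachability. Networks peer with one
# another; a user can reach anyone transitively connected to her network.

def reachable(peerings, start):
    """Walk the peering graph and return every network reachable from start."""
    seen, frontier = {start}, [start]
    while frontier:
        net = frontier.pop()
        for neighbour in peerings.get(net, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(neighbour)
    return seen

peerings = {
    "local-isp": ["regional-net"],
    "regional-net": ["local-isp", "backbone"],
    "backbone": ["regional-net", "big-online-service"],
    "big-online-service": ["backbone"],
}
print(sorted(reachable(peerings, "local-isp")))   # all four networks

# A closed network withdraws from interconnection:
peerings["big-online-service"] = []
peerings["backbone"].remove("big-online-service")
print(sorted(reachable(peerings, "local-isp")))   # one network now unreachable
```

No central authority appears anywhere in this model, which is precisely the point: universal reachability is an emergent property of every operator choosing to peer, and any one of them can withdraw.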

The potential for the spread of closed, proprietary networks presents the risk that interconnectivity could be limited, either to increase the value of advertising space within local networks, or on the basis of affiliation-type agreements among networks. The forces at play in the instant messaging dispute could easily transfer to the interconnectivity arena. Kevin Werbach explains, in the summary of a Release 1.0 article titled "The Architecture of Internet 2.0":

We're approaching a critical point in the evolution of the Internet. So far we've been lucky. Back in the early 1990s, the major players in the computing and communications industries were busy chasing video on demand, closed proprietary online services and the information superhighway. They didn't much care about a bunch of academics, researchers and engineers setting up a commercial inter-network based on the Internet protocol. Many of those companies now get it ... which may be cause for concern. Architecture matters. For the most part, today's Net is open, decentralized and competitive. It fosters innovation because it is a standards-based general-purpose platform. [...] The people building the next generation of high-speed access pipelines are trying to change this model. They want to tie those pipelines to the content and services they are selling, and control interconnection to the world at large. Their intractability may damage the network's openness and slow down its development. Today's Internet has grown rapidly because it is open; the next-generation Internet will not grow quickly unless it too is open and competitive. The tragedy of the cyber-commons is that competitive access benefits all providers collectively but few individually.253 (emphasis added)

The notion that we have been "lucky" to have enjoyed the network's "openness" so far seems to fly in the face of deterministic Internet mythology. Many writers suggest that the Internet arose organically, and is both uncontrolled and uncontrollable, the implication being that it could never be changed by anything other than the incremental acts of the autonomous individuals who sustain it. David Post's account demonstrates this latter view:

The rise of cyberspace, it is worth noting, took virtually everyone by surprise: how could something as ridiculously complex as a single global communications network be built without identifiable direction, without some "authority" in charge of bringing it into being? [...] Though we can point ex post to many individuals and institutions who played particularly important roles in its emergence, no one "created" the set of rules we now know as the Internet because no one was or could have been in the position to do so, any more than anyone is in a position to create a new set of rules for English syntax. Emergent institutions like the Internet Engineering Task Force (whose motto, "We reject Kings, Presidents, and voting; we seek rough consensus and working code," aptly captures its decentralized orientation), the World Wide Web Consortium, the Internet Assigned Numbering [sic] Authority, and the like - institutions with no authority whatsoever to act on anyone's behalf, no fixed address or membership, and no formal legal existence - somehow got hundreds of millions of individuals across the globe to agree on a common syntax for their electronic conversations.254

253 K. Werbach, "The Architecture of Internet 2.0" Release 1.0 (February 1999).

254 D.G. Post, "Of Horses, Black Holes, and Decentralized Law-Making in Cyberspace" (Private Censorship/Perfect Choice: Speech Regulation on the Net, Yale Law School, New Haven, Connecticut, 9-11 April 1999), <http://webserver.law.yale.edu/censor/post.htm>. At least two comments can be made about this argument, which is found in a paper otherwise about content censorship. First, it ignores the role of the ARPANET and NSFNET as the progenitors of the modern Internet. As we have seen, the Internet's technical infrastructure has remained essentially unchanged since the non-commercial era. The last major architectural change was the introduction of gTLDs by Jon Postel's IANA in 1983, and other minor changes were also made under the trusted hand of former ARPANET and NSFNET coordinator Postel. Second, the argument forecloses the possibility of change due to external, or market, forces, such as those suggested by the fragmented and closed scenarios. It ignores the immense influence of Microsoft, AT&T, and AOL, among others, companies which have demonstrated a willingness to circumvent the IETF and W3C.

Visions which exalt the anonymous network administrators who experimented with infrastructure in the non-commercial era (and who still operate smaller networks) ignore the corporate imperatives which bind their colleagues at the larger networks. The most vocal personalities in the Internet governance debate are small entrepreneurs, who operate their own small networks and have a strong intrinsic sense of control over them. Many hope to operate monopoly TLDs. Other individuals are employees of major Internet-related companies, such as IBM, Cisco, and Microsoft, but participate in their personal capacities only. Only very rarely do the companies themselves participate in public debates about ICANN or Internet governance. The personal values of these individuals are almost certainly overridden at work by the corporate imperatives of their employers.

It is no exaggeration to suggest that a joint decision by the technical departments of MCI WorldCom, Sprintlink and AT&T could completely change the commercial Internet's architecture at a stroke. Competition law concerns aside, statements to the effect that it is impossible for anyone to control the Internet rest on the historical record of there never having been such concerted action, not on any actual impossibility. The requirement that QoS hardware and software be implemented on a wide scale to allow for end-to-end service suggests that this kind of concerted action among the larger players is not only possible, but inevitable. The great challenge of QoS, of course, is to make a heterogeneous patchwork of stateless networks perform like a homogeneous, unified, logically-controlled network - in other words, to make the Internet more like the telephone network, without being a telephone network. The "lowest common denominator" or "one size fits all" model will be replaced by differential levels of service, in order to optimize performance at the global level.
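Differential service of this kind can be sketched as a strict-priority scheduler: traffic marked as belonging to a premium class is always transmitted ahead of best-effort traffic. The two class names and the simple two-level scheme below are illustrative assumptions, not the mechanics of any particular QoS standard.

```python
# Sketch of differential levels of service: a strict-priority scheduler.
# Packets carry a class marking; "premium" always transmits before
# "best-effort", with arrival order preserved within each class.

import heapq

PRIORITY = {"premium": 0, "best-effort": 1}

class Scheduler:
    def __init__(self):
        self.queue = []
        self.counter = 0          # tiebreaker: FIFO within a class

    def enqueue(self, service_class, packet):
        heapq.heappush(self.queue,
                       (PRIORITY[service_class], self.counter, packet))
        self.counter += 1

    def transmit(self):
        _, _, packet = heapq.heappop(self.queue)
        return packet

s = Scheduler()
s.enqueue("best-effort", "web page")
s.enqueue("premium", "voice frame")
s.enqueue("best-effort", "e-mail")
print([s.transmit() for _ in range(3)])
# premium jumps the queue: ['voice frame', 'web page', 'e-mail']
```

The commercial point follows directly: whoever assigns the class markings, and prices them, controls who enjoys the telephone-grade service.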

Consolidation in the ISP industry, which many economists and other commentators suggest is also inevitable owing to the basic economics of the business and the possible end of ISPs' "free ride" as a matter of telecommunications policy, also has the potential to entirely reshape the commercial Internet. The implementation of QoS may hasten the conditions for consolidation if small networks prove unable to access the capital necessary to upgrade their networks to participate in QoS. Since QoS provides the means to offer premium services, it is seen as the only way for ISPs to escape a commodity market and make money. Quite without anti-competitive intent, larger networks which can offer end-to-end QoS will have a distinct advantage in the lucrative business communications market. Small network operators would remain free to continue to experiment with their own networks, and perhaps continue to collaborate with others on a consensus basis. However, the ethos of non-commercial experimentation, which was in many ways a carryover from the sheltered, government-funded ARPANET and NSFNET eras, cannot be expected to rule in the marketing departments of the huge American public companies whose fortunes are becoming ever more tied to "The Internet." Perhaps this is what Kevin Werbach means when he says "so far we've been lucky."

A popular misconception is the idea that "Internet2" is coming, and that it will be faster and better than the current version. Perhaps conditioned by the consumer marketing practice of adding "2" to an existing product to denote its next generation in the market, many members of the public believe that Internet2 will soon be available "from the makers of Internet1." The Next Generation Internet and Internet2,255 and their Canadian cousins CA*net II and CA*net 3,256 are testbeds for future internetworks, but they are not part of the Internet today, nor will they soon be added as premium commercial services. Much like ARPANET and, to a lesser extent, NSFNET, these experimental networks are much faster than anything else around, but also closed to outsiders. Users must be affiliated with either a

255 See and , respectively.

256 See , and <http://www.canarie.ca/frames/startnetworks_e.html>, respectively. CA*net 3 is the world's first national all-optical, all-IP network. research institution or one of the private companies contributing to the networks' development, to gain access. While these closed networks likely have e-mail gateways to allow mail traffic in and out, the "sites" on these networks are not reachable from the outside. Even more so than AOL, these advanced broadband networks are not the Internet, and, contrary to public perception, are not coming to replace Internet 1.

Not only does universal interconnection not appear to be a paramount network principle for AOL, as evidenced by its position on instant messaging, but it is not even essential to the scientific community which needs the speed and capacity which the next-generation internetworks offer. It is simply not justified to assume that universal interconnection will always be a value held by the operators of the various networks which comprise the Internet. While some writers believe that the technical characteristics of the Internet make it uncontrollable and therefore immutable, they do not account for the possibility of significant change driven by the massive corporate entities who have tied their futures to the network. Post does, for his part, recognize the possibility of control, both as a product of international legal harmonization and the influence of software (or "code") features:

... having emerged from decentralized disorder - from the primordial ooze of the Internet Engineering Task Force - cyberspace has created conditions that favor the growth of powerful centralizing forces.257

The significance of the Internet's single technical infrastructure, as opposed to its diverse physical infrastructure and content, is captured in this statement. The very nature of networks is that they are locally diverse, yet globally unified. To participate in a network, one has to adhere to the same rules, refer to the same authoritative resources, and agree to make one's infrastructure available for the use of other participants, in exchange for the same privilege elsewhere. The common technical infrastructure of the Internet facilitates the remarkable interconnectivity which defines the Internet, but in the face of derogation from that universality, we must recognize universality as a policy principle, not just a technical feature.

257 Supra note 254. References to the open protocols and public domain software which define the Internet can be linked to the value placed on non-proprietary standards and protocols. The fact that the basic underlying protocols of the Internet are publicly available accounts to a large degree for the incredible accessibility of the Internet at almost all levels. The simple transport protocols allow almost unlimited freedom of design at the higher protocol layers, particularly at the application layer. Perhaps the most significant testament to the power of open lower-layer protocols is the fact that the World Wide Web, the application which is now driving Internet development, was simply layered on top of the existing network.
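The layering point can be made concrete: an HTTP request is nothing more than plain text handed to the transport layer, which carries it without knowing or caring that it is "the Web." In the sketch below, an in-process socket pair stands in for a real TCP connection; the transport needed no modification to carry a new application protocol.

```python
# Sketch of protocol layering: the application layer (HTTP) is ordinary
# text handed to the transport layer, which just moves bytes.

import socket

client, server = socket.socketpair()   # stands in for a TCP connection

# Application layer: compose an HTTP request as plain text...
request = b"GET /index.html HTTP/1.0\r\nHost: example.org\r\n\r\n"
client.sendall(request)                # ...and hand it to the transport

# The transport delivers raw bytes; only the application interprets
# them as an HTTP request line.
raw = server.recv(4096)
method, path, version = raw.split(b"\r\n")[0].split()
print(method.decode(), path.decode())  # GET /index.html
```

Because the lower layers impose no meaning on the bytes, anyone could (and Tim Berners-Lee did) deploy a new application protocol without asking permission of any network operator.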

It has to be remembered that until 1968 in the United States and 1982 in Canada, the incumbent telephone companies could prohibit the connection of equipment which they did not make or approve (referred to as "foreign attachments") to their networks.258 They had complete control over both their networks and all interfaces with them. The Internet thus presents a starkly opposite model. No one in particular controls the connection of hardware or software to it, beyond the local access provider's rules for use of its local network. No one speaks for the Internet, and changes to its open protocols are developed collaboratively and merely "suggested" as standards. Individual entities are free to experiment with protocols, but much like Tinkerbell, as Gillett & Kapor say, they only really work if everybody believes in them.

Lawrence Lessig has likened the Internet's non-proprietary protocol model to that of the Open Source Software community, which features collaborative public development of software over the Internet. This movement created the Linux operating system, the same product which the "Halloween" document suggests that Microsoft sought to neutralize, along with the IETF. As Lessig put it in a February, 1999 address:

...most of the Internet is open source in just this sense, and certainly most of the Internet that most of us have anything to do with is open source in just this sense. And most of the growth in the reach of the Internet has come from its being open source in just this sense. Think about this for a second. There have been electronic networks since the late part of the last century. There have been computer

258 See Carterphone (1968), 13 FCC (2d) 410 and Attachment of Subscriber-Provided Equipment, Telecom Decision CRTC 82-14, November 23, 1982. networks for the past 30 years. But these networks were primarily proprietary. These networks were built on the ideal that code, and protocols, be kept private. And so private they were. Networks clunked along at a tiny growth rate. Costs were high. Participation, low. But the Internet was built on a different model. It is no accident that it came from the research community of major universities. And no accident that it was pushed onto these communities by the demand of government.259 Once forced, researchers began to build for the Internet a set of protocols that would govern it. These protocols were public - they were placed in a commons, and none claimed ownership over their source. Anyone was free to participate in the bodies that promoted these commons codes. And many people did.260 (emphasis added)

The challenge, of course, is to maintain that type of community spirit in a commercial environment. If these implicit norms, that commonly-used software should be free and accessible, like the BIND suite for DNS, are overpowered by profit-minded behavior, the benefits of the largely non-proprietary technical infrastructure of the Internet could be threatened. As Mark Lemley has cautioned:

... it is not too hard to imagine a future in which the protocol - or the wires, or the implementing software - is proprietary. A norm of "openness" on the Net may not turn out to mean very much if access to the Net is itself a function of whose software you buy.261

Lemley could have added "or where you get your Internet access." We now turn to the expression of the non-proprietary principle in networks generally.

259 Lessig is presumably referring here to the desire among U.S. federal funding agencies that American universities be able to share the large computers which the government had funded at various sites, as an alternative to funding more computers at more sites. See Hafner & Lyon, supra note 16 at 41-42.

260 L. Lessig, "Open Code and Open Societies: Values of Internet Governance" (Draft 2) (1999 Sibley Lecture, University of Georgia, Athens, Georgia, 16 February 1999) at 7-8.

261 M.A. Lemley, "The Law and Economics of Internet Norms" (1999) 73 Chicago-Kent Law Review (forthcoming), draft available at <http://papers.ssrn.com/paper.taf?ABSTRACT_ID=151789>. Closely related to the value of non-proprietary standards and protocols is the openness of the networks which comprise the Internet. The current ISP business model in Canada and the United States is designed to provide complete access to the entire Internet. The first screen which users see when they start up their Internet software is usually a proprietary "portal" page of the ISP, but the subscriber is free to go anywhere from there. Further, no special software is usually needed to connect through a commercial ISP, since most take advantage of the dial-up networking capabilities of Microsoft Windows. Further illustration of non-proprietary networks is best achieved by way of describing the alternative.

AOL is, as we have seen, one of the few profitable companies in the Internet content business. This is partly because AOL also controls access to its network, through a system of local dial-up servers. The instant messaging dispute shows AOL's attitude towards its network generally: it is private property and its customers are its alone. AOL provides exclusive content, and, grudgingly, access to the wider Internet. AOL has been described as "the Internet on training wheels" because it automates and pre-packages many activities which Internet users do on their own and at a variety of different sites. AOL CEO Steve Case is quoted as saying that AOL's "mission statement" is: "[t]o build a global medium as central to people's lives as the telephone or TV - and even more valuable."262 If AOL's goal is to create a closed, proprietary global medium, we need to recognize that this model is fundamentally opposite to what we think of the Internet as being.

The complementary threat is from high-speed cable ISPs such as @Home, a technology and brand licensed to cable companies across Canada and the United States. @Home requires the installation of a proprietary Web browser and offers exclusive content, but also access to the wider Internet. However, through a process known as local caching, the operator is able to choose which content is available at particularly high speeds, even during periods of peak use, when the performance of cable networks suffers. @Home's local operators have been careful to emphasize that they do not block out any Internet content, but the potential to favour affiliated news or sports sites, for instance, must be tempting. It is once again worth noting the reach of Microsoft in this field. Microsoft bought a reported

262 Quoted in J. Zaslow, "Net prophet" USA Weekend (19-21 February 1999) 19. USD$400 million worth of convertible securities in Rogers Communications Inc., a Canadian @Home licensee, in July, 1999, presumably in exchange for the same privileges which it hopes to gain on AT&T's networks in the United States.263
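Local caching of the sort described can be sketched as follows. The operator's cache policy, not the open market, decides which sites stay fast when the shared upstream link is congested; the site names and the policy here are invented for illustration and describe no actual operator.

```python
# Sketch of an operator's local cache. Sites the operator chooses to cache
# are served from inside its own network at full speed; everything else
# must cross the (possibly congested) upstream link.

LOCAL_CACHE = {"affiliated-sports-site": "cached copy"}   # operator's choice

def fetch(site, upstream_congested):
    if site in LOCAL_CACHE:
        return (site, "fast: served from local cache")
    if upstream_congested:
        return (site, "slow: fetched over congested upstream")
    return (site, "normal: fetched over upstream")

for site in ("affiliated-sports-site", "independent-news-site"):
    print(fetch(site, upstream_congested=True))
```

Note that nothing is "blocked out," exactly as the operators insist; the favouritism lies entirely in which sites are admitted to `LOCAL_CACHE`.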

It is important to note that at present both AOL and @Home offer complete Internet access in addition to their own premium content.264 As suggested by the closed scenario, however, there is no guarantee that this state of affairs is permanent. Cable Internet access providers (like any ISP) have both the technical capability to block out certain content and freedom from restrictive broadcasting regulations. Yet AT&T Chairman Michael Armstrong has committed the company to being an "open platform," as indicated by these passages from a June, 1999 Wall Street Journal story:

"We will maintain open systems," Mr. Armstrong said in a keynote address at the National Cable Television Association convention. He said AT&T's cable systems would remain open in terms of "conduit," which he defined as an open technology architecture served by multiple vendors. "An open platform is the best way to stimulate innovation in cable," added Mr. Armstrong. [...] "We believe our cable customers should be able to access any portals and content they want to reach," Mr. Armstrong said. "But it should be done on the basis of a sound commercial relationship, not through regulation" of the Internet or communications industry at large.265

A number of things may be said about these remarkable comments by AT&T's Armstrong. The first relates to their timing. Much like the Bay Area Open Access Coalition's publicity-seeking attempt to place open access advertisements on TCI's system,

263 See "Microsoft invests $400 million in Rogers Communications" CNET News.com (12 July 1999). In an editorial in the National Post, Terence Corcoran offered this optimistic view of where the alliance might lead: "The Rogers-Microsoft relationship promises to have new set-top cable boxes available within a year. If there is to be a great national broadband network linking every Canadian with a high-speed interactive connection, it is likely to come from cable before it comes from any telephone company, and it is likely to come stamped with a Microsoft brand name and Microsoft software connection rather than a Bell connection." "Let the market control Rogers" National Post (13 July 1999) C7.

264 A point which the President & C.E.O. of @Home licensee Rogers Media Inc. took pains to emphasize in a letter to the editor following a story by Matthew Fraser in the National Post on March 2, 1999. He wrote, in part: "Mr. Fraser implies that Rogers@Home and similar services are blocking access to Web portals such as AOL. This is incorrect. All of the Internet programming services and portals such as AOL are available via Canadian cable Internet access providers - none are blocked out." J.H. Tory, Letter to the Editor, "Internet pandemonium" National Post (15 March 1999) C5.

265 L. Cauley, "AT&T to Shun Exclusive Pacts For Cable TV" Wall Street Journal (15 June 1999) B5. Armstrong's comments came at a time when local franchising authorities all over the United States were deciding whether to impose access-related conditions on AT&T's TCI acquisition. These remarks would have been designed to allay the fears of local legislators that they might otherwise have to resort to regulation to promote the interests of their constituents in modern communications services. It should not be forgotten that only eight months prior, AT&T admitted in FCC filings that an open access requirement on TCI's networks would spell disaster for AT&T's pending acquisition of TCI, and indeed its entire business plan.266

Second, Armstrong's words do not match TCI's deeds. TCI has yet to allow a competitor to offer service to TCI's customers by means of TCI's cable network. Consumer advocacy groups (some of which, it must be noted, are funded by local Bell operating companies) such as No Gatekeepers267 and the Open.Net Coalition268 continue to press for such access, to no avail. Armstrong's open systems rhetoric does, in any event, appear to be carrying the day with local authorities and even the FCC. Chairman William Kennard has said:

I think that when cable modem technology is deployed, the open culture of the Internet should deploy with it. That means that anybody should be able to get access to an unaffiliated ISP over cable modem. And in fact, when I talked to (chief executive) Michael Armstrong at AT&T, he says that he has every intention of making this happen. In fact he gave me a letter, which is in the record of the AT&T (and TCI) merger, where he committed that the consumers will be able to access unaffiliated ISPs with one click on the cable modem.269

We need to carefully compare Armstrong's commitment and Kennard's expectations. Neither particularly serves the public interest. Ironically, Armstrong's statement might come closer, though. Kennard seems to expect merely that AT&T's subscribers will be able to

266 "AT&T: TCI deal threatened" Wired News (16 November 1998).

269 Quoted in "FCC: Keep marketplace competitive," supra note 237. access unaffiliated ISPs over their AT&T modems. Unfortunately, this does nothing for either the unaffiliated ISP or the subscriber. The access to other ISPs which this arrangement provides is essentially the freedom to visit their portals, which are already offered to every Internet user in the world for free. It is difficult to understand what else the statement "consumers will be able to access unaffiliated ISPs with one click on the cable modem" means. Armstrong's words seem to comport with Kennard's - he says: "[w]e believe our cable customers should be able to access any portals and content they want to reach." The only problem is, a portal is not an ISP. A portal is just a Web site.

Non-discriminatory access means physical access, the ability of non-affiliated ISPs to serve consumers by means of AT&T's lines. Kennard's repeated reference to access "over cable modem" is telling. He does not say over cable lines. The freedom to choose one's own CPE is nothing new - at least not in telecommunications. In the cable television world, however, a world which AT&T and Microsoft, two of the world's most profitable corporations, have quite recently decided is where the money is in the "new economy," freedom of CPE is intentionally foiled by cable companies. For Bill Gates to decide that USD$5 billion was a good price to pay for the opportunity to place Windows in AT&T's set-top boxes, he must have been fairly confident that AT&T would not be encouraging freedom of choice in CPE. The irony was not lost on the Wall Street Journal:

The approach is starkly at odds with the desires of one of AT&T's newest, and potentially most influential, suppliers, Microsoft Corp. The giant software company recently invested $5 billion for a small stake in AT&T, with an eye toward making sure it has a significant role in AT&T's digital future. Under terms of the agreement, Microsoft will get a chance to pack millions of digital set-top boxes distributed by AT&T with its Windows CE operating system.270

Some of Armstrong's other characterizations of the commitment at first sound more encouraging. He committed AT&T to "maintain open systems," meaning its network would remain open with respect to conduit - an "open technology architecture served by multiple vendors." What precisely these words mean is open to considerable interpretation, but they would seem to imply that competing ISPs could interconnect with this "open technology architecture." However, the multiple vendors to which Armstrong refers may

270 Supra note 265.

actually be vendors whom AT&T has chosen, not peers, as Armstrong's separate reference to "sound commercial relationships" implies. Finally, his cryptic statement that "[a]n open platform is the best way to stimulate innovation in cable" is essentially meaningless. The Internet is an open platform. Cable networks are not. It is extremely unlikely that AT&T paid USD$46 billion for TCI, and another USD$54 billion for MediaOne,271 simply to open them up to competitors and turn cable TV into an open network and commodity market.

Moving up the hierarchy of the Internet's physical infrastructure, but staying with the same ideas, the non-proprietary value remains equally important. Backbone operators have traditionally been willing to peer with networks which can offer them a roughly equal volume of termination and transiting services in exchange, assuming that the other network features comparable security and reliability. They have also been willing to extend the same service to smaller networks, for a price. Aside from price and quality standards, however, there does not appear to be any attempt to create affiliation systems such as in the broadcasting field. The potential certainly exists for the operators of backbones to "partner" with specific ISPs in local areas to the exclusion of others. If carriers at higher levels in the Internet hierarchy began to approach peering and interconnection with an exclusionary business model such as AOL's, the network value of non-proprietary networks which currently defines the Internet (as opposed to private networks like AOL) may need to be asserted to preserve this aspect of "openness."
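The peering calculus described above can be sketched in a few lines. This is a toy model under invented assumptions: the traffic figures, the 2:1 symmetry threshold, and the function name are illustrative only, not any backbone operator's actual policy.

```python
# Toy model of a backbone operator's peering decision (illustrative only).
# Real peering policies also weigh geography, capacity, and reliability.

def peering_decision(traffic_to_peer_gb, traffic_from_peer_gb, max_ratio=2.0):
    """Offer settlement-free peering when the traffic exchange is roughly
    symmetric; otherwise offer paid transit to the smaller network."""
    larger = max(traffic_to_peer_gb, traffic_from_peer_gb)
    smaller = min(traffic_to_peer_gb, traffic_from_peer_gb)
    if smaller == 0:
        return "transit"
    return "peer" if larger / smaller <= max_ratio else "transit"

print(peering_decision(900, 1100))   # roughly equal volumes -> "peer"
print(peering_decision(100, 1500))   # lopsided exchange -> "transit"
```

The design point is simply that "roughly equal volume of termination and transiting services" is a quantitative, price-and-quality test, not an affiliation test - there is nothing in it that asks who owns the other network.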

5. Unified

The Internet's global unity is another remarkable achievement. All of what we consider to be "The Internet" shares a common name and number space, and networks communicate with each other using resources which depend on these common identifiers. Just as universal interoperability has largely led us to forget about the days, not so long ago, when computers did not speak a common language, the unity which has always defined the Internet makes us think that there is such a thing as "The Internet." Yet the Internet's unity is no more immutable than any other feature of its infrastructure. It depends on a shared commitment to use common, globally-unique identifiers, and common network resources

271 See "AT&T Wins War For MediaOne" Wired News (5 May 1999), <http://www.wired.com/news/news/business/story/19516.html>.

based on them. Closely related is the commitment of operators of physical infrastructure to interconnect with as many other networks as possible, and to terminate and transit other networks' packets, as discussed above. Derogation from these commitments would also seriously undermine the unity of the Internet.

While the principle of unity plays a role in many aspects of network policy, the most significant at the moment is with respect to the root zone. Whether there need be one root zone, or whether many of them can co-exist, are central questions in Internet governance. They capture the core paradox of the Internet: that many different networks, employing many different kinds of physical infrastructure and local technical infrastructures, share one global metanetwork - one common internetworking architecture. The significance of this fact is a source of significant disagreement in the Internet community. Some argue that a unified root is essential, while others insist that there could be any number of roots - in theory, at least.

Karl Auerbach is the most dedicated advocate of multiple roots. Multimedia Network Architect at Cisco, member of the California bar, and operator of his own network, Auerbach is also one of the founders of the Open Root Server Confederacy (ORSC),272 a group of technical professionals whose NewCo proposal was rejected by the NTIA when ICANN was anointed. The ORSC continued to lobby the NTIA in ICANN's early months for changes to its corporate structure, and several of its suggestions were ultimately incorporated. Auerbach knows of what he speaks, and is one of the small group of individuals who commands widespread trust among the Internet technical community.

Auerbach has summarized his views on alternate roots in a paper drafted in the form of the comments which he would have made to the United States House Commerce Committee, had he been asked to present them at the Committee's public hearings into ICANN on July 22, 1999.273 Auerbach writes:

It wasn't that many years ago in the United States when there was one big, monolithic telephone company.

272 .

273 See supra note 63.

It was taken as gospel by many that the stability of the telephone network depended on there being one unified, monolithic telephone company. We've seen through that. Today we have a flourishing competitive telephone system filled with all kinds of commercial and technical offerings that were inconceivable during the days of "Ma Bell". We routinely use directory services in a multiplicity of forms - telephone books published by local telephone companies or entrepreneurs, 411 services in various shapes and forms, web pages, or even on CD-ROMs (indeed a well known Supreme Court case involved a telephone directory published on CD-ROM). These telephone directories are not published by any unified authority; there is no regulatory body sitting over them. And we as consumers are not damaged or harmed by this. And the telephone system continues to work just fine. Yet, on the Internet there are those who wail and gnash their teeth at the thought that the Domain Name System, the Internet's "white pages," might have multiple points of entry. Indeed, the whole series of documents from NTIA - including the Green and White Papers - and the existence of ICANN is founded on the notion that there is but one root system for the Domain Name System. I assert that those naysayers are wrong. I assert that just like the telephone system can have multiple publishers of telephone directory services, the Internet can have multiple roots to the Domain Name System. There is no doubt that as a purely technical matter, the Internet can have multiple root systems for the DNS. It has had these for years. The question is whether to recognize the value and use of multiple root systems and not foreclose them.274

Auerbach correctly chooses the analogy to telephone numbers and directories in telephony, but makes two very significant misstatements. First, he confuses infrastructure with content. Auerbach claims that "just like the telephone system can have multiple publishers of telephone directory services, the Internet can have multiple roots to the Domain Name System." This argument confuses lists of numbers with the numbers themselves. Indeed, there can be and are many different lists of Internet names and numbers. In fact, there happen to be certain lists which are so nearly universally referred to that they have taken on

274 K. Auerbach, "What I would say to the House Commerce Committee were I invited to testify" (17 July 1999), <http://www.cavebear.com/cavebear/growl/issue_2.htm>.

the nature of "official" versions. However, just as in the telephone context there is only one set of numbers from which the listings in any telephone book could be drawn, there is similarly only one set of numbers which comprises the Internet number space. As we shall see below, all telephone companies in a local area must have complete access to all numbers in that area to terminate their customers' calls. Alternate root zones are like alternate sets of numbers, not alternate directories - numbers which are not necessarily reachable (at least not seamlessly) from other local networks. Layered on top of the IP number space are domain names. Again, while there could conceivably be any number of directories of domain names, there is only one set of them which is functional across the entire Internet. These are the names in the legacy or IANA root.
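The distinction between alternate directories and alternate name spaces can be made concrete with a toy resolver. In this sketch (all names, addresses, and zone contents are invented, using documentation address ranges), two root zones that delegate the same TLD differently leave a single domain name pointing at two different hosts - precisely the fragmentation that a second telephone book, drawing on the one shared number space, could never cause.

```python
# Toy DNS roots modelled as nested dicts: root -> TLD zone -> name -> address.
# All names and addresses are invented (RFC 5737 documentation ranges).

legacy_root = {"com": {"example": "192.0.2.1"}}
alternate_root = {"com": {"example": "198.51.100.9"}}  # same TLD, different delegation

def resolve(root, name):
    """Resolve 'label.tld' against whichever root this resolver trusts."""
    label, tld = name.split(".")
    return root.get(tld, {}).get(label)

# Two users whose resolvers trust different roots reach different hosts
# for the identical name -- the name space itself has forked:
print(resolve(legacy_root, "example.com"))     # 192.0.2.1
print(resolve(alternate_root, "example.com"))  # 198.51.100.9
```

A directory, by contrast, is only a listing over the one shared space: a second telephone-book publisher cannot cause a dialled number to reach a different subscriber.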

For better or worse, the Internet is defined by certain name and number spaces. They demarcate the boundary between what is part of the Internet and what is not. For Johnson and Post, this is a natural outcome of the "decentralized, emergent decision-making" which they claim created and sustains the Internet. They enthusiastically suggest that "even the apparently fatal conflict between inconsistent domain name registration systems seems likely to be avoided without top-down controls."275 They explain:

Most users and sysops are interested in accurate routing of messages, and thus they will want to connect to the DNS sources that most other people use. The confusing and inconsistent system hypothesized above [more than one TLD using the same name] is unstable (or, rather, could never arise in the first place) because the most widely used of the two systems would soon attract virtually all of the traffic... Thus, the successful deployment of two incompatible versions of a Top Level Domain, or two widely distributed yet incompatible sets of lookup tables, is about as likely as the simultaneous growth in one country of two languages that have the same words mapped onto different meanings. Because people look to reliable sources for their information, good data drives out bad.276 (emphasis added)

Auerbach might be disappointed to hear this from Johnson and Post, who are otherwise thought of as opposing centralized control of the Internet. This passage demonstrates a common misconception among libertarian and other anti-centralized-control writers, that the

275 Supra note 195 at 85.

276 Ibid., at 85-86.

elements of the Internet's technical infrastructure are organic and can easily be replaced. Their argument first assumes that complete identifier and routing systems can be created at any time ("good data drives out bad"), but then cuts them down as doomed to failure (they "could never arise in the first place"). Johnson and Post should perhaps not be so quick to rule out alternative roots, because doing so begs the question of who controls the one root which most people use. They continue:

Network economies, and the creation of order from positive returns to information structure, save the day. Thus, while it is physically possible and currently lawful for system operators to create a mess by pointing their domain name resolving software at multiple incompatible sources, that nightmare scenario probably will not occur and need not be prohibited by legislation of any kind.277

The authors apparently do not consider one private entity leveraging control over the one system which everybody uses as a "nightmare scenario." Theories of decentralized, self-ordering governance are generally unable to deal with powerful individuals or recognize the need for positive collective action where autonomous self-coordination is inadequate.

Auerbach's second misstatement is that there is "no regulatory body sitting over" telephone directories. While the production of telephone directories is largely unregulated in Canada and the United States today, the listings which comprise them are still subject to regulatory oversight. An example is provided by the CRTC's Local Competition Decision.278 As part of its planning for the transition from monopoly to competition in local telephone service, the CRTC asked the industry (both incumbents and prospective competitors) to make proposals on the many matters which would need to be sorted out. A number of these matters related to lists of numbers: directories of subscriber listings (paper lists) and directory assistance databases (electronic lists). The Commission explains:

Currently ILECs [Incumbent Local Exchange Carriers] possess all subscriber listings (name, address and telephone number) which appear in the telephone directory. As CLECs [Competitive Local Exchange Carriers] enter the market, the CLEC, rather than the ILEC, will possess the subscriber information for the customers lost by an ILEC. By the same token, the CLEC will not possess subscriber information on ILEC customers. Thus, the only way in

277 Ibid., at 86.

278 Supra note 121.

which either or both can publish complete directories is if they exchange subscriber listings for their respective customers.279

This was a relatively easy matter for the Commission and the parties to deal with, yet Auerbach's paradigm would likely result in just the kind of essentially worthless incomplete directories to which the Commission refers. At first, it seems intuitive that all Local Exchange Carriers (LECs) would want to share directory information with each other, so that they all can offer comprehensive paper and electronic lists of users of the local telephone network. Any lesser lists would be worthless. Yet might it not be in the incumbents' interests to refuse to share their listings with competitors, and/or refuse to list the numbers of competitors' customers in their own directories? This would act as a strong disincentive to switch away from the ILEC, whose directories would at least be the most comprehensive, and therefore the de facto standard (while ironically at one time they were also the de jure standard).

Whether the ILECs intended to behave this way with respect to directories or not, the CRTC decided that it was in everybody's interest that all carriers share their lists with each other, on commercial terms to be approved by the Commission.280 The need for authoritative, comprehensive directories required such an intervention in the public interest. Tellingly, the Commission did not order that a centralized directory assistance (DA) database be created and administered by a neutral third party, as many parties had suggested. The ILECs were of the view that each CLEC should have to maintain its own DA database. The Commission decided:

The Commission notes Stentor's [the major ILECs] concerns that the management of a national common database would be highly contentious, administratively burdensome and expensive. The Commission considers this matter to be an issue of industry efficiency and is of the view that the public interest does not require that the Commission mandate such a common database at this time.281

279 Ibid., at para. 222.

280 Ibid., at paras. 224, 227.

281 Ibid., at para. 236.

The parallels to the Internet are striking. The Commission had previously ordered that the ILECs give their competitors near-real-time access to their directory databases so that the competitors can provide their own DA service to their customers, concluding that this provided adequate protection for the CLECs and their customers.282 The above-quoted passage refers to a hypothetical shared database that all LECs could contribute to and access in real time. The Commission evidently decided that once the number of subscribers whose information is under the primary control of CLECs grows larger in relation to those of the ILECs, then the industry will recognize its common interest in having a centralized database which they would jointly manage. Ironically, this is a less intrusive measure than that taken by ICANN in attempting to set up the SRS for the gTLDs. ICANN, with the muscle of the NTIA behind it, has mandated that NSI permit its prospective competitors to make changes to the SRS database, as they register new names in the .com, .net, and .org domains. By contrast, the CRTC, as part of its overall facilities-based approach to the transition to local competition, ruled that CLECs will have to make their own directories, which they can fill up by "buying" raw listing files from the ILECs, and in the meantime have "read-only" (or one-way) access only to the ILECs' DA databases.
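The two regimes just contrasted - ICANN's shared-write SRS and the CRTC's read-only directory access - amount to different permission schemes over a single authoritative database. A minimal sketch, with all class, method, registrar, and carrier names invented for illustration:

```python
# Toy contrast of shared-write (SRS-style) vs. read-only (CRTC-style) access
# to one authoritative database. All names here are invented.

class SharedDatabase:
    def __init__(self, writers):
        self._records = {}
        self._writers = set(writers)   # parties permitted to change records

    def register(self, actor, key, value):
        if actor not in self._writers:
            raise PermissionError(f"{actor} has read-only access")
        self._records[key] = value

    def lookup(self, key):             # anyone may read
        return self._records.get(key)

# SRS-style: competing registrars all write into the shared registry.
srs = SharedDatabase(writers={"registrar-a", "registrar-b"})
srs.register("registrar-b", "example.com", "registrant-1")

# CRTC-style: a CLEC may read the ILEC's DA database but not change it.
da = SharedDatabase(writers={"ilec"})
da.register("ilec", "555-0100", "subscriber-1")
print(da.lookup("555-0100"))           # read succeeds
try:
    da.register("clec", "555-0199", "subscriber-2")
except PermissionError as err:
    print(err)                         # write is refused
```

The sketch makes the regulatory asymmetry visible: ICANN mandated entries in the writers set for NSI's competitors, while the CRTC left competitors outside it, granting one-way lookups instead.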

A final point on the parallel between telephone directories and domain name directories which Auerbach suggests is the importance of unity in identifier systems at the top level. As explained earlier, the telephone number space of almost the entire world is unified at the highest level. The numbers are globally unique, logically addressable, and carefully managed to keep them that way. At present the Internet name and number spaces are similarly unified, at least those names and numbers which are considered to be part of the Internet. There is no technical or legal reason for this to be so, only the common interest of network operators that packets be transmittable and terminable as widely as possible. Less-than-comprehensive lists of Internet names and numbers are theoretically possible, but practically of little value. Paul Vixie offers a counter-point to Auerbach's characterization:

[A particular proposed alternative root] ...will destabilize the network and make the whole thing work less well, setting the information revolution back a few years or maybe a full cycle. You people arguing about the right of people everywhere to have whatever domain names they want are missing, and I mean entirely missing,

282 Ibid., at para. 235.

the point. These names need to be distinguishing - slightly meaningful and very unique. There needs to be just one authoritative source of information about any given name, whether it be VIX.COM or "." or MCS.NET. Anything you do to make this less true is like handing machine guns to those knuckle dragging hairless apes I spoke of earlier.283

Again Vixie hits the nail on the head. As one of the most widely-respected advocates of internetworking, and a person who makes a significant personal contribution to the day-to-day operation of the Internet as the manager of BIND and the RBL, Vixie can be taken to have the best interests of the network at heart when he argues in favour of a unified name space. Postel was similarly so concerned with the unity of the name space that he refused to add new TLDs without a broad mandate from the Internet community. Insistence on unity is not a veiled attempt to "subdue" or control the Internet. It is merely a sensible way to operate a network, in order to maintain one of its essential features, that is, that its computers remain "connected together through a physical infrastructure."

The unity of the Internet's technical infrastructure has, to date, been supported by the same kinds of norms and governance forces which have supported the other network values of universal interoperability, universal interconnection, and non-proprietary protocols, standards, and networks. These values must continue to inform the policy of the global public network, regardless of how that network changes in the future. At stake is the public interest in communications.

The concept of the public interest in communications is an old one, loaded with many different meanings in many different contexts, let alone in different jurisdictions. While a full account of the concept of the public interest in communications is beyond the scope of this thesis, this section attempts to tie it at a general level to the governance of the Internet.

283 Supra note 196. Earlier in the same posting Vixie dramatized his role by saying: "People with ready access to accurate information cannot be oppressed. With no counter influence, the average knuckle dragging hairless ape will try to oppress his or her fellows as a matter of course - this is human nature. I am a counter influence. I don't like the raw form of human nature and I am trying to give individuals the tools they need to avoid information oppression by their fellows."

Canada and the United States formerly employed a model of territorial and market monopolies to serve the public interest in the availability of reliable, affordable telephone service. Both nations have now embraced a completely different model, that of competition in all market segments and in all territories (at least in theory). While it is questionable whether this change has resulted in a reduction in the overall quantity of regulation in these markets, it has certainly resulted in a reduction of its scope, in large measure substituting market discipline for detailed tariff-level regulation. Ironically, however, competition has given rise to new issues for regulators which simply could not have been foreseen in the era of monopoly networks, because both the technical and physical infrastructures were under the unified control of incumbent carriers.284

Now that many competing carriers, some operating their own physical infrastructures and some taking advantage of special privileges imposed by regulators, share the telephone system's technical infrastructure, both industry and regulator have been forced to find ways to equitably share that infrastructure. Examples include the assignment of new telephone numbers and area codes, network management, call control, and coordination of "911" emergency service. These matters and many others have occupied regulators in Canada and the United States for several years, requiring intensive investigations and lengthy decisions.285 These efforts can be characterized as attempts to foster the birth of universally interconnected and interoperable shared networks, where previously there were monolithic, unified networks. Issues of access, considered broadly, will continue to occupy regulators and lie at the root of disputes between carriers for many years to come, even after the maturation of competition. Indeed, the vice-chair for telecommunications at the CRTC, David Colville, has remarked that access issues are "the type of issues that can make or break competition" and will take up a considerable amount of the Commission's time in the coming years.286

284 Eli Noam provided strikingly prescient insights into the issues which would arise in the new public network environment in "The Public Telecommunications Network: A Concept in Transition" (1987) 37 Journal of Communication 30.

285 See, for example, CRTC Local Competition Decision, supra note 121, and FCC, First Report and Order, FCC 96-325 (rel. August 8, 1996). State regulators in the United States are even more involved in the "nitty-gritty" of the transition to local competition than the FCC.

286 Quoted in "Controversial Access Issues Will Provide CRTC With Solid Mandate Into The New Millennium" Canadian Communications Network Letter, Vol. 19, No. 26 (23 August 1999) 1 at 5.

Economist Christopher Weare explains the economic benefits of interoperability in his paper, "Organizing Interoperability," presented at the 1995 Telecommunications Policy Research Conference.287 The basic benefit of interoperability stems from "network effects" - the idea that the value of having access to a network of any kind increases with the number of others on it.288 The most valuable network, then, is the one with the greatest possible number of users. This, of course, is the case with both telephone networks and the Internet, and is why users prefer phone numbers and domain names which are as widely reachable as possible. A consequence is that the operator of the most valuable network gains tremendous power, as Weare explains:

When a firm builds a large installed base of its proprietary technology, it gains a self-perpetuating market advantage. Because consumers prefer technologies that offer large networks, they tend to favour the dominant firm's technology, even if it is technically inferior. This advantage increases as the relative size of the network increases [giving the examples of VHS video and Microsoft Windows]. The dominant firm may then maintain incompatibilities with its rivals' products to retain control over its installed base, increase the difficulty of entry, and reduce price competition.289
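The self-perpetuating advantage Weare describes follows directly from the arithmetic of network effects. Using the common Metcalfe-style simplification that a network's value scales with the number of possible connections among its n users, a modest lead in subscribers becomes a disproportionate lead in value (the subscriber counts below are arbitrary):

```python
# Metcalfe-style valuation: value ~ number of possible pairs of users,
# i.e. n * (n - 1) / 2. A simplification, but it captures the dynamic.

def network_value(users):
    return users * (users - 1) // 2

dominant, rival = 1_000_000, 250_000          # a 4x lead in users...
advantage = network_value(dominant) / network_value(rival)
print(round(advantage))                       # ...becomes a ~16x lead in value
```

Because value grows roughly with the square of the user base, consumers rationally gravitate toward the larger network even when its technology is inferior, which is exactly the dynamic Weare identifies.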

Thus interoperability among technologies and their networks often suffers, as a downside of the otherwise positive effects of networks. Weare continues and explains why we regulate access to certain telecommunications networks:

In such cases, policymakers may determine that the public interest requires regulatory intervention to expand the scope of interoperability. For example, the Federal Communications Commission (FCC) has long promoted the benefits of more open and diverse access to the public telephone network.

This is not always easy ...

Nevertheless, AT&T and the RBOCs [Regional Bell Operating Companies] that succeeded it have often resisted efforts to allow

287 Supra note 8.

288 See N. Economides, "The Economics of Networks" (1996) 14 International Journal of Industrial Organization 673, also at: .

289 Supra note 8 at 155.

290 Ibid.

rivals access to important functionalities. Thus, in the 1970s, FCC action was required to promote the interoperability of competitively manufactured terminal equipment; in the 1980s, the FCC promoted the concept of Open Network Architecture (ONA) in an effort to facilitate the interoperability of information services and the public telephone network.290

This is precisely the dynamic that is at play in the cable modem, InterNIC, instant messaging, electronic greeting card, and other emerging access disputes on the Internet. Yet the Internet community appears unwilling to acknowledge that any form of public governance might be necessary to promote interoperability, which is simply one of the practical expressions of the larger concept of access.

Achieving interoperability in the public interest can be a very difficult task. This has been the goal of regulators in the transition to full competition in telephony. It is also the goal of ICANN in its troubled attempt to break NSI's stranglehold on .com. This is not to say that interoperability does not often arise without any public intervention at all. It certainly does as a result of market forces in many industries. However, it rarely arises or survives organically in network-oriented industries, and certainly did not in the context of the Internet, despite widespread claims that it did.

The Internet was subject to strong public governance in its non-commercial era, which flowed into the early commercial era in the form of the implicit norms which guided the individuals who led the Internet technical community. The ideal of internetworking is a fundamentally cooperative, communitarian ideal. It underlies the remarkable global metanetwork which we refer to as the Internet. We have seen the epoch-making significance of the NSF's standardization of the ARPANET on the TCP/IP protocol on January 1, 1983. Christopher Weare referred to another extremely influential public intervention above, the promotion of ONA in an effort to facilitate the interoperability of information services and the public telephone network. While many popular writers and politicians glibly remark that the Internet flourished because it was unregulated, the opposite, in fact, is the case. The Internet flourished, in part, thanks to the favorable regulatory treatment accorded to data services by Canadian and American regulators,292 not to mention the effect of flat-rated local telephone service on the length of time which American users spend on-line, particularly relative to their European counterparts.293 The FCC has now begun to take credit for its contribution to the success of the Internet, which it calls the "unregulation of the Internet."294 Jason Oxman asserts:

The Internet has grown up over this country's telephone lines, a technological development that has made it possible for virtually any American to join the online community. Because of the vast expanse of telephone penetration in this nation, and because of the openness of that network, the Internet has exploded. Every American with a phone line and a computer can be part of the Internet. The phone network has historically been open in two senses: phone customers are permitted to access any Internet service provider of their choosing, and those customers are permitted to attach their own equipment to the phone line, allowing them to use modems to transform their phone lines into their own information superhighways.295 (emphasis added)

Ironically, Oxman restates Chairman Kennard's confusion about the meaning of the open access debate. The ability of phone customers to access any Internet service provider of their choosing masks the monopoly of the telephone company over the "last mile." With the introduction of new last mile technologies, the ability of ISPs to serve customers by means of those lines has had to be established anew. While Canada has mandated such access, based on the speeches and inaction of FCC Chairman William Kennard, it appears at present that the United States will not, at least with respect to cable access. Oxman at least recognizes the impact of the openness of the telephone network on the success of the Internet, even if he

292 See Werbach, supra note 208 at 50.

293 See L. Anania & R.J. Solomon, "Flat - The Minimalist Price," in McKnight & Bailey, supra note 22, 91 at 94ff.

294 See J. Oxman, "The FCC and the Unregulation of the Internet" FCC Office of Plans and Policy Working Paper Series No. 31 (July 1999), <http://www.fcc.gov/Bureaus/OPP/working_papers/oppwp31.pdf>. Another in this series of studies focuses on the narrow question of whether Internet service should be regulated as a cable service in the United States. See B. Esbin, "Internet Over Cable: Defining the Future In Terms of the Past" FCC Office of Plans and Policy Working Paper Series No. 30 (August 1998).

does not explicitly acknowledge that that openness was the product of positive regulation, not "unregulation."296

The Internet has been subject to a unique compound of public governance throughout its existence. The contours of that governance have changed over time, but on balance can be characterized as public. ARPA program managers, university network administrators, volunteer software developers, IETF working group members, and the FCC, to name just a few of the people and institutions which we have discussed herein, all contributed to a governance compound which preserved the key network values of universal interoperability, universal interconnection, non-proprietary protocols, standards, and networks, and unity. These expressions of the "openness" of the Internet all made the Internet profoundly accessible, and this accessibility is what has permitted the remarkable adoption and exploitation of the Internet which is now referred to as the "Internet economy."

The Internet's unprecedented public governance structures have indirectly served the public interest in communications in the Internet context. They achieved and preserved accessibility in a network environment without the need for recourse to intrusive regulation, which is merely a different form of public governance. If the Internet's traditional governance structures cannot survive changes in the Internet's infrastructure driven by its continuing commercialization, we need to ask how the public interest will continue to be served. Unfortunately, a prevailing sense that government of any kind is bad makes it difficult for the Internet community to recognize the fundamentally public nature of the Internet and the need for it to continue to be governed in the public interest. Kennard expressed the dominant orthodoxy in a San Jose Mercury News interview:

If our goal is openness, let's go to the people who are going to deploy these networks and hold them to their word. And do it without basically picking up all of the regulations that we've burdened the telephone companies with for almost a hundred years and dumping it on the broadband cable plant. Because I think that's going in the wrong direction.297

296 Ibid., at 6. For instance, consider these self-contradictory statements: "As the Internet has matured over the last three decades, the Commission has acted in numerous ways to ensure that this incredible network of networks continued to develop unregulated. Equally important, the Commission has also ensured universal access to the ubiquitous telecommunications network on which the Internet relies to reach millions of users across America." The Commission "acting" does not sound like "unregulation" at all. (at 6)

297 Quoted in "FCC: Keep marketplace competitive," supra note 237. Chairman Kennard takes the opportunity to disassociate his agency from what the public perceives to be the evils of telecommunications regulation, yet ignores the essential role of that public governance in fostering the openness of the Internet, a phenomenon which has been recounted by Jason Oxman of the FCC's Office of Plans and Policy. We must not forget about infrastructure, nor assume its continued governance in the public interest. Rather, we need to assess the possible directions of change in the Internet to attempt to determine what kinds of responses may be necessary to ensure that the Internet continues to be governed in the public interest.

VIII. IMPLICATIONS OF THE SCENARIOS FOR INTERNET GOVERNANCE

We will now apply the principles of openness and the public interest to the three scenarios for the Internet's future set out in Section VI. Proceeding on the basis that the Internet is a public network whose infrastructure has always been governed in an essentially public manner, how might its governance structures need to evolve to keep pace with the evolution of the Internet itself?

As indicated in the description of the open scenario, this trend is essentially a continuation of current circumstances and events underway at time of writing. Some of the governance structures described herein are historical. The IANA root is now under the control of ICANN. Indeed, IANA itself has been superseded by ICANN, which employs most of IANA's former employees. While the proponents of alternate roots speak of ICANN as if it could be replaced at a moment's notice, this is simply not accurate. The NTIA had carefully laid the legal groundwork for ICANN to begin to succeed to the United States government's role as the ultimate authority over the DNS, a role which it has held since the ARPANET era. Further, ICANN has begun negotiations with the mostly volunteer operators of the root server system to formalize their relationships. Finally, ICANN has begun the process of introducing competition in the registration of domain names. ICANN's low-profile interim president, Michael Roberts, clearly recognizes that his job is to balance public and private interests:

There's not a historical precedent for the private sector managing a system as complex and worldwide as the domain name system without statutory authority. The government's White Paper (outlining problems and possible solutions involving operation of the Internet) gives a very persuasive argument that it's well worth giving the private sector an opportunity to demonstrate that it can serve both private interests and public interests in the management of the domain name system. And I think that most of the people that have worked so hard over the last year and a half putting ICANN together believe that it's well worth the effort. But there are many critics, both in the United States and outside the U.S., and it will take a period of time for us to determine whether we can realize the goal of the White Paper.298 (emphasis added)

However, ICANN's first major task has proven very difficult. At present, the SRS is still in its early stages of operation and only one accredited registrar, register.com, has begun commercial use of it. NSI, which was originally a co-drafter of Postel's new-IANA plan, now refuses to recognize ICANN's authority over its business, despite the NTIA's direction to do so. NSI refuses to sign ICANN's model Registrar Agreement and continues to assert ownership over the .com database. While many of ICANN's hostile critics would be happy to see ICANN fall apart, it is generally understood that the alternative is direct governmental action, likely in the form of United States intervention, with or without endorsement from other nations. In an August 1999 interview, interim president Roberts gave his view on the alternative:

The default, if ICANN fails and everything falls back into the hands of government, leads to a very problematic situation of various bilateral and multilateral negotiations. It's hard to predict what kind of outcome we'd see. Those who want to be on the pessimist side of this have predicted that if the default were realized the Internet would be Balkanized because that's how nation-states have historically conducted themselves. I think on the other side of that, there's enormous evidence that almost all the developed economies and for that matter most of the developing economies and the leadership in those economies have concluded that their leverage into the 21st century is the Internet. What we have to keep returning to, when people ask "Why are you doing this without getting 200 governments into the act?" is everybody has a stake in making a non-governmental, non-Balkanized situation.299 (emphasis added)

298 Quoted in D.L. Wilson, "Jury is still out on private-sector Net authority" SiliconValley.com (16 August 1999), <http://www.mercurycenter.com/svtech/ne...a081699.htm>.

More enlightened observers in the Internet community recognize that even if a return to the former paradigm of coordination were possible, the community would be much farther away from achieving the nearly universally-shared initial goal of adding new TLDs. In any event, the open scenario employed herein assumes the survival and successful operation of ICANN.

Putting aside ICANN's difficulties, and the enormity of the tasks which it has yet to even begin, we can at least note that ICANN is reasonably aware of its role. ICANN's articles of incorporation describe one of ICANN's purposes as "promoting the global public interest in the operational stability of the Internet."300 This is a remarkable purpose clause for a California non-profit corporation. However, most of ICANN's most vocal critics oppose the attachment of the word "public" to any aspect of the Internet, seeing it as the takeover of the Internet by power-hungry, self-aggrandizing (but indeterminate) bureaucrats. For its part, however, ICANN's non-binding Governmental Advisory Committee (GAC) has not been shy about asserting the public nature of the Internet's technical infrastructure.

At its third meeting, which coincided with ICANN's in Santiago, Chile, the week of August 23, 1999, the GAC reaffirmed its May 1999 resolutions that "the Internet naming system is a public resource and that the management of a TLD Registry must be in the public interest."301 The GAC sent a strong message to NSI and advocates of proprietary TLDs by stating that "the GAC considers that no private intellectual or other property rights inhere to the TLD itself nor accrue to the delegated manager of the TLD as the result of such delegation."302 Finally, the committee members in attendance, described as "representing over 30 national governments, distinct economies as recognized in international fora, and multinational governmental and treaty organizations," reaffirmed that "the delegation of a ccTLD Registry is subject to the ultimate authority of the relevant public authority or

299 Ibid.

300 Supra note 54.

301 ICANN Governmental Advisory Committee, "Communique of the Governmental Advisory Committee" (24 August 1999).

The GAC is not the only international forum in which assertions of the public interest in global networks have been made. At its November 1998 Plenipotentiary Conference in Minneapolis, Minnesota, the ITU passed a resolution relating to "IP-based networks." These excerpts from that resolution, in particular, were met with considerable hostility in the traditional Internet community:

The Plenipotentiary Conference of the International Telecommunication Union (Minneapolis, 1998), considering a) that advances in the global information infrastructure, including the development of Internet Protocol (IP)-based networks and especially the Internet, are an issue of crucial importance to the future, as an important engine for growth in the world economy in the 21st century;

recognizing

a) that IP-based networks have evolved to a widely accessible medium used for global commerce and communication and there is therefore a need to identify the global activities related to IP-based networks with respect to, for example: i) infrastructure, interoperability and standardization; ii) Internet naming and addressing; iii) dissemination of information about IP-based networks and the implications of their development for ITU Member States, particularly the least developed countries;

c) that it is in the public interest that IP-based networks and other telecommunication networks should be able to interoperate so as to provide the quality of service required by users, [...] resolves

1 that ITU shall fully embrace the opportunities for telecommunication development that arise from the growth of IP-based services; 2 that ITU shall clearly identify, for its Member States and Sector Members and for the general public, the range of Internet-related issues that fall within the responsibilities incumbent on the Union under its Constitution; 3 that ITU shall collaborate with other relevant organizations to ensure that growth in IP networking delivers maximum benefits to the global community, and participate as appropriate in any directly related international initiative. [...]304 (emphasis added)

The ITU has since been monitoring Internet governance activities, participating in the GAC, and working with ICANN, the IETF, the W3C, and ETSI (the European Telecommunications Standards Institute) in forming ICANN's Protocol Supporting Organization (PSO) (about which more below).

ICANN still has a long way to go before it is the recognized successor of IANA. The initial directors, who were supposed to serve only a year and then resign when a democratically-elected board was elected, have voted to extend their terms for up to another year,305 mainly because ICANN has not been able to create a membership body to elect their successors. With the expiry of their second terms now set for the same date as that of the NTIA-NSI Cooperative Agreement, September 30, 2000 could be a very important day in the Internet's history.

With respect to the two other substantive areas examined herein, protocols and standards and peering and interconnection, current arrangements would similarly continue in an open scenario. This means that the IETF and W3C would continue to effectively develop

304 Plenipotentiary Conference of the International Telecommunication Union, Resolution COM5/14 (4 November 1998).

305 "RESOLVED, that under in [sic] Article V, Section 1 of the Corporation's bylaws the term of each of the At Large Directors of the Initial Board is extended to expire on the sooner of (i) the seating of the At Large Director's successor selected pursuant to the process referred to in Article V, Section 4(iv) of the Bylaws and (ii) September 30, 2000."

Finally, private peering arrangements would continue to be adequately addressed by private arrangements, which, of course, assumes a market of many suppliers, at all levels. Frieden and Huston, as we have seen, find it hard to believe that this state of affairs can continue, but so long as it does, then this particular aspect of the Internet's physical infrastructure can best be governed by market discipline and relationships. At the consumer access level, the open scenario assumes that broadband services will become widely available in Canada and the United States, and will meaningfully exhibit the "open" characteristics of the Internet. That is, users will be able to contract for Internet access services from the ISP of their choice, and not be restricted to the ISP affiliated with the owner of the broadband lines which run into their homes. This outcome in the United States, of course, seems to depend on AT&T opening its TCI subsidiary's cable networks to its competitors. In Canada, of course, this outcome has been mandated by the CRTC.

On all counts, an open scenario requires the continued strength of the implicit norms which are shared by the many people and institutions which make the Internet work. That is, a widespread commitment to the value of maintaining a common public metanetwork must continue to animate the actions of all players, both in governance roles and competitive roles. Marjory Blumenthal describes the challenge going forward:

An architecture for global networking raises many questions about balance of power among users, system administrators, service providers, and government entities - each of which may wear multiple hats and have multiple interests, and all of which may have to function with or want to share a common infrastructure despite some degree of mutual distrust.306

Even with the carryover effects from the non-commercial and early commercial eras, this balance appears to exist in the commercial era at least to enough of a degree to maintain the status quo. As suggested by disputes like instant messaging and the battle for broadband open access, however, this state of affairs is vulnerable. If the forces which maintain the Internet's infrastructure can adapt to the Internet's evolution and continue to effectively promote the network principles of universal interoperability and interconnection, non-proprietary protocols and networks, and unity, then these efforts should be supported. However, there are persuasive reasons why this is unlikely.

306 M.S. Blumenthal, "Architecture and Expectations: Networks of the World-Unite!", in Institute for Information Studies, supra note 99: 1 at 41.

One year before Jon Postel's death, Internet pioneer David Clark speculated as to what the Internet would do if it lost its benevolent dictator. Clark off-handedly remarked that the options would probably be "something that's much more either convoluted, Machiavellian or public."307 ICANN might be described as a combination of the first and third. The second option, that of finding a new, even stronger dictator, was apparently not pursued. ICANN is now engaged in sorting out the relationships among the many different people and institutions which together make the DNS work. Despite being described as a private-sector solution, ICANN has relied almost entirely on the NTIA to do its "heavy lifting." Even with the NTIA's help, ICANN has still been unable to bring NSI under its control. The implication is that without this strong public help, the Internet community could not likely introduce competition in the registration of the most valuable TLDs. The two years of failure which preceded the Green Paper further suggest that this community would likely continue to have trouble agreeing on a basis for adding new TLDs, let alone actually adding them.

In 1997, Gillett and Kapor aptly characterized the balance between those aspects of the Internet's technical infrastructure which require central coordination and those which "take care of themselves":

Contrary to popular portrayal as total anarchy, the Internet is actually managed. It runs like a decentralized organization, but without any single person or organization filling the manager's role. The system that allows 99 percent of day-to-day operations to be coordinated without a central authority is embedded in the technical design of the Internet. The manager's job - handling the exceptional one percent - is performed by not one but several organizations.308
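Gillett and Kapor's point, that the bulk of day-to-day coordination is embedded in the technical design itself, can be made concrete with a small, purely illustrative sketch. The zone names and tables below are hypothetical and are not drawn from the thesis; the point is only that, in a delegation hierarchy of the DNS kind, ordinary lookups proceed without any central actor, and only the root table, the exceptional "one percent," requires coordinated management:

```python
# Hypothetical sketch: hierarchical delegation in the style of the DNS.
# Each zone answers only for names delegated to it, and resolution walks
# down from the root, so day-to-day lookups need no central actor. Only
# the root table itself requires coordinated management by a single
# authority of the IANA/ICANN kind.

ROOT = {"com": "com-servers", "org": "org-servers"}      # centrally coordinated
ZONES = {
    "com-servers": {"example.com": "example-servers"},   # run independently
    "example-servers": {"www.example.com": "192.0.2.10"},
}

def resolve(name: str) -> str:
    """Walk the delegation chain from the root down to an answer."""
    labels = name.split(".")
    server = ROOT[labels[-1]]            # the only centrally managed step
    # Follow successively longer suffixes of the name down the tree.
    for i in range(len(labels) - 2, -1, -1):
        answer = ZONES[server][".".join(labels[i:])]
        if i == 0:
            return answer                # reached the full name
        server = answer                  # otherwise it is another delegation

print(resolve("www.example.com"))        # -> 192.0.2.10
```

Nothing in the lookup above touches a shared authority except the root table; that table is the narrow point at which the "manager's job" must still be performed.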

307 Supra note 37.

308 Supra note 91 at 3. This statement, of course, assumes that the technical design of the Internet either could not or will not change. It assumes that all participants will continue to value interoperability and interconnection as ends in themselves. Indeed, the authors explicitly recognized as much:

The success of the Internet depends on a shared belief in the importance of interoperability. Erosion of this belief could be the single biggest threat to the Internet's future.309

Our task in assessing the fragmented (and closed) scenarios, then, is to think about what to do if the 99%-1% balance could no longer maintain the virtuous features of today's Internet. Even in the open scenario, as the stakes surrounding the Internet continue to rise, the significance of control over the 1% would become enormous, leading to more and more questions about who holds that control and how they exercise it. On the fragmented scenario, there is assumed to come a point at which decisions must be made, in general, whether to fragment the name and number spaces, routing systems, protocol suites, or interconnection patterns, or whether to do whatever is necessary to maintain them. QoS again provides an example, and puts the next question (to which we will return) starkly: even if these elements are fragmented, who could do anything about it?

We have seen that the implementation of QoS may Balkanize the Internet to some degree due to uneven capabilities of networks to handle the multiple grades of traffic which QoS (and several other similar efforts) are designed to offer. A similar phenomenon is observable in the telephony environment. Christopher Weare has documented the issues relating to the implementation of similar technology in the local telephone system, known as advanced intelligent networks (AIN). These enhancements to digitally-switched telephone networks are what facilitate centralized voice messaging, busy call return, and residential conference calling, among the many other value-added services which digital networks can now offer. The addition of these capabilities, of course, coincides with the transition to local competition, in which many carriers serve a particular local area, not just one. Those carriers need to be able to access these extended functionalities on other carriers' networks, and to make their own interoperate with them. Weare explains some of the issues:

...the manner in which carriers implement AIN will greatly affect the interconnection of competing networks and service providers. Because AIN is not fully standardized, carriers are implementing versions of it based on proprietary elements, raising the possibility that network control functions will not be interoperable across networks. AIN is also likely to have important competitive consequences. For example, independent telecommunications providers' abilities to offer advanced telecommunications services will depend on their access to AIN functionalities.310
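The Balkanization risk raised above, for both QoS and AIN, can be sketched in a few lines. The model below is hypothetical and not drawn from the thesis (the constant is modeled loosely on the DiffServ "EF" code point); it shows only the structural problem: a priority mark holds end-to-end only if every network on the path implements the same scheme, so one non-conforming or proprietary hop collapses the delivered grade to best effort:

```python
# Hypothetical illustration of uneven QoS deployment. A packet's priority
# mark survives only across networks that implement the common scheme; a
# network using a proprietary or absent scheme remaps the mark to best
# effort, so the end-to-end grade collapses to the weakest hop.

BEST_EFFORT = 0
PREMIUM = 46   # modeled loosely on the DiffServ "EF" code point (assumption)

def carry(priority: int, supports_qos: bool) -> int:
    """One network hop: honor the mark only if the scheme is implemented."""
    return priority if supports_qos else BEST_EFFORT

def end_to_end(priority: int, path: list) -> int:
    """Carry a marked packet across a path of cooperating or not networks."""
    for supports_qos in path:
        priority = carry(priority, supports_qos)
    return priority

# All carriers interoperate: the premium grade survives end-to-end.
print(end_to_end(PREMIUM, [True, True, True]))    # -> 46
# One non-conforming transit network: premium service is silently lost.
print(end_to_end(PREMIUM, [True, False, True]))   # -> 0
```

The second call is the fragmented scenario in miniature: no single hop is "broken," yet the network-wide service grade cannot be guaranteed by anyone.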

Once again, we can look at the recent Canadian approach to these problems.

In the Local Competition Decision, the CRTC ordered the interconnection of CCS7 signaling networks among all carriers in a particular local calling area (which was the suggestion of all of the parties).311 CCS7 is an industry-standard implementation of AIN technology. The Commission deferred to a joint regulator-industry committee the negotiation of a minimum set of CCS7 control messages which all carriers must exchange with each other.312 Beyond this standardized protocol suite, the Commission mandated the use of "open" protocols and interfaces wherever possible:

The Commission is of the view that the interconnections required by this Decision must rely on the use of industry standard network interfaces to the greatest extent possible. The implementation and use of non-standard interfaces and reliance on proprietary standards will be permitted only in exceptional circumstances.313

If one were planning an internet, this would be a good general rule. The introduction of new control functionality in the Canadian local telephone industry is governed by at least two forces. First, the common interests of all carriers to be able to integrate their networks with all others to accomplish the seamless termination of calls. Second, just in case one carrier, say, an incumbent, decided not to offer its competitors the tools to be able to offer service on a par with its own, the regulator has ordered a minimum level of integration of all carriers' systems. The goal, of course, is interoperability and interconnection across the entire network.

310 Supra note 8 at 161.

311 Supra note 121 at paras. 35-41.

312 Ibid., at para. 41. The committee was the CRTC Interconnection Steering Committee (CISC). The author has compared CISC and ICANN elsewhere. See C. McTaggart, "In Search of Effective Self-Regulatory Structures: The CRTC Interconnection Steering Committee and the Internet Corporation for Assigned Names and Numbers" [unpublished].

313 Supra note 121 at para. 16.

If not for the prevalence of the shared norms discussed herein among those individuals and entities which collectively make the Internet work, could we expect that all participants would voluntarily and equitably interconnect their QoS (and other network control) systems? We can, of course, hope that they would, since the value of using the same rules and standards as everyone else is one of the basic value propositions which support the Internet. Yet the instant messaging and InterNIC database issues suggest that these incentives are not always enough. We have also seen that much of the Internet legal literature erroneously assumes that the wide-open, competitive nature of content markets also defines infrastructure markets. Where it is not necessary for competitors to cooperate with respect to facilities, this is most often the case. However, there is only one (important) name and number space, one routing system, one common transport protocol, and one of several other elements of technical infrastructure. Contrary to other misconceptions, some of these elements are indeed controlled by single entities. The .com domain is a prime example. The solution to the perceived undesirability of having just one entity in charge of that element has led the NTIA and ICANN to adopt a very interventionist firm-specific approach. However, if it appeared that QoS would indeed be implemented in an uneven manner, chiefly by way of conflicting proprietary protocols, could anybody make such an intervention in the public interest?

One of ICANN's four specific areas of responsibility is "coordinating the assignment of Internet technical parameters as needed to maintain universal connectivity on the Internet." However, looking back at what IANA did in this area, it is likely that these words are limited to more clerical than actual policy-making powers. Postel simply made sure the same number was not assigned to two things; he did not decide who or what was more worthy of receiving a particular number. This power has been translated into ICANN's corporate structure in the form of the Protocol Supporting Organization. The PSO is expected to serve as the protocol policy development "department" of ICANN, and make recommendations to ICANN itself. The duties of the "Protocol Council" (which is to meet once per year) include: (b) Policy Development (i) In the tradition of the Internet, standards development policies and conflict resolution mechanisms should be created and utilized by those institutions most directly involved, without undue interference from centralized bodies, (ii) The ICANN Bylaws vest in the PSO the primary responsibility for developing and recommending substantive policies in the area of protocol parameter assignment.314

These duties certainly elevate what Postel did to a much more formal level, but it is still unlikely that they give the PSO or ICANN the power to dictate the course of protocol development, rather only to coordinate it administratively (still an enormously influential power). That is most likely to be the view of the IETF, which would probably react with hostility to any attempt to install a "king" in this space.
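The "clerical" character of the parameter-assignment function discussed above can be shown in a short, purely illustrative sketch. The class and the registrant names are hypothetical, not drawn from any actual IANA mechanism; the point is that such a registry guarantees only uniqueness, one value per meaning, handed out first come, first served, and embeds no judgment about which applicant is more worthy:

```python
# Hypothetical sketch of a uniqueness-only parameter registry, in the
# spirit of the clerical function described above: it never assigns the
# same number to two things, and it makes no policy judgment about who
# deserves a particular number.

class ParameterRegistry:
    def __init__(self):
        self._assignments = {}   # number -> registrant
        self._next = 1

    def assign(self, holder: str) -> int:
        """First come, first served: hand out the next unused number."""
        number = self._next
        self._next += 1
        self._assignments[number] = holder
        return number

    def lookup(self, number: int) -> str:
        return self._assignments[number]

registry = ParameterRegistry()
a = registry.assign("protocol-foo")   # hypothetical registrants
b = registry.assign("protocol-bar")
assert a != b                         # the same number is never reused
print(a, b)                           # -> 1 2
```

Everything contentious, which proposals deserve a parameter at all, or on what terms, lies outside a mechanism like this; that is precisely the policy vacuum the text goes on to examine.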

The IETF itself would not likely be able to do anything to ensure an even implementation of QoS beyond encouraging it. The IETF has no legal authority over Internet protocols or standards and its vitality depends entirely on industry commitment to its processes. Even if it did, it might not be able to arrive at consensus on such a broad policy issue, affecting the interests of many players, both at the table and not, and with billions of dollars at stake. David Clark provides another important insight into the IETF:

This is a technical body. These are not academics or researchers, they're mostly from corporations at this point, but they are techies, because this group is trying to write technical standards, and in fact when they try to address economics and policy, they usually fall flat on their nose, which is why part of the trouble is the DNS.315

Beyond an institutional inability to balance competing interests and impose choices, Clark suggests that the problem may also lie at the personal level:

The IETF does not like to discuss policy issues, and when it tries, it fails. It's just not - they're a bunch of engineers. They're just not intellectually organized to deal with issues of law or policy or economics.316

314 Section 4(b), ICANN Protocol Supporting Organization (PSO) Memorandum of Understanding (14 July 1999), executed by ICANN, IETF, ITU, W3C, and ETSI, <http://www.icann.org/pso/pso-mou.htm>. Note the ritualistic incantation of "the tradition of the Internet" and distaste for "centralized bodies."

315 Zittrain/Clark Transcript, supra note 37.

We have already noted the predilection among engineers for simply solving problems, and not being constrained by external considerations. This professional discipline must be recognized as a major "invisible" factor in the remarkably interoperable Internet. The goal was to make it work a certain way, and that result was achieved. It is only now that commercial pressures are leading to attempts to introduce control factors into the system, such as QoS protocols.

The IETF is very effective when there is a collective commitment to working out a cooperative solution. When there is not, either the matter would never come before the IETF, or if it already has, then it could probably not be expected to be resolved within the IETF. This phenomenon can be expected to occur with increasing regularity as the stakes in the Internet continue to rise, and if the trends identified in the fragmented scenario continue.

The Internet has traditionally been thought of as something of a frontier, where anyone can get venture capital funding, start up a company, and compete with anybody else on an equal footing. While that may in fact still be true, the frontier imagery is no longer accurate. The giants of the pre-Internet world have taken notice of its commercial potential, and there are even a few giants born of the Internet itself, whose stock prices the market has driven to astronomic levels. With their considerable cash, and inflated stock, respectively, these (predominantly American) corporate giants have made acquisition after acquisition, leading to significant consolidation in many aspects of the Internet industry.

As the number of players decreases, the likelihood of them being able to agree on common standards probably decreases too. As the numbers decrease, each player's stake gets higher. More importantly, the ability to successfully introduce incompatible products increases because their installed bases can become so large. Once a firm has a large installed base (even if it does not dominate the market), it can maintain incompatibilities with rivals' products to retain control over that installed base and increase the difficulty of entering the market. This is clearly AOL's strategy in instant messaging. While it would have been sensible as a matter of interconnectivity to allow users of other systems to be able to access its AIM system, AOL hopes that the large pool of users "on its side of the fence" will encourage non-AIM members to sign up with AOL and leave their old network behind. With e-mail, however, AOL apparently takes the view that interoperability and interconnectivity are the better strategy. It could try to apply its AIM strategy to e-mail, but its members would likely react very negatively. AOL and Microsoft are already bitter enemies and both owe their considerable success to closed systems and proprietary standards, not open ones. To the extent that these companies or others like them continue to consolidate various aspects of the Internet industry, we can most likely not expect them to cooperate in the interest of interoperability and interconnection for their own sake.

The shortcomings of the IETF and the trend towards consolidation both illustrate the fundamental weakness of consensus-based decision-making bodies: they assume that consensus is possible. Because the IETF works on the basis of "rough consensus," there is a belief that this is how all Internet governance bodies make decisions. This, of course, is not accurate, as Postel made many decisions "on the fly," either on his own or after consulting very small groups of people. ICANN describes itself as "a global, non-profit, consensus-driven organization."317 Not only is the Internet technical community fractious, but it is particularly capable of voicing its displeasure to ICANN because e-mail is the preferred mode of communication among individuals in the field. E-mail messages, of course, can be sent to one thousand people almost as easily as to one. The loud objections found on the mailing lists dedicated to discussing ICANN policy will always give the impression that consensus has not been reached. It presently appears that consensus will be extremely difficult for ICANN to achieve on just about anything.

Indeed, there is not even consensus on whether ICANN is the product of Internet community consensus at all. Elsewhere in his pro forma testimony to the House Commerce Committee, Karl Auerbach said, in part:

ICANN was not created through community consensus nor is there now a consensus that it is the best, or even a good, way of addressing the issues set forth in the NTIA White Paper. How do I know this? I was there. I've participated in the Internet Governance debates for many years and interacted with many of the principals. The simple fact is that during the year of 1998, it became increasingly clear that the IANA proposals that became ICANN were the anointed plan, the only plan that would be accepted by NTIA. Consequently, recognizing the futility of any alternative plan, the proponents of those alternatives grudgingly acquiesced to the IANA plan and attempted to mitigate the worst excesses of the IANA plan. The point to be noted is that no alternative to the IANA plan ever had a chance of success. The IANA plan was created by insiders and was clearly given not only the inside track but was also given the checkered flag well before the race had even begun.318

317 ICANN, "Background" (July 1999).

One might respond that given the many previous unsuccessful attempts, the NTIA felt that a winner had to be picked, because the amorphous Internet community would never have come to a consensus on one. The success of the IETF in achieving consensus on narrow technical matters has misled many people into thinking that the same process can be used with respect to global issues of network policy. Johnson and Post have expressed just this view.319

Unfortunately, there are some matters on which consensus is probably not possible. These are the difficult issues of global network policy. While the United States Department of Defense was able to mandate the standardization of the ARPANET on TCP/IP on January 1, 1983, there simply is no longer one body which could impose that kind of change today. ICANN may not be it either. Whether this is a good thing or a bad thing is, of course, one of the central questions which we still need to consider. We will first consider some of the implications of the closed scenario, which will lead us back to this difficult question.

As described previously, the closed scenario assumes the replacement of the "public Internet" with two or more closed, proprietary networks, which offer greater functionality within, but dramatically decreased interoperability and interconnectivity beyond, their bounds. Networks would differentiate themselves by means of exclusive content and features, instead of all offering the same level of access to the same pool of content. The unified, shared technical infrastructure of the Internet would be replaced or enhanced with proprietary systems or extensions. "Internet Classic" may still exist, but at best would be frozen at present functionality levels, and at worst, not even be entirely available through the premium networks. A worse outcome still, if there were only three major networks (perhaps AOL, @Home, and a well-funded latecomer called Microsoft Zoom), would be if they either did not exchange e-mail amongst themselves, or not with the same degree of functionality to which users are accustomed within their home networks. Much more than restrictions on what Web sites can be accessed, restrictions on one's e-mail reach mark a dramatic derogation from what we take the Internet to be.

318 Supra note 274.

319 See supra note 200.

The closed scenario forces us to consider the potential responses to the outcomes which it suggests, and the legal basis for those responses.

1. Market Forces and Social Goals

The first comment that can be made about the trends suggested by the fragmented and closed scenarios is that future networks will look the way the market wants them to look. Operators will provide whatever they can sell to consumers, and ideally that will match what they need. At least that is the hope. Whether this is always the case is one of the major preoccupations of communications law. The extremely large investment necessary to build a communications network, combined with the social importance of all citizens having affordable access to basic communications services, are two of the main reasons why this industry has been so much more closely overseen by governments than others.

In Canada and the United States we have recently chosen competitive markets over monopoly as the best way to ensure that services are as widely and affordably available as possible. Where a market is contested, market discipline replaces rate regulation. The focus of regulators shifts to maintaining competitive markets, and acting as a referee among the players. Yet in territories and market segments which are not contested, tariff regulation continues. The potential for competition is not considered an adequate substitute for public regulation, only actual competition. Where we do not think that the market will provide a socially desirable level of service, we continue to require the incumbent carrier to do so. This has required the creation of complex subsidy arrangements, but represents our continued commitment to the basic social goals which have always underlain communications regulation in Canada and the United States.

In the future it seems we will have to decide whether these goals remain important enough to have the state continue to intervene to ensure that they are achieved. Canada has already decided this question in the affirmative by mandating meaningful third-party access to broadband networks provided by telephone and cable companies. The FCC's Chairman, William Kennard, at least appears to appreciate what is at stake, if not actually being willing to take the step which the CRTC has:

As we leave the Industrial Age and enter the Information Age, it's clear that despite all the technical advances and globalization, the formula for economic success has remained the same: economic prosperity relies on high-speed access to the critical network of information and commerce. That network is the Internet, and the type of access needed is broadband.320

Just how high-speed access differed from plain old access in the Industrial Age is unclear, but Kennard clearly expresses the belief of many people all over the world: that the Internet is the future. He also asserts that access to it is "the formula for economic success." There is clearly a significant public interest at stake with respect to the Internet, and it is simply the modern incarnation of the same public interest which has always underlain our goals in the governance of communications services. Yet this new phenomenon is inherently global.

Questions of physical access to networks like the telephone and Internet can be effectively dealt with through domestic regulation, as demonstrated by the CRTC's decision with respect to cable access. Issues like it can be expected to continue to arise, as the key issue of access evolves in the coming years. Similarly, access issues higher up in the hierarchy of the Internet's physical infrastructure might also be addressable domestically. While these actions are practically most likely to be taken by public authorities in the United States, backbone access issues could arise at any level in the hierarchy and therefore in almost any country.

320 See supra note 238.

Yet when one appreciates the elements of the Internet's technical infrastructure, it is clear that they are almost by definition global in scope. Because the Internet is simultaneously everywhere and nowhere, its number spaces, routing tables and protocols do not physically reside anywhere. Entities which manage them are certainly physically located somewhere on earth, but because so many elements are, as Anthony Rutkowski has indicated, "under the collective control of the millions of organizations and individuals who make those resources available," they require a global frame of reference. If we observe failures in physical access markets, or abuse of dominance in other markets, we can generally deal with them on a national basis. But what of those aspects of the network which are by their nature global?
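The sense in which the name space, in particular, is a single global resource can be made concrete. The sketch below is not a real resolver; it merely illustrates that every DNS lookup, wherever on earth it originates, is conceptually anchored in the same single root zone, whose delegations then lead down through the hierarchy. The function name and output format are my own illustration, not part of any standard:

```python
# Illustrative sketch only: the chain of zones a resolver conceptually
# consults, starting from the one global root zone ("."). This is why the
# root, unlike any physical facility, is an inherently global resource.
def delegation_chain(name):
    """Return the zones consulted, root first, when resolving `name`."""
    labels = name.rstrip(".").split(".")
    chain = ["."]                    # every lookup begins at the single root
    suffix = ""
    for label in reversed(labels):   # walk from the TLD down to the full name
        suffix = label + "." + suffix if suffix else label + "."
        chain.append(suffix)
    return chain

print(delegation_chain("www.example.com"))
# → ['.', 'com.', 'example.com.', 'www.example.com.']
```

Whoever controls the contents of the first entry in that chain controls, by delegation, the reachability of every name beneath it; this is the technical fact underlying the "enormous power of the legacy root" discussed below.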

2. The Role of Competition Law

At this point we need to consider the potential role of competition law. As mentioned, generally, if a particular firm can be shown to have abused a dominant position in a market, or if it intentionally denies a competitor access to an "essential facility" needed to compete, then competition law remedies may be available. This process has been put into motion several times to investigate Microsoft's dominance over the personal computer architecture, and not only in the United States. In general, competition law can be an effective response to market conduct which is not in the public interest, and therefore will certainly continue to function in this role whether the Internet evolves in a way that is more like one or another of the open, fragmented, or closed scenarios.

Yet there are at least three reasons why competition law may not be sufficient. First, competition law rules are often triggered only late in a market's evolution, at a point when one or more firms already possess enough market power to potentially threaten the public interest. Closely related is the fact that competition law is reactive, not proactive. It cannot positively advance the public interest, but rather only try to impose a remedy after the fact, a highly imperfect process which rarely seems to please anybody involved. Finally, competition law is a national matter. Competition authorities in multiple jurisdictions might coordinate their actions, but differences in approach and interests in different countries make this rare. Most technical infrastructure issues will be global in nature. The DNS is only the first example. The NTIA acknowledged this in the White Paper:

The U.S. Government believes that the Internet is a global medium and that its technical management should fully reflect the global diversity of Internet users. We recognize the need for and fully support mechanisms that would ensure international input into the management of the domain name system.321

National competition law regimes cannot provide the degree of robust proactive oversight of the Internet's technical infrastructure which would be required in a fragmented or closed scenario.

It is important to note that given the choice of how best to oversee public telecommunications networks, both Canada and the United States chose expert regulators acting proactively and employing competition law principles over competition law on its own. New Zealand unsuccessfully opted for the latter, resulting in the predictable unwillingness of the incumbent to assist in the conversion of its network into a new public network of the type which we now have in Canada and the United States.322 Ironically, the incumbent was ultimately forced to do so under the threat of detailed regulation by the New Zealand government. Where it is possible, specialized regulation has proved preferable to competition law alone in the context of communications networks.

3. Global Network Policy

To the extent that global networks, and particularly their technical infrastructures, are beyond the control of any one nation, the ability of public authorities to implement policies of openness with respect to them is diminished. Even if we agree that it would be ideal if such policies could be imposed with respect to the Internet's technical infrastructure, we have to ask who should do it, and on what authority. This is another of the central issues in Internet governance.

321 Supra note 45 at 18.

322 M. Webb & M. Taylor, "Light-handed Regulation of Telecommunications in New Zealand: Is Generic Competition Law Sufficient?" (1998/99) 1 International Journal of Communications Law & Policy 7.

If the optimal number of [roots] is indeed one, governance of the system itself must in the final analysis be effective at the global level.323

This, of course, has been a goal in the creation of ICANN. Yet the key word is effective. Should technical infrastructure issues arise in the future which require intervention in the public interest, as this thesis has suggested is quite likely, who should or could do the intervening? This question will become particularly relevant on September 30, 2000, when the NTIA-NSI Cooperative Agreement expires and the "privatization" of the DNS is scheduled to be complete. At present, ICANN's authority derives from the explicit backing of the United States government. The hope is clearly that by the final handover date, ICANN will be recognized by all stakeholders as the one global entity with authority over the DNS and other areas of responsibility listed in its articles of incorporation. However, given that the chances of ICANN even surviving until the transition date are not clear at present, the following analysis will consider the ability of any entity or entities to perform Internet governance functions on a global basis, in the public interest.

We have seen that NSI and IANA carried into the commercial era a measure of authority over technical infrastructure derived from their status as United States government contractors. Prior to the commercial era, of course, the Internet's predecessor networks were funded and directed by various agencies of the United States government. This historical fact best explains the claims which the United States and its nationals have made to continued authority in the Internet's modern era. The standards bodies' authority is more in the nature of moral authority. The IETF's devoted following is so sensitive about assertions of authority that they are quick to point out that the IETF is not even incorporated. It is more of a phenomenon, or, as its "attendees" often refer to it, merely a "meeting." What all of these bodies share is charismatic and trusted leadership. Again, Gillett and Kapor have described the phenomenon cleverly: "IANA's authority depends on how close Jon Postel comes to performing this role with the wisdom of King Solomon."324 Now that Postel is gone, and given that other leaders will eventually go too, we need to find a new basis for the authority of those functions which are required in the best interests of the network.

323 Supra note 261 at 31.

324 Supra note 91 at 24.

Walter Baer has contributed a "checklist" of requirements which any global body must meet in order to engage in governance of any kind, and it provides a useful framework with which to consider Internet governance bodies:

(1) clear objectives and authority; (2) the support of major stakeholders; (3) timely decision-making processes; (4) an expert and results-oriented staff; (5) real enforcement powers; and (6) adequate financial resources.325

ICANN's objectives are set out plainly in its articles of incorporation, yet there appears to be significant disagreement about what those objectives should mean and what action they might justify. ICANN is forced to seek two very different types of authority: the support of the Internet community and that of the nations of the world. These clearly place ICANN in very different milieux. On one side is a group of very vocal, very opinionated, and not particularly constructive critics who see ICANN as a threat to the decentralized, "bottom-up" traditions of Internet governance. On the other are the nations of (predominantly) the industrialized world, whose nationals are becoming ever more dependent on the Internet, and concerned about the content which it carries.

ICANN appears to have gained the support of these nations because, after some initial insistence from the European Union that a more public body be created,326 none have publicly objected to ICANN's existence and actions. The existence of the GAC appears to offer its members an adequate outlet for their desire to be involved in Internet governance. To the great consternation of ICANN's critics, though, the GAC's meetings are not open to the public, so the tenor of its discussions can only be gleaned from its occasional communiqués.

325 W.S. Baer, "Will the Global Information Infrastructure Need Transnational (or Any) Governance?" in B. Kahin & E.J. Wilson, III, eds., National Information Infrastructure Initiatives: Vision and Policy Design (Cambridge, MA: MIT Press, 1997) 532 at 548.

326 See Council of the European Union, European Commission, "Internet Governance: Reply of the European Community and its Member States to the US Green Paper" (16 March 1998), <http://www.ispo.cec.be/eif/policy/govreply.html>. Most of the EU's concerns appear to have been addressed in the White Paper, because it endorsed the new-IANA process in July 1998: "Communication from the European Commission to the European Parliament and to the Council: Internet Governance, Management of Internet Names and Addresses, Analysis and Assessment from the European Commission of the United States Department of Commerce White Paper" (29 July 1998), <http://www.ispo.cec.be/eif/dns/com98476.html>.

Garnering the support of major stakeholders seems like a particularly difficult assignment in the context of the Internet. As the single, common communications and "e-commerce" platform for the entire world, the Internet's stakeholders are now very broad. Practically, of course, the major stakeholders are the governments and corporations most closely involved in any given subject-matter. Perhaps the most important group of private-sector stakeholders in the modern Internet are the operators of the most significant portions of the Internet's high-level physical infrastructure, most of whom appear to support ICANN. ICANN is attempting to proceed without the support of the most relevant corporate stakeholder in the domain name business, NSI. Whether ICANN enjoys the support of minor stakeholders is unknown because it still does not have the membership structure which the White Paper required it to have, to ensure that it is truly representative of the global Internet community.

Any Internet governance body would require the support of the thousands of autonomous administrators of smaller networks which comprise the middle and lower levels of the Internet. If these individuals do not recognize that body's authority, they can simply refuse to refer to the resources or abide by the rules which it declares. Of course, as has already been suggested, the practical likelihood of this kind of large-scale "civil disobedience" is slim because of the enormous power of the legacy root, not to mention commercial goodwill. Other roots are technically and legally possible, but may be of such little value as to be a practical impossibility. The nominal power of the network administrators, though, should not be ignored.

The lack of timely decision-making processes is often cited as a weakness of public international organizations, and the opposite as an advantage of private bodies. The authors of the White Paper preferred a private coordinating process because it was more likely to "move rapidly enough to meet the changing needs of the Internet and of Internet users."327 The other side of timely decision-making, of course, is fair decision-making, and this explains why public decision-making processes tend to take a long time. One of Postel's greatest advantages was his speed. As David Clark indicated, obtaining a protocol parameter was often as easy as calling him.328 Speed can be expected to be a trade-off for representativeness in any decision which an Internet governance body might make. Indeed, if ICANN does ever find a suitable structure for mass participation, its actions would probably be slowed even further.

The requirement of an expert and results-oriented staff seems more like a given than a contentious point. However, Baer might be alluding to the high level of technical proficiency which would be required of the staff of any information infrastructure-related body. This is certainly the case with the Canadian and American domestic communications regulatory agencies.

The fifth requirement, that of real enforcement powers, is the key one. Adequate financial resources would again seem a sine qua non, as ICANN found out only six months into its tenure.329 ICANN's greatest weakness may prove to be its lack of real enforcement powers. The historical fact of most of the Internet's infrastructure being located in the United States made that country the natural choice for ICANN's incorporation and operation. Yet the technical infrastructure is global in nature. The solution has been the imposition of a chain of registry and registrar agreements in which the signer acknowledges ICANN's authority and agrees as a contractual matter to abide by its decisions. The implication, of course, as NSI well knows, is that ICANN has no legal power over any entity which has not signed such an agreement. ICANN will likely find it increasingly difficult to effectively manage a global network from California as the expansion and internationalization of that network continues.

327 Supra note 45 at 22.

328 Supra note 37.

329 See supra note 64.

4. The Need For Global Authority

There is clearly a gap between the need for effective public governance of the Internet's infrastructure and a basis for the authority of such activity. ICANN is an attempt at using a private corporation with public purposes and the support of public authorities to accomplish this task. The architect of ICANN's corporate structure has well expressed the unprecedented nature of this paradigm:

ICANN is in some ways an experiment to see if the private sector can provide the kind of infrastructure management in cyberspace that has always been the province of governments in the physical world. It will be interesting to see if it works. For e-commerce to prosper, we should all hope that it does.330

It would, of course, be ideal if ICANN does work, that is, if it could authoritatively manage those elements of the Internet's technical infrastructure which require central coordination, and do it in accordance with the network policy principles enumerated in Section VII.

The Internet community, which over time will become more and more equivalent to the world community, must find a way to conceive of itself as a collective, having a common interest in the maintenance of the single infrastructure which underlies the many uses to which the Internet can be put. Decisions engaging issues of network policy will continue to arise as the Internet matures. However, virulent anti-statism and an inability to consider the public nature of the infrastructure can be expected to hobble that pursuit in the near term.

While the entity or entities entrusted with the governance of the Internet's infrastructure need not be government bodies, they must both behave like government bodies and effectively serve the public interest. Just as important, they must enjoy the support of nations and be accorded effective enforcement powers wherever they may need to act. As a general theme, issues of access to the Internet's technical infrastructure will continue to arise and need to be authoritatively resolved. The resolution of these issues might tend toward what has been labeled "openness," or rather toward fragmentation or Balkanization of the public Internet. This thesis has argued that these decisions should either be made in favour of openness, if they are made by public entities, or made subject to public policies of openness, if they are made by private entities.

330 Supra note 203.

5. Coherent Policy and Control

It might seem inappropriate to advocate centralized policy-making on the Internet when one of the very elements of openness is the absence of such centralized control. The fact that there is no central gatekeeper or authority which determines the level of each participant's access to the network should not be lightly dismissed as a contributing factor to the Internet's unprecedented openness. It could also be that the technical characteristics of the Internet do indeed make it impossible to impose consistent, coherent network policy at a global level. Two things can be said, though.

First, while the principles of network policy put forward in this study are meant to be consistent at the global level, the actual implementation of them would fall to many different governance bodies, just as it always has. The IETF, the CRTC and FCC, and even individual ISPs will all be confronted with decisions relating to the Internet's technical infrastructure as a consequence of its evolution. Each decision should be approached on a consistent, principled basis. That kind of holistic conception of the Internet and network policy simply does not exist today. This suggests the utility of an internationally-recognized set of basic principles for Internet infrastructure governance, which would at least have the positive effect of raising awareness of the import of infrastructure issues, and the surprising frequency with which they arise. However, there are certain functions which do actually require effective centralized coordination and therefore consistent, coherent policy. The root zone, name and number spaces, and root server system are perhaps the most important at present.

Second, the perceived lack of need for centralized policy-making is intimately tied to the perceived lack of control and even impossibility of control over the network. Yet the current trend in protocol development is toward more control. QoS efforts are designed to make the Internet yield more than one class of service. Other efforts, such as diffserv, are designed to introduce control elements into Internet communications so that higher-functionality services can be provided with a greater degree of security and reliability. These are fundamental changes in philosophy in the Internet, and ironically are mirrored in a similarly fundamental change in the philosophy of telephone networks, which are currently being converted to IP networks. Clearly the end result, from an equipment vendor's point of view, will be the same hardware running the same software. Far from replacing or existing in parallel with the telephone system, the Internet seems destined to merge with it.
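The control elements that diffserv introduces can be made concrete with a small sketch. Under the diffserv architecture, six bits of the old IPv4 type-of-service byte are reinterpreted as a Differentiated Services Code Point (DSCP), which routers use to give packets different per-hop treatment. The codepoint values below come from the diffserv standards; the constant and function names are my own illustration, not from any networking library:

```python
# Sketch of diffserv marking: the DSCP occupies the six high-order bits of
# the former IPv4 ToS byte (the remaining two low-order bits carry ECN).
DSCP_BEST_EFFORT = 0   # default per-hop behaviour: the classic one-class Internet
DSCP_EF = 46           # Expedited Forwarding: a premium low-delay, low-loss class

def dscp_to_tos(dscp):
    """Place a 6-bit codepoint into the 8-bit ToS/Traffic Class field."""
    if not 0 <= dscp < 64:
        raise ValueError("DSCP is a 6-bit value")
    return dscp << 2   # shift past the two ECN bits, left as zero here

print(dscp_to_tos(DSCP_EF))           # → 184
print(dscp_to_tos(DSCP_BEST_EFFORT))  # → 0
```

A router configured for Expedited Forwarding schedules packets marked with codepoint 46 ahead of best-effort traffic. The significance for the present argument is that this single field creates exactly the multiple classes of service that the original one-class design lacked.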

In the realm of governance we can imagine a similar convergence, and therefore appreciate the necessity of developing a more mature conception of Internet governance, which can accommodate time-honoured principles of telecommunications law along with Internet-specific principles. This does not mean that the Internet should be regulated like the telephone system, and certainly not on the outdated paradigm of telecommunications regulation which persists in most people's minds. Rather, it means that issues like access and interconnection will have the same significance, and engage the same public interests, in the Internet context as they do in the telephony context.

This is not to say that we need a "global FCC" or CRTC, but rather to suggest that the kinds of issues which these bodies have traditionally dealt with will become more and more global, and therefore may require a global response. As telecommunications networks are converted to TCP/IP and linked as constituent parts of the Internet, access issues in telecommunications will become inherently global. We need to decide how important it is that communications networks display characteristics like openness and accessibility, and recognize that ensuring that they do will shortly be a global matter. Today it seems heavy-handed to create a public international agency to coordinate the Internet, while most nations would be loath to cede control over their domestic communications networks to a single global authority. Yet if telecommunications industry trends do result in the merging of Internet and telephone technologies and networks, it will become more and more necessary to consider merging telecommunications and Internet governance.

The WTO Agreement on Basic Telecommunications331 might at first seem an obvious contradiction to the statement that countries would be loath to cede control over their domestic industries to an international body. However, it is important to note what the WTO agreement does and does not do. Most importantly, of course, it achieves the eventual lifting of barriers to trade in telecommunications services, and to the provision of such services in the territories of the signatories. It accomplishes the export of many of the principles of Canadian and American competitive telecommunications to many other countries around the world. However, increasing market access is a matter of liberalization of telecommunications markets, not deregulation. In fact, the agreement's accompanying Reference Paper332 requires the creation, from scratch in many cases, of independent regulatory bodies to administer a program of introducing competition where there is currently monopoly. Each country will have to go through a process similar to that which produced the CRTC Local Competition Decision and, importantly, resolve the myriad minor issues which the CRTC deferred to CISC.

331 Fourth Protocol to the General Agreement on Trade in Services (30 April 1996), being Annex 1B to the WTO Agreement (15 April 1994), (1994) 33 International Legal Materials 1167. The Fourth Protocol is generally referred to as the Agreement on Basic Telecommunications and is available at , and its Schedules of Commitments and Lists of Exemptions are at <http://www.wto.org/wto/new/gbtoff.htm>.

The Reference Paper also contains very explicit statements of what to Canadians and Americans are some of the most basic principles of modern telecommunications law: that dominant suppliers not be allowed to engage in anti-competitive practices; that entrants be allowed to interconnect with incumbents' networks; and that universal service schemes (which are explicitly acknowledged as not being anti-competitive per se) must be administered in a transparent, non-discriminatory and competitively neutral manner, among others. This is not deregulation, but rather necessitates a substantial imposition of regulation, depending on the degree to which a nation's telecommunications industry is already governed by these and other basic principles.

Thus, while control over telecommunications services is significantly lost by nations, control over the telecommunications infrastructure itself is maintained. This is necessary to ensure successful transitions to competition. Yet this oversight will remain relevant even once the transition to competition is complete (which, of course, is not likely to happen in more than a few of the total markets, services, and territories affected by WTO liberalization) because of the need for common infrastructure to be managed neutrally, in the interests of all stakeholders. Reliance should be placed to the greatest degree possible, of course, on private-sector initiatives to do the managing, but residual oversight must be maintained in the public interest.

332 WTO Negotiating Group on Basic Telecommunications, "Reference Paper on Regulatory Principles" (24 April 1996).

The Internet's infrastructure is subject to a unique and complex compound of governance forces. Some constitute management of the network by its participants. Some are more formal, or centralized, yet still appear very decentralized from a traditional governance point of view. Far from being a chaotic abstraction from a mesh of private networks, the Internet is a public metanetwork, a common internetworking architecture shared by an unprecedented number of users and networks. It is the nature of networks generally to require standardized protocols and single, authoritative lists of resources. While its constituent networks are most certainly a private matter at the local level, as part of the public metanetwork they are part of a fundamentally public network.

That network has exhibited remarkable degrees of openness and accessibility from its beginning. The open nature of the Internet is perhaps the single most important explanation of its success, not the fact that it has been unregulated, as many suggest. Certain positive regulatory policies, such as ONA and flat-rated local calling (in Canada and the United States, at least), have had significant (although not always consciously intended) influence on this success. The ability to simply "get on" the Internet and send and receive messages, access an incredible array of information, or even operate a video-on-demand service, makes the Internet a truly revolutionary development in the history of communications. Yet we must be careful not to be deceived into thinking that its positive characteristics arise as a natural consequence of its physical infrastructure, nor that they are immutable.

The remarkable degree of openness that is the hallmark of the Internet is the result of the unique compound of governance forces which shaped the infrastructure, not the other way around. Computers and computer networks only do what humans tell them to do. The Internet's core technologies were designed in a certain way for certain purposes. The goal of their designers was to create a flexible, versatile, and above all else simple architecture for internetworking. Commercial and even explicit public service concerns were not relevant. The Internet's predecessor networks were born in a unique environment where the engineering goal of designing a networking architecture was not compromised by any external considerations, such as a need to profit from the network or even make it pay for itself. As a consequence, the Internet's technical characteristics, while excellent at facilitating basic communication (a worthy achievement in itself), are not always conducive to making money. This has given rise to a fundamental tension in the modern, commercial Internet: the need to make money in an environment which was not designed for anything other than simple communications. The commercialization of the Internet has thus led, in a growing number of different aspects of the Internet, to a marked derogation from the cardinal principle of openness.

Much of the early writing on the Internet and the law's application to it suffers from one or both of two major shortcomings. First, it assumes the Internet's infrastructure and proceeds to analyze the network solely in terms of the content or transactions available on it. Very rarely is the network treated on its own terms. The somewhat magical nature of the Internet, in the sense that nobody seems to run it or pay for it, makes it easy to forget about what makes it happen: what makes all that content so widely available. While there is a multitude of content and possible applications on the Internet, there is only one underlying infrastructure which ties it all together and presents it as a unified whole. Given this enormously important role, the underlying infrastructure is by far the more interesting story. As Marshall McLuhan recognized before the Internet was invented, the medium is the message.

The second shortcoming of much of the existing literature is that it assumes that the Internet's positive features are determined by its technical characteristics, and further that those technical characteristics make it impossible for the Internet to be controlled in any way. Again, these assumptions are not supportable. The Internet's infrastructure is in fact extremely malleable, lending tremendous influence to the individuals who control its elements. Fortunately, that control has traditionally been exercised by individuals who share a remarkably consistent set of beliefs about how the Internet should work. These implicit norms have guided the decisions which these people make, with the result that the Internet displays certain distinctive characteristics. Just as the functions of computers depend entirely on the way they are programmed by humans, so the Internet's technical characteristics depend entirely on decisions made by humans. So long as those decisions are made by people who share these same implicit norms, the Internet can be expected to continue to display the characteristics of openness and accessibility.

If the guiding imperatives of the humans who make the Internet work were to change, however, then the characteristics of the network could change as well. Now that the Internet is primarily a commercial environment, and only secondarily a communications environment, the imperatives of some network participants differ significantly from those of their predecessors. Microsoft allegedly designed its Web browser so that its competitor's messages would never reach their intended recipients. By contrast, the basic protocols of the Internet were designed to be able to cope with congestion and imperfections in the network, with the goal that eventually every packet gets through. Similarly, AOL designed its instant messaging system to work only with AOL subscribers' computers, even though the value of the service to its users could be expanded greatly by making it interoperable with other systems on the Internet at large. Commercial reasons led AOL to restrict certain communications, even though the Internet is fully capable of carrying them.

These kinds of considerations are being weighed in many different aspects of the modern Internet. Entrepreneurs are hoping to have monopolies over new top-level domains, just as NSI had over .com. NSI itself intends to exclude others from the database of .com domain names and to exploit its enormous commercial value. When NSI was given its monopoly by the NSF in 1993, nobody could have imagined the financial rewards which would come with it. This is partly because at that time, most people around the Internet did not think of its elements as property, or something from which to make money. Names and addresses were considered community resources. Many people continue to feel this way about the Internet's entire technical infrastructure, while others hope to be able to profit from more and more elements of it.

Certain trends in the commercialization of the Internet suggest that derogations from the cardinal principle of openness will continue. If they do, the technical characteristics of the network may change, forcing us to decide whether its governance structures should respond or not. Unfortunately, many in the Internet community are dogmatically opposed to recognizing any public aspect of the Internet, or to asserting any public interest in its operation. This is ironic because the Internet rides on top of a quintessentially public network: the local telephone network. A tendency to focus on individual networks and elements at the "micro" level makes it impossible for some people to conceive of the metanetwork level, the one architecture which all Internet participants share. The maturation of the Internet coincides with intense distrust of and animosity toward government in general, and industry regulation in particular. Nevertheless, significant public interests, as important as or more important than those which we have faced in the past, are at stake.

If we cannot find a way to protect the public nature of the Internet, then we stand to lose many of the benefits which far-sighted communications regulation has provided. Much like the Internet, the telephone system is generally taken for granted. Because it has become such a natural part of our lives, we do not appreciate the value of being connected to telephone networks. While access to information is a key benefit of communications technologies, access to other people is even more valuable. We consider telephone service to be an essential of modern life. The Internet promises to far exceed the telephone in utility, yet we do not treat it the same way.

The issues of universal service and access to information are beyond the scope of this thesis, but intimately tied to the public interest in the development of the Internet. For the Internet to reach its fullest potential as an information and communication tool, it needs to be made as widely accessible as possible and continue to be governed in the public interest. Its traditional governance structures have indirectly served this goal by maintaining the Internet as a profoundly open and accessible environment. We must now decide whether it is appropriate and worth the effort to protect the Internet as a public information and communications environment, or allow it to be dominated by private interests. This is not to suggest public ownership of infrastructure,333 but rather to insist that those elements of the Internet which are shared be governed as far as possible in accordance with the principles of universal interoperability and interconnection, non-proprietary protocols and networks, and unity. The Internet is the world's one communications network, the global public network. The choice is ours whether it will remain so.

333 Although some have argued that Internet connectivity should be considered a public good and provided by publicly-owned utilities. See S.C. Carlson, "A Historical, Economic, and Legal Analysis of Municipal Ownership of the Information Highway" (1999) 25 Rutgers Computer & Technology Law Journal 1.

LIST OF ACRONYMS AND GLOSSARY

Those definitions marked with an asterisk are derived from E. Rony & P. Rony, The Domain Name Handbook: High Stakes and Strategies in Cyberspace (Lawrence, KA: R&D Books, 1998) 597ff.

AIM AOL Instant Messenger

AIN Advanced Intelligent Networks

AOL America Online

ARIN American Registry for Internet Numbers

ARPA United States Department of Defense Advanced Research Projects Agency, renamed DARPA (Defense Advanced Research Projects Agency) ca. 1972. DARPA is the central research and development organization for the U.S. Department of Defense (DoD). DARPA develops innovative and often high-risk technological research ideas and prototype systems for use by the U.S. military.*

ARPANET Advanced Research Projects Agency Network

ASCII American Standard Code for Information Interchange

AUP Acceptable Use Policy

Backbone In a hierarchical network, a backbone is the top-level transmission path into which other transit networks feed.*

BIND Berkeley Internet Name Domain. BIND software, developed by the University of California at Berkeley, implements a DNS server and a resolver library that enables clients to store and retrieve resources or objects and share this information with other resources on the network. The BIND server runs in the background, servicing queries on a well-known network port. Most Internet hosts run BIND.*

Browser Software that lets users look at various types of Internet resources. Browsers can search for documents and obtain them from other computers on the network.*

Cache A local memory file containing elements of Web sites and Internet addresses which can substitute for external searches for the same information.

CCS Common Channel Signaling System, also known as SS7 (Signaling System #7)

ccTLD Country Code Top-Level Domain. A two-character abbreviation for a country according to the standards promulgated by ISO 3166. This alpha code is used as a top-level domain identifier to assist root servers in finding a specific computer address.*

CISC CRTC Interconnection Steering Committee

CLEC Competitive Local Exchange Carrier

Client A computer system employed in networking; also called a host or a server. A workstation requesting the contents of a file from a file server is a client of the file server.*

CNA Canadian Numbering Administration

CNRI Corporation for National Research Initiatives. A non-profit organization dedicated to formulating, planning and carrying out national-level research initiatives on the use of network-based information technology. CNRI was founded in the 1980s by Robert Kahn (co-author, with Vinton Cerf, of the TCP/IP protocol), formerly of the Defense Advanced Research Projects Agency (DARPA). CNRI currently houses the secretariat of the Internet Engineering Task Force.*

CPE Customer Premises Equipment

CRTC Canadian Radio-television and Telecommunications Commission

DA Directory Assistance

Distributed database Several different data repositories linked together seamlessly so that it works for the user as if it were one single database. A prime example on the Internet is the DNS.*

DNS Domain Name System. The DNS is a general-purpose distributed, replicated data query service. The principal use is the lookup of host IP addresses based on host names. The style of host names now used in the Internet is called a "domain name," which offers a means of mapping an easy-to-remember name to an Internet Protocol number.*
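The principal use described above, mapping an easy-to-remember name to an Internet Protocol address, can be illustrated with a minimal sketch in Python (the choice of language and of the host name "localhost" are illustrative assumptions, not part of the thesis):

```python
import socket

# Minimal sketch of a DNS-style lookup: ask the system resolver
# to map a host name onto its IP address. "localhost" is chosen
# because it resolves without consulting a remote name server;
# any registered domain name would be looked up the same way.
ip = socket.gethostbyname("localhost")
print(ip)  # typically "127.0.0.1"
```

Any application that accepts a host name (a browser, an e-mail client) performs essentially this step before packets can be sent.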

Domain name A unique alpha-numeric designation to facilitate reference to the sets of numbers that actually locate a particular computer connected to the global information network: any name representing any record that exists within the DNS.*

Domain name space All DNS host names fit into a name hierarchy, or tree, known as the domain name space.*

Dot The "dot" is a standard Internet protocol used worldwide to indicate the top domain file in the DNS. It is a delimiter which identifies an address path to a particular file on a specific computer.*

EDI Electronic Data Interchange

ETSI European Telecommunications Standards Institute

EU European Union

FAQ Frequently Asked Questions

FCC United States Federal Communications Commission

FTP File Transfer Protocol. The standard rules that govern the transfer of files and programs over the Internet. FTP allows files to be moved from one computer to another over a network, regardless of the types of computers or operating systems involved in the exchange.*

GAC ICANN Governmental Advisory Committee

GII Global Information Infrastructure

gTLD Generic Top-Level Domain. An internationally allocated portion of namespace.*

gTLD-MoU Generic Top-Level Domain Memorandum of Understanding, product of the IAHC.

Header Standard fields of data which function much like mailing labels and identification tags for individual packets.

Host In early ARPANET terminology, a computer that allows users to communicate with other host computers in a network. Individual users communicate by using programs such as e-mail, Telnet, and FTP. More recently, this machine is called either a server or a client.*

Host name The name given to a machine which is the part of the Internet address located immediately left of the "dot."*

HTTP Hypertext Transfer Protocol. The set of rules that govern the transfer of most documents passing over the Internet.

IAB Internet Architecture Board (formerly Internet Activities Board), a technical body that oversees the development of the Internet suite of protocols. The IAB is the coordinating and oversight body for the actions of the Internet Engineering Task Force (IETF) and the Internet Research Task Force (IRTF). In June of 1992, the IAB, IETF, and IRTF were given a new legal home under the aegis of the Internet Society.*

IAHC International Ad Hoc Committee. The IAHC was a non-governmental task force of eleven Internet experts drawn from Internet-related boards. The IAHC proposed the creation of seven new top-level domains to relieve the pressure on the .com top-level domain and end the monopoly control over its administration. It was dissolved on May 1, 1997 after the signing ceremony of the gTLD-MoU.*

IANA Internet Assigned Numbers Authority. IANA is a government-funded authority that assigns and distributes international domain names and IP numbers or Internet addresses and oversees the Internet software protocols of the officially-sanctioned root servers. It is the central registry for various Internet protocol parameters, such as port, protocol and enterprise numbers, and options, codes and types. IANA is an Internet service of the High-Performance Computing and Communications (HPCC) Division of the Information Sciences Institute (ISI), part of the University of Southern California's (USC) School of Engineering.*

ICANN Internet Corporation for Assigned Names and Numbers. A non-profit, public benefit California corporation acknowledged by the Department of Commerce in October 1998 to assume the functions of IANA as part of the transfer of Internet administration to the private sector.*

IETF Internet Engineering Task Force. The IETF is the standards-promulgating body of the Internet. It is a major source of proposals for protocol standards which are submitted to the IAB for final approval. The IETF is a large, open community of network designers, operators, vendors, and researchers whose purpose is to coordinate the operation, management and evolution of the Internet, and to resolve short-range and mid-range protocol and architectural issues. Its quarterly meetings are open to anyone who pays the registration fee to attend.*

IFWP International Forum on the White Paper

ILEC Incumbent Local Exchange Carrier

Internet address A 32-bit quantity that uniquely identifies a node on the Internet, i.e., both the network and the specific host that a network application is running on.*
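The relationship between the 32-bit quantity defined above and the familiar dotted-quad notation can be sketched in Python using only the standard library (the particular address is an arbitrary illustration):

```python
import socket
import struct

# An illustrative 32-bit Internet (IPv4) address as a single integer.
addr_int = 0xC0A80001  # 3232235521

# Pack the 32-bit quantity into four network-order bytes, then
# render it in dotted-quad text form.
dotted = socket.inet_ntoa(struct.pack("!I", addr_int))
print(dotted)  # 192.168.0.1

# And back again: dotted-quad text -> 32-bit integer.
back = struct.unpack("!I", socket.inet_aton(dotted))[0]
print(back == addr_int)  # True
```

The same 32 bits thus identify both the network and the specific host; the dotted quad is purely a human-readable rendering.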

InterNIC Internet Network Information Centre

IP Internet Protocol

IP Number Internet Protocol Number, also known as IP Address. Identifies the address of a host or other intelligent device on the Internet.*

IPv4 Internet Protocol version 4

IPv6 Internet Protocol version 6

ISC Internet Software Consortium

ISI Information Sciences Institute, University of Southern California School of Engineering

ISO International Organization for Standardization. ISO is a voluntary, nontreaty, worldwide federation of national standards bodies founded in 1946. It promotes the development of standardization to facilitate the international exchange of goods and services and cooperation in the spheres of intellectual, scientific, technological and economic activity. ISO includes one representative from the national standards organizations of about 100 member countries.*

ISOC Internet Society. A non-profit scientific, educational membership organization incorporated in 1992 in the District of Columbia. ISOC facilitates and supports the technical evolution of the Internet, stimulates interest in and educates the scientific and academic communities, industry and the public about the technology, uses and applications of the Internet, and promotes the development of new applications for the system. ISOC provides a forum for discussion and collaboration in the operation and use of the global Internet infrastructure. The development of Internet technical standards takes place under the auspices of the ISOC with substantial support from the Corporation for National Research Initiatives under a cooperative agreement with the U.S. government.*

ISP Internet Service Provider

ISP/C Internet Service Providers' Consortium

iTLD International Top-Level Domain (equivalent to gTLD)

ITU International Telecommunication Union. A specialized agency of the United Nations based in Geneva, Switzerland. Established in 1865 as the International Telegraph Union; currently works on telecommunications policy with governments and private organizations.*

LAN Local Area Network

LEC Local Exchange Carrier

Listserv A subject-specific automated e-mail system. Users subscribe to a mailing list, or listserv, and then are able to comment on related topics and receive comments and responses from other list subscribers, all by e-mail. Used extensively in the Internet community.*

LNP Local Number Portability

MoU Memorandum of Understanding

MP3 Format for compression and decompression of digital audio signals.

Name resolution The process of mapping a name onto its corresponding address.*

Name server A computer employed to perform name-to-address mapping. This machine is called either a host server or a client.*

NCP Network Control Protocol

NIC Network Information Centre

NII National Information Infrastructure

NSF United States National Science Foundation. A U.S. government agency whose purpose is to promote the advancement of science. NSF funds science researchers, scientific projects, and infrastructure to improve the quality of scientific research, including networking and communications technology.*

NSFNET National Science Foundation Network

NSI Network Solutions, Inc.

NTIA United States Department of Commerce, National Telecommunications and Information Administration. Designated to coordinate U.S. NII initiatives. In the summer of 1997, the NTIA began a public inquiry (NOI) into the registration and administration of domain names in order to transfer Internet administration to the private sector.*

OECD Organization for Economic Cooperation and Development

ONA Open Network Architecture

ORSC Open Root Server Confederacy

OSI Open Systems Interconnection

Packet The unit of data that is routed across the Internet or any other packet-switched computer network. The generic term used to describe units of data at all levels of the protocol stack, but it is most correctly used to describe application data units. DNS packets are composed of five sections: Header, Question, Answer, Authority, and Additional.*

Peering The practice of exchanging traffic among networks of roughly equal size.

Protocol A formal description of message formats and the rules two computers must follow to exchange those messages. Protocols can describe low-level details of machine-to-machine interfaces (e.g., the order in which bits and bytes are sent across a wire) or high-level exchanges between application programs (e.g., the way in which two programs transfer a file across the Internet).*

PSO ICANN Protocol Supporting Organization

PSTN Public Switched Telephone Network

QoS Quality of Service. A set of initiatives and protocols designed to enable the Internet to offer differentiated classes of transport services. Can also refer to the service itself.

RBL Realtime Blackhole List. A continuously-updated spam filter service provided by ISC which network operators employ to keep spam from entering their networks.

Registrar A company that allocates domain names on the Internet under a top-level or second-level domain.*

Registry The entity that administers a top-level domain name on the Internet.*

RFC Request For Comments. RFCs are the "official" documentation series for the technical aspects of the Internet.

RIR Regional Internet Registry

Root server A computer that maintains the root cache (the root servers file), which contains a list of authoritative root servers. The location and name of this file are specified in the boot file, which contains zone names, authorizations, and pointers to zone database files.*

Router A device which forwards traffic between networks. The forwarding decision is based on network layer information and routing tables, often constructed by routing protocols.*

SAIC Science Applications International Corporation. Parent company of NSI.

SLD Second-Level Domain

Spam Bulk, unsolicited e-mail, or junk e-mail.

SRS Shared Registry System provided by NSI at the direction of the NTIA and ICANN.

TCP/IP Transmission Control Protocol/Internet Protocol

TLD Top-Level Domain

URL Uniform Resource Locator, or Internet address.

W3C World Wide Web Consortium. Develops industry standards for Web applications like HTTP.

WTO World Trade Organization. Successor to the GATT (General Agreement on Tariffs and Trade)-

WWW World Wide Web

Note: All Internet addresses are current as of September 17, 1999.

BOOKS

Brock, G.W. & Rosston, G.L., eds., The Internet and Telecommunications Policy: Selected Papers from the 1995 Telecommunications Policy Research Conference (Mahwah, NJ: Lawrence Erlbaum Associates, 1996)

Hafner, K. & Lyon, M., Where Wizards Stay Up Late: The Origins of the Internet (New York: Simon & Schuster, 1996)

Institute for Information Studies, The Promise of Global Networks, Annual Review of the Institute for Information Studies (Queenstown, MD: The Aspen Institute, 1999)

Kahin, B., ed., Building the Information Infrastructure (New York: McGraw-Hill, 1992)

Kahin, B. & Keller, J.H., eds., Coordinating the Internet (Cambridge, MA: MIT Press, 1997)

Kahin, B. & Keller, J., eds., Public Access to the Internet (Cambridge, MA: MIT Press, 1995)

Kahin, B. & Nesson, C., eds., Borders in Cyberspace: Information Policy and the Global Information Infrastructure (Cambridge, MA: MIT Press, 1997)

Kahin, B. & Wilson, E.J., III, eds., National Information Infrastructure Initiatives: Vision and Policy Design (Cambridge, MA: MIT Press, 1997)

Mansell, R. & Silverstone, R., eds., Communication By Design: The Politics of Information and Communication Technologies (Oxford: Oxford University Press, 1996)

McKnight, L.W. & Bailey, J.P., eds., Internet Economics (Cambridge, MA: MIT Press, 1997)

McLuhan, M., Understanding Media: The Extensions of Man (Toronto: Signet, 1964)

Noam, E. & NiShuilleabhain, A., eds., Private Networks Public Objectives (Amsterdam: Elsevier Science, 1996)

Noam, E.M. & Wolfson, A.J., eds., Globalism and Localism in Telecommunications (Amsterdam: Elsevier Science, 1997)

Randall, N., The Soul of the Internet: Net Gods, Netizens and The Wiring of The World (London: International Thomson Computer Press, 1997)

Rheingold, H., The Virtual Community: Homesteading on the Electronic Frontier (Reading, MA: Addison-Wesley, 1993)

Rony, E. & Rony, P., The Domain Name Handbook: High Stakes and Strategies in Cyberspace (Lawrence, KA: R&D Books, 1998)

Shapiro, A.L., The Control Revolution: How the Internet is Putting Individuals in Charge and Changing the World We Know (New York: PublicAffairs, 1999)

Zacher, M., Governing Global Networks (Cambridge: Cambridge University Press, 1996)

CHAPTERS

Anania, L. & Solomon, R.J., "Flat - The Minimalist Price," in McKnight, L.W. & Bailey, J.P., eds., Internet Economics (Cambridge, MA: MIT Press, 1997) 91

Baer, W.S., "Will the Global Information Infrastructure Need Transnational (or Any) Governance?" in Kahin, B. & Wilson, E.J., III, eds., National Information Infrastructure Initiatives: Vision and Policy Design (Cambridge, MA: MIT Press, 1997) 532

Blumenthal, M.S., "Architecture and Expectations: Networks of the World, Unite!", in Institute for Information Studies, The Promise of Global Networks, Annual Review of the Institute for Information Studies (Queenstown, MD: The Aspen Institute, 1999) 1

Clark, D.D., "Internet Cost Allocation and Pricing," in McKnight, L.W. & Bailey, J.P., eds., Internet Economics (Cambridge, MA: MIT Press, 1997) 215

Gillett, S.E. & Kapor, M., "The Self-Governing Internet: Coordination By Design," in Kahin, B. & Keller, J.H., eds., Coordinating the Internet (Cambridge, MA: MIT Press, 1997) 3

Johnson, D.R. & Post, D.G., "And How Shall the Net Be Governed?: A Meditation on the Relative Virtues of Decentralized, Emergent Law," in Kahin, B. & Keller, J.H., eds., Coordinating the Internet (Cambridge, MA: MIT Press, 1997) 62

Kahin, B. & McConnell, B., "Towards a Public Metanetwork: Interconnection, Leveraging, and Privatization of Government-Funded Networks in the United States," in Noam, E. & NiShuilleabhain, A., eds., Private Networks Public Objectives (Amsterdam: Elsevier Science, 1996) 307

MacKie-Mason, J.K. & Varian, H.R., "Economic FAQs About the Internet" in McKnight, L.W. & Bailey, J.P., eds., Internet Economics (Cambridge, MA: MIT Press, 1997) 27

MacKie-Mason, J.K. & Varian, H.R., "Pricing the Internet" in Kahin, B. & Keller, J., eds., Public Access to the Internet (Cambridge, MA: MIT Press, 1995) 269

Mandelbaum, R. & Mandelbaum, P.A., "The Strategic Future of the Mid-Level Networks," in Kahin, B., ed., Building the Information Infrastructure (New York: McGraw-Hill, 1992) 62

Mansell, R., "Network Governance: Designing New Regimes," in Mansell, R. & Silverstone, R., eds., Communication By Design: The Politics of Information and Communication Technologies (Oxford: Oxford University Press, 1996) 187

Mody, B., "The Internet in the Other Three-Quarters of the World," in Institute for Information Studies, The Promise of Global Networks, Annual Review of the Institute for Information Studies (Queenstown, MD: The Aspen Institute, 1999) 69

Mueller, M., "The User-Driven Network: The Present Extent of Private Networking in the United States," in Noam, E. & NiShuilleabhain, A., eds., Private Networks Public Objectives (Amsterdam: Elsevier Science, 1996) 65

Noam, E.M., "Beyond Liberalization: From the Network of Networks to the System of Systems," in Noam, E. & NiShuilleabhain, A., eds., Private Networks Public Objectives (Amsterdam: Elsevier Science, 1996) 423

Rutkowski, A.M., "A Taxonomy of Networks: Is It Public or Not?", in Noam, E. & NiShuilleabhain, A., eds., Private Networks Public Objectives (Amsterdam: Elsevier Science, 1996) 1

Rutkowski, A.M., "Factors Shaping Internet Self-Governance" in Kahin, B. & Keller, J.H., eds., Coordinating the Internet (Cambridge, MA: MIT Press, 1997) 92

Shaw, R., "Internet Domain Names: Whose Domain is This?," in Kahin, B. & Keller, J.H., eds., Coordinating the Internet (Cambridge, MA: MIT Press, 1997) 107

Weare, C., "Organizing Interoperability: Economic Institutions and the Development of Interoperability," in Brock, G.W. & Rosston, G.L., eds., The Internet and Telecommunications Policy: Selected Papers from the 1995 Telecommunications Policy Research Conference (Mahwah, NJ: Lawrence Erlbaum Associates, 1996) 141

JOURNAL ARTICLES

Albert, G.P., Jr., "Right on the Mark: Defining the Nexus Between Trademarks and Internet Domain Names" (1997) 15 John Marshall Journal of Computer & Information Law 277

Burk, D.L., "A First Look at the Emerging Law of Cybermarks" (1995) 1 Richmond Journal of Law & Technology 1

Carlson, S.C., "A Historical, Economic, and Legal Analysis of Municipal Ownership of the Information Highway" (1999) 25 Rutgers Computer & Technology Law Journal 1

Cerf, V.G. & Kahn, R.E., "A Protocol for Packet-Network Intercommunication" IEEE Transactions on Communications Technology (May 1974) 627

"Developments in the Law - The Law of Cyberspace" (1999) 112 Harvard Law Review 1574

Economides, N., "The Economics of Networks" (1996) 14 International Journal of Industrial Organization 673

Frieden, R., "Without Public Peer: The Potential Regulatory and Universal Service Consequences of Internet Balkanization" (1998) 3 Virginia Journal of Law & Technology 8, <http://vjolt.student.virginia.edu/graphics/vol3/vol3_a_8.html>

Gibbons, L.J., "No Regulation, Government Regulation, or Self-Regulation: Social Enforcement or Social Contracting for Governance in Cyberspace" (1997) 6 Cornell Journal of Law & Public Policy 475

Johnson, D.R. & Post, D.G., "Law and Borders" (1996) 48 Stanford Law Review 1367, also <http://www.temple.edu/lawschool/dpost/Borders.html>

Kahn, R.E., "The Role of Government in the Evolution of the Internet" (1994) Vol. 37, No. 8 Communications of the ACM 15

Lemley, M.A., "The Law and Economics of Internet Norms" (1999) 73 Chicago-Kent Law Review (forthcoming), draft available at

Maher, D.W., "Trademark Law on the Internet-Will it Scale? The Challenge to Develop International Trademark Law" (1997) 16 John Marshall Journal of Computer & Information Law 3

McChesney, R.W., "The Internet and U.S. Communication Policy-Making in Historical and Critical Perspective" (1996) 46 Journal of Communication 98

Noam, E., "The Public Telecommunications Network: A Concept in Transition" (1987) 37 Journal of Communication 30

O'Rourke, M.A., "Fencing Cyberspace: Drawing Borders in a Virtual World" (1998) 82 Minnesota Law Review 609

Perritt, H.H., Jr., "Cyberspace Self-Government: Town Hall Democracy or Rediscovered Royalism?" (1997) 12 Berkeley Technology Law Journal 2

Perritt, H.H., Jr., "Jurisdiction in Cyberspace" (1996) 41 Villanova Law Review 1

Petrazzini, B. & Kibati, M., "The Internet in Developing Countries" (1999) Vol. 42, No. 6 Communications of the ACM 31, <http://www.acm.org/pubs/articles/journals/cacm/1999-42-6/p31-petrazzini/p31-petrazzini.pdf>

Post, D.G., "Anarchy, State, and the Internet: An Essay on Law-Making in Cyberspace" [1995] Journal of Online Law 3, <http://www.law.cornell.edu/jol/post.html>

Post, D.G., "The Unsettled Paradox: The Internet, the State, and the Consent of the Governed" (1998) 5 Indiana Journal of Global Legal Studies 521, also <http://www.temple.edu/lawschool/dpost/Sov.html>

Priest, M., "The Privatization of Regulation: Five Models of Self-regulation" (1997-98) 29 Ottawa Law Review 233

Salbu, S.S., "Who Should Govern the Internet: Monitoring and Supporting a New Frontier" (1998) 11 Harvard Journal of Law & Technology 429

Win, S., "Governing Cyberspace: The Need for an International Solution" (1996-97) 32 Gonzaga Law Review 365

Steele, H.L., Jr., "The Web That Binds Us All: The Future Legal Environment of the Internet" (1997) 19 Houston Journal of International Law 495

Webb, M. & Taylor, M., "Light-handed Regulation of Telecommunications in New Zealand: Is Generic Competition Law Sufficient?" (1998/99) 2 International Journal of Communications Law & Policy 7, <http://www.digital-law.net/IJCLP/2_1999/ijclp_webdoc_7_2_1999.html>

Weiswasser, G., "Domain Names, The Internet, and Trademarks: Infringement in Cyberspace" (1997) 13 Santa Clara Computer & High Technology Law Journal 137

Wisebrod, D., "Controlling the Uncontrollable: Regulating the Internet" (1995) 4 Media & Communications Law Review 331

Wu, T.S., "Cyberspace Sovereignty? - The Internet and the International System" (1997) 10 Harvard Journal of Law & Technology 647

PRESENTATIONS AND UNPUBLISHED PAPERS

Cukier, K.N., "Peering and Fearing: ISP Interconnection and Regulatory Issues" (1998)

Cukier, K.N., "Rich Man, Poor Man: The Geopolitics of Internet Policy Making" (Internet Society INET'98 Conference, Geneva, Switzerland, 21-24 July 1998)

Huston, G., "Interconnection, Peering, and Settlements" (Internet Society INET'99 Conference, San Jose, California, 23 June 1999), <http://www.isoc.org/inet99/proceedings/1e/1e_1.htm>

Johnson, D.R. & Post, D.G., "The New Civic Virtue of the Net: A Complex Systems Model for the Governance of Cyberspace", <http://www.temple.edu/lawschool/dpost/Newcivicvirtue.html>

Marble, S., "The Impacts of Settlement Issues on Business Evolution in the Internet" (16th Telecommunications Policy Research Conference, Washington, D.C., 5 October 1998), <http://www.si.umich.edu/~prie/tprc/abstracts98/marble.PDF>

Mueller, M., "The 'Governance' Debacle: How the Ideal of Internetworking Got Buried by Politics" (Internet Society INET'98 Conference, Geneva, Switzerland, 21-24 July 1998)

Mueller, M., "Trademarks and Domain Names: Property Rights and Institutional Evolution in Cyberspace" (Telecommunications Policy Research Conference, Arlington, Virginia, October 3, 1998) [unpublished]

Murai, J., Presentation to ICANN Root Server System Advisory Committee, May 26, 1999, Berlin, Germany, slide no. 5

Post, D., "Governing Cyberspace: 'Where is James Madison when you need him?'" (posted 6 June 1999), <http://www.icannwatch.org/archives/essays/930604982.shtml>

Post, D.G., "Of Horses, Black Holes, and Decentralized Law-Making in Cyberspace" (Private Censorship/Perfect Choice: Speech Regulation on the Net, Yale Law School, New Haven, Connecticut, 9-11 April 1999)

Rutkowski, A.M., "Internet Law and Policy: Current state of the art" (Federal Communications Bar Association, Washington, D.C., 9 June 1999)

Shaw, R., "Public Policy Issues," Presentation to ICANN interim board of directors, Brussels, Belgium (25 November 1998)

SPEECHES

Lessig, L., "Governance" (Draft 3.01) (Keynote, Computer Professionals for Social Responsibility (CPSR) Conference on Internet Governance, Massachusetts Institute of Technology, Cambridge, Massachusetts, 10 October 1998)

Lessig, L., "Open Code and Open Societies: Values of Internet Governance" (Draft 2) (1999 Sibley Lecture, University of Georgia, Athens, Georgia, 16 February 1999)

Kennard, W.E., Chairman, United States Federal Communications Commission, "The Road Not Taken: Building a Broadband Future for America" (National Cable Television Association, Chicago, Illinois, 15 June 1999)

REPORTS

Denton, T.M. Consultants, "Netheads versus Bellheads: Research into Emerging Policy Issues in the Development and Deployment of Internet Protocols - Final Report" (Report prepared for Industry Canada) (1999)

Drake, W.J., rapporteur, Toward Sustainable Competition in Global Telecommunications: From Principle to Practice (A Report of the Third Annual Aspen Institute Roundtable on International Telecommunications) (Washington, D.C.: The Aspen Institute, 1999)

Esbin, B., "Internet Over Cable: Defining the Future In Terms of the Past" United States Federal Communications Commission Office of Plans and Policy Working Paper Series No. 30 (August 1998), <http://www.fcc.gov/Bureaus/OPP/working...>

Oxman, J., "The FCC and the Unregulation of the Internet" United States Federal Communications Commission Office of Plans and Policy Working Paper Series No. 31 (July 1999), <http://www.fcc.gov/Bureaus/OPP/workingpapers/oppwp31.txt>

Werbach, K., "Digital Tornado: The Internet and Telecommunications Policy" United States Federal Communications Commission Office of Plans and Policy Working Paper Series No. 29 (March 1997), <http://www.fcc.gov/Bureaus/OPP/working...>

REQUESTS FOR COMMENTS (RFCs)

RFC 920, Postel, J. & Reynolds, J., "Domain Requirements" (October 1984)

RFC 1160, Cerf, V., "The Internet Activities Board" (May 1990)

RFC 1591, Postel, J., "Domain Name System Structure and Delegation" (March 1994)

RFC 1718, IETF Secretariat & Malkin, G., "The Tao of the IETF: A Guide for New Attendees of the Internet Engineering Task Force"

RFC 1958, Carpenter, B., ed., "Architectural Principles of the Internet" (June 1996)

RFC 2010, Manning, B. & Vixie, P., "Operational Criteria for Root Name Servers" (October 1996)

RFC 2026, Bradner, S., "The Internet Standards Process" (Revision 3) (October 1996)

RFC 2468, Cerf, V., "I Remember IANA" (October 1998).

RFC 2664, Plzak, R., Wells, A. & Krol, E., "FYI on Questions and Answers: Answers to Commonly Asked 'New Internet User' Questions" (August 1999)

AGREEMENTS

Generic Top Level Domain Memorandum of Understanding (gTLD-MoU)

Internet Corporation for Assigned Names and Numbers, Registrar License and Agreement

Internet Corporation for Assigned Names and Numbers Protocol Supporting Organization (PSO) Memorandum of Understanding (14 July 1999), executed by ICANN, IETF, ITU, W3C, and ETSI

Letter from J. Beckwith Burr, United States Department of Commerce, to David Graves, Network Solutions, Incorporated [sic] (February 26, 1999)

Memorandum of Understanding Between The U.S. Department of Commerce and Internet Corporation for Assigned Names and Numbers (November 25, 1998)

National Science Foundation Cooperative Agreement No. NCR-9218742, Amendment 4, <http://www.networksolutions.com/nsf/...>

National Science Foundation Cooperative Agreement No. NCR-9218742, Amendment 11

Network Information Services Manager(s) for NSFNET and the NREN: INTERNIC Registration Services, Cooperative Agreement No. NCR-9218742 between National Science Foundation and Network Solutions, Incorporated [sic], dated January 1, 1993, <http://www.networksolutions.com/nsf/agreement/index.html>

CORPORATE DOCUMENTS

Articles of Amendment of the Articles of Incorporation of The American Registry for Internet Numbers, Ltd.

Articles of Incorporation of Internet Corporation For Assigned Names And Numbers (As Revised November 21, 1998)

PERIODICAL ARTICLES

"Controversial Access Issues Will Provide CRTC With Solid Mandate Into The New Millennium" Canadian Communications Network Letter, Vol. 19, No. 26 (23 August 1999) 1

de Jonge, P., "Riding the Wild, Perilous Waters of Amazon.com" The New York Times Magazine (14 March 1999) 36

Jennings, D.M., Landweber, L.H., Fuchs, I.H., Farber, D.J. & Adrion, W.R., "Computer Networking for Scientists," Science 231 (22 February 1986) 943

Kelly, K., "The Roaring Zeros" Wired (September 1999) 151

O'Reilly, T., "The Open Source Revolution" Release 1.0 (November 1998), <http://www.edventure.com/release1/1198.html>

Sims, J., "Privatizing the Domain Name System: The Formation of the Internet Corporation" E-Money, Vol. 1, No. 9 (January 1999) 3

Werbach, K., "The Architecture of Internet 2.0" Release 1.0 (February 1999).

NEWS REPORTS (PAPER)

Cauley, L., "AT&T to Shun Exclusive Pacts For Cable TV" Wall Street Journal (15 June 1999) B8.5

Corcoran, T., "Let the market control Rogers" National Post (13 July 1999) C7

Cukier, K.N., "The Internet Loses Its Head" Wall Street Journal (October 22, 1998) A22

Scoffield, H., "E-commerce expected to explode, OECD says" The Globe and Mail (29 September 1998) B6

Simons, J., "Internet-Address Firm Receives 2 Loans But Says It Still Needs $1 Million More" Wall Street Journal (23 August 1999) B5.5

Tory, J.H., Letter to the Editor, "Internet pandemonium" National Post (15 March 1999) C5

Varian, H.R., "How to Strengthen the Internet's Backbone" Wall Street Journal (8 June 1998) A22

Wingfield, N. & Bank, D., "Microsoft-AOL War Heats Up Over Net Access" Wall Street Journal (5 August 1999) B6.3

Zaslow, J., "Net prophet" USA Weekend (19-21 February 1999) 19

NEWS REPORTS (INTERNET)

"AOL messaging policy might risk cable deals" CNET News.com (27 July 1999)

"AOL Names Andreessen CTO" Wired News (18 February 1999)

"AT&T: TCI deal threatened" Wired News (16 November 1998), <http://www.wired.com/news/news/business/story/16289.html>

"AT&T Wins War For MediaOne" Wired News (5 May 1999)

Bernier, P., "Standards Adoption Key for IP Voice Service" Inter@ctive Week (15 October 1997), <http://www.zdnet.com/zdnn/content/inwk43/158899.html>

Bicknell, C., "Yahoo Gobbles Up GeoCities" Wired News (28 January 1999), <http://www.wired.com/news/news/business/story/17595.html>

Biggs, B.S., "Microsoft's Open Source Fears" TechWeb News (19 November 1998)

Clausing, J., "A Planned Internet Yellow Pages Draws Federal Scrutiny" New York Times CyberTimes (26 July 1999), <http://www.nytimes.com/library/tech/99/07/biztech/articles/26ican.html>

Clausing, J., "Crusader Thwarts Invaders of the E-Mailbox" New York Times CyberTimes (14 December 1998), <http://www.nytimes.com/library/tech/98/12/biztech/articles/14spam.html>

Clausing, J., "Internet Address Company Grilled in Congress" New York Times CyberTimes (23 July 1999), <http://www.nytimes.com/library/tech/99/07/cyber/articles/23domain.html>

Cook, G., "A Shadow Government: Clinton Administration To Establish Public Authority (New IANA Corp.) To Run Internet" The COOK Report on Internet (November 1998 - Extra Edition), <http://www.cookreport.com/sellout.html>

Gillmor, D., "Messaging flap makes Microsoft, AOL instant hypocrites" San Jose Mercury News Silicon Valley.com (26 July 1999), <http://www.mercurycenter.com/svtech/gillmor/docs/dg072799.htm>

Hansell, S., "Positions Harden in Instant-Message Fight" New York Times CyberTimes (28 July 1999), <http://www.nytimes.com/library/tech/99/07/biztech/articles/28mail.html>

Healey, J., ed., "FCC: Keep marketplace competitive" San Jose Mercury News Silicon Valley.com (25 July 1999), <http://www.mercurycenter.com/svtech/news/indepth/docs/qa072699.htm>

Junnarkar, S. & Farmer, M.A., "Microsoft, AT&T in $5 billion pact" CNET News.com (6 May 1999), <http://news.cnet.com/news/0-1004-200-342147.html>

Kaplan, K., "Web Power Struggle Delays New Domain Name System" LATimes.com (24 June 1999), <http://LATimes.com/cgi-bin/sfwebcli?DBLIST=lt99&DOCNUM=53095>

Macavinta, C., "ICANN running out of money" CNET News.com (7 July 1999)

Marsan, C. Duffy, "New Net domain name authority out of cash" Computerworld (15 July 1999), <http://www.computerworld.com/home/news.nsf/all/9907154icann>

Martinez, M.J., "Network Solutions Registers Dissent: What's Up With Domain Name Database?" ABC News.com (27 March 1999), <http://abcnews.go.com/sections/tech/dailynews/...>

McGuire, D., "Coalition Accuses AT&T/TCI of Censorship" Newsbytes (21 July 1999)

McGuire, D., "ICANN Nixed Deal To Bolster NSI Control Of Registry" Newsbytes (23 July 1999)

"Microsoft invests $400 million in Rogers Communications" CNET News.com (12 July 1999)

"Microsoft Speeds Hong Kong Net" Wired News (9 March 1999)

Miles, S., "Microworkz shake-up underscores free PC troubles" CNET News.com (26 August 1999)

"NSI skyrockets on fee collection decision" CNET News.com (April 21, 1999)

Oakes, C., "Companies Decry NetSol Policy" Wired News (18 February 1999), <http://www.wired.com/news/print_version/politics/story/17973.html>

Oakes, C., "MS Wins Patent for Web Standard" Wired News (4 February 1999)

Patrizio, A., "AltaVista Joins Free ISP Brigade" Wired News (12 August 1999), <http://www.wired.com/news/news/business/story/21251.html>

Spangler, T., "Net's Old Guard Shaping New DNS" Internet World (23 February 1998), <http://www.iw.com/print/1998/02/23/news/19980223-guard.html>

Wassermann, E., "Congress Ready to Weigh In on ICANN" The Industry Standard (15 July 1999)

Wassermann, E., "Just Whose InterNIC Is It, Anyway?" The Industry Standard (26 March 1999), <http://www.thestandard.com/articles/display/0,1449,4009,00.html>

Wilson, D.L., "Jury is still out on private-sector Net authority" SiliconValley.com (15 August 1999), <http://www.mercurycenter.com/svtech/news/indepth/docs/qa081699.htm>

Wirbel, L., "Internet Protocol Gets Rules For Good Behavior" TechWeb News (11 May 1998), <http://www.techweb.com/news/story/TWB19980511S0015>

Wolk, M., "AOL Battles Rivals Over Instant Messaging" Excite News (25 July 1999), <http://news.excite.com/news/r/990725/00/tech-microsoft-aol>

NEWS RELEASES

America Online, Inc., News Release, "AOL Surpasses 18 Million Members" (17 August 1999)

Boston Working Group, News Release, "BWG Disputes ICANN's Domain Name Competition Claims" (April 23, 1999), <http://www.domainhandbook.com/pr-bwg.html>

Internet Corporation for Assigned Names and Numbers, "Esther Dyson's Response to Questions," letter from E. Dyson to R. Nader and J. Love, Consumer Project on Technology (15 June 1999)

Internet Corporation for Assigned Names and Numbers, News Release, "ICANN Names Competitive Domain-Name Registrars" (April 21, 1999)

Internet Corporation for Assigned Names and Numbers Governmental Advisory Committee, "Communique of the Governmental Advisory Committee" (24 August 1999), <http://www.noie.gov.au/docs/gacmtg3_communique.htm>

Network Solutions, Inc., News Release, "Network Solutions Announces Record 1999 First Quarter Revenue and Earnings" (April 22, 1999), <http://netsol.com/ir/filings/ER-1Q99.html>

United States Congress House Committee on Commerce, News Release, "Bliley Blasts ICANN Management of Domain Names: Questions Authority To Levy Domain Name Tax" (22 June 1999)

GOVERNMENT DOCUMENTS

Canada

Canada, Senate, Subcommittee on Communications of the Standing Senate Committee on Transport and Communications, Final Report, Wired to Win! Canada's Positioning Within the World's Technological Revolution (May 1999)

European Union

"Communication from the European Commission to the European Parliament and to the Council: Internet Governance, Management of Internet Names and Addresses, Analysis and Assessment from the European Commission of the United States Department of Commerce White Paper" (29 July 1998), <http://www.ispo.cec.be/eif/dns/com98476.html>

Council of the European Union, European Commission, "Internet Governance: Reply of the European Community and its Member States to the US Green Paper" (16 March 1998), <http://www.ispo.cec.be/eif/policy/govreply.html>

United States

The White House, A Framework for Global Electronic Commerce (July 1, 1997)

United States Department of Commerce, A Proposal to Improve Technical Management of Internet Names and Addresses: Discussion Draft 1/30/98 (30 January 1998)

United States Department of Commerce, Management of Internet Names and Addresses (5 June 1998), <http://www.ntia.doc.gov/ntiahome/domainname/6_5_98dns.htm>

United States Federal Trade Commission, "Privacy Online: A Report to Congress" (June 1998), <http://www.ftc.gov/reports/privacy3/toc.htm>

United States Federal Trade Commission, "Self-regulation and Privacy Online: A Report to Congress" (July 1999)

International Agencies

Plenipotentiary Conference of the International Telecommunication Union, Resolution COM5/14 (4 November 1998)

STATUTES

Telecommunications Act (Canada), S.C. 1993, c. 38

TREATY DOCUMENTS

Fourth Protocol to the General Agreement on Trade in Services (30 April 1996), being Annex 1B to the WTO Agreement (15 April 1994), (1994) 33 International Legal Materials 1167

WTO Negotiating Group on Basic Telecommunications, "Reference Paper on Regulatory Principles" (24 April 1996)

CASES

Canada

Bell Canada v. Unitel Communications Inc. (1992), 99 Dominion Law Reports (4th) 533 (Fed. C.A.)

Tele-Direct (Publications) Inc. v. American Business Information, Inc. (1996), 74 Canadian Patent Reports (3d) 72 (Fed. T.D.), affirmed (1997), 76 Canadian Patent Reports (3d) 296 (Fed. C.A.), leave to appeal refused (21 May 1998) (Doc. 26403) (S.C.C.)

United States

Partial Transcript of Proceedings in the Superior Court for the State of California, Santa Clara County Judicial District, Before the Honourable Robert A. Baines, Judge, Blue Mountain Arts [Plaintiff] v. Microsoft and WebTV [Defendant], January 28, 1999

Reno v. American Civil Liberties Union, 117 S. Ct. 2329 (1997)

DECISIONS AND ORDERS OF ADMINISTRATIVE AGENCIES

Canada

Attachment of Subscriber-Provided Equipment, Telecom Decision CRTC 82-14, November 23, 1982

Implementation of Regulatory Framework - Development of Carrier Interfaces and Other Procedures, Telecom Public Notice CRTC 96-28, August 1, 1996

Local Competition, Telecom Decision CRTC 97-8, May 1, 1997

Regulation Under The Telecommunications Act Of Cable Carriers' Access Services, Telecom Decision CRTC 99-8, July 6, 1999

Regulation Under The Telecommunications Act Of Certain Telecommunications Services Offered By "Broadcast Carriers", Telecom Decision CRTC 98-9, July 9, 1998, <http://www.crtc.gc.ca/eng/telecom/decision/1998/d989-0.txt>

United States

Carterfone (1968), 13 FCC (2d) 420

First Report and Order, FCC 96-325 (rel. August 8, 1996)

INTERNET-ONLY MATERIALS

Auerbach, K., "What I would say to the House Commerce Committee were I invited to testify" (17 July 1999), <http://www.cavebear.com/cavebear/growl/issue_2.htm>

Berkman Center for Internet and Society, "The Power of Openness: Why Citizens, Education, Government and Business Should Care About the Coming Revolution in Open Source Code Software" (1999)

Carpenter, B., "What Does the IAB Do, Anyway?"

Director's Message, 44th Meeting of the Internet Engineering Task Force, 15-19 March 1999, Minneapolis, Minnesota

Electronic Privacy Information Center (EPIC), "Surfer Beware II: Notice is Not Enough" (June 1998), <http://www.epic.org/reports/surfer-beware2.html>

Gerstner, L., "A Policy of Restraint," <http://www.ibm.com/thinkmag/...>

Global Internet Project, "The Opportunity and the Challenge to Sustain Rapid Internet Growth: A Policy Architecture for the Internet" (Version 1.0)

Internet Corporation for Assigned Names and Numbers, "Background" (July 1999)

Internet Engineering Task Force Differentiated Services (diffserv) Working Group Charter, <http://www.ietf.org/html.charters/diffserv-charter.html>

Internet Software Consortium, "Internet Domain Survey, July 1999," <http://www.isc.org/dsview.cgi?domainsurvey/WWW-9907/report.html>

Landweber, L. & Internet Society, "International Connectivity Map" (Version 16) (15 June 1997), <ftp://ftp.cs.wisc.edu/connectivity_table/version16.bmp>

Leiner, B.M., Cerf, V.G., Clark, D.D., Kahn, R.E., Kleinrock, L., Lynch, D.C., Postel, J., Roberts, L.G. & Wolff, S., "A Brief History of the Internet" (Version 3.1)

Network Solutions, Inc., "15 Minute Series: What is a Network?" (1996)

Rutkowski, A.M., "US DOD [Internet] Assigned Numbers [Authority], Network Information Centers (NICs), Contractors, and Activities: known detailed history" [sic] (1996)

Science Applications International Corporation, "SAIC Subsidiary Profile: Network Solutions, Inc.," <http://www.saic.com/company/subsidiaries/...>

Semeria, C., Understanding IP Addressing: Everything You Ever Wanted to Know, <http://www.3com.com/nsc/501302s.html>

Singleton, S., "The Internet Needs an Independence Day" (6 July 1999).

Transcript of Dialogue between J. Zittrain and D. Clark, Harvard Law School, Cambridge, Massachusetts (1 October 1997)

World Internetworking Alliance, "Competing Models of Internet DNS Service Governance" (20 September 1997)

OTHER

Bell Canada Terms of Service For Regulated Services (Effective 25 September 1986)

Canadian Cable Television Association (CCTA), "Submission to CRTC In Response to Telecom Decision 98-9, Technical Report on the Status of Implementation of Access for Internet Service Providers" (8 February 1999).

Comments of the Internet Service Providers' Consortium concerning the deployment of advanced telecommunications capability, submitted to U.S. Federal Communications Commission Common Carrier Bureau Docket No. 98-146 (Section 706 Notice of Inquiry proceeding) (14 September 1998), <http://www.ispc.org/policy/filings/fcc19980914.shtml>

Jennings, D., "Foreword" in Randall, N., The Soul of the Internet: Net Gods, Netizens and The Wiring of The World (London: International Thomson Computer Press, 1997) at ix

"OSI Reference Model,"

Rutkowski, A.M., "Comment" ( 1999).

The Oxford English Dictionary (2d), Vol. VI (Oxford: Clarendon Press, 1989)

Vixie, P.A., "Re: bogosity," message posted to newdom (New Domains) discussion list (newdom@vrx.net) (28 October 1996)